Mode choice model parameter estimation
Strnad, Irena
2010-01-01
The present work focuses on parameter estimation for two mode choice models, the multinomial logit and the EVA 2 model, where four different modes and five different trip purposes are taken into account. A mode choice model captures the behavioural aspect of mode choice and enables its application in a traffic model. The model accounts for the trip factors that affect the choice of each mode and for their relative importance to the choice made. When trip factor values are known, it...
PARAMETER ESTIMATION OF ENGINEERING TURBULENCE MODEL
Institute of Scientific and Technical Information of China (English)
钱炜祺; 蔡金狮
2001-01-01
A parameter estimation algorithm is introduced and used to determine the parameters in the standard k-ε two-equation turbulence model (SKE). The estimation results show that although parameter estimation is an effective way to determine model parameters, it is difficult to obtain a single set of SKE parameters that suits all kinds of separated flow, so a modification of the turbulence model structure should be considered. A new nonlinear k-ε two-equation model (NNKE) is therefore put forward in this paper, and the corresponding parameter estimation technique is applied to determine its parameters. Applying the NNKE to several engineering turbulent flows shows that it is more accurate and versatile than the SKE. The success of the NNKE thus implies that parameter estimation techniques have a bright prospect in engineering turbulence model research.
On parameter estimation in deformable models
DEFF Research Database (Denmark)
Fisker, Rune; Carstensen, Jens Michael
1998-01-01
Deformable templates have been intensively studied in image analysis through the last decade, but despite its significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian approach. The estimation method is based on a modified version of the EM algorithm. Experimental results for a deformable template used for textile inspection are presented...
Parameter Estimation, Model Reduction and Quantum Filtering
Chase, Bradley A
2009-01-01
This dissertation explores the topics of parameter estimation and model reduction in the context of quantum filtering. Chapters 2 and 3 provide a review of classical and quantum probability theory, stochastic calculus and filtering. Chapter 4 studies the problem of quantum parameter estimation and introduces the quantum particle filter as a practical computational method for parameter estimation via continuous measurement. Chapter 5 applies these techniques in magnetometry and studies the estimator's uncertainty scalings in a double-pass atomic magnetometer. Chapter 6 presents an efficient feedback controller for continuous-time quantum error correction. Chapter 7 presents an exact model of symmetric processes of collective qubit systems.
Parameter Estimation of Partial Differential Equation Models
Xun, Xiaolei
2013-09-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
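The brute-force baseline the authors improve upon, repeatedly solving the PDE for each candidate parameter value, can be sketched for a one-dimensional heat equation. The explicit solver, grid sizes, noise level, and grid search below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def solve_heat(D, nx=50, nt=200, L=1.0, T=0.1):
    """Explicit finite-difference solution of u_t = D * u_xx on [0, L]."""
    dx, dt = L / (nx - 1), T / nt
    assert D * dt / dx ** 2 < 0.5          # stability of the explicit scheme
    x = np.linspace(0.0, L, nx)
    u = np.sin(np.pi * x)                  # initial profile, zero at the boundaries
    for _ in range(nt):
        u[1:-1] += D * dt / dx ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

rng = np.random.default_rng(0)
true_D = 0.1
data = solve_heat(true_D) + rng.normal(0.0, 1e-3, 50)   # noisy "measurements"

# Brute-force estimation: re-solve the PDE for every candidate value of D
candidates = np.linspace(0.05, 0.2, 151)
sse = [np.sum((solve_heat(D) - data) ** 2) for D in candidates]
D_hat = candidates[int(np.argmin(sse))]
```

Each candidate requires a full PDE solve, which is exactly the computational load the parameter cascading and Bayesian basis-expansion methods are designed to avoid.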
PARAMETER ESTIMATION IN BREAD BAKING MODEL
Hadiyanto Hadiyanto; AJB van Boxtel
2012-01-01
Bread product quality is highly dependent on the baking process. A model for the development of product quality, obtained from quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C, alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first the heat and mass transfer parameters, then the parameters related to product transformations, and finally the product quality parameters...
Parameter Estimation for Thurstone Choice Models
Energy Technology Data Exchange (ETDEWEB)
Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-04-24
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem in which each individual parameter takes one of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of the comparison sets. In particular, we find that for unbiased input comparison sets, i.e. when in expectation each comparison set of a given cardinality occurs the same number of times, the mean squared error for a broad class of Thurstone choice models decreases with the cardinality of the comparison sets, but only marginally, following a diminishing-returns relation. On the other hand, there exist Thurstone choice models for which the mean squared error of the maximum likelihood estimator decreases much faster with the cardinality of the comparison sets. We report an empirical evaluation of some claims and key parameters revealed by the theory, using both synthetic and real-world input data from popular sport competitions and online labor platforms.
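For the Bradley-Terry special case mentioned above, the maximum likelihood strengths can be computed with Zermelo's classical fixed-point iteration. The win counts below are made up for illustration, and this is a generic sketch of the estimator for pair comparisons, not the paper's analysis:

```python
import numpy as np

# Hypothetical win counts: wins[i, j] = times item i beat item j in pair comparisons
wins = np.array([[0., 7., 8.],
                 [3., 0., 6.],
                 [2., 4., 0.]])

def bradley_terry_mle(wins, iters=500):
    """Zermelo's fixed-point (MM) iteration for Bradley-Terry strength MLE."""
    n = wins.shape[0]
    games = wins + wins.T                  # total comparisons per pair
    w = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            mask = np.arange(n) != i
            # w_i <- (total wins of i) / sum_j n_ij / (w_i + w_j)
            w[i] = wins[i].sum() / np.sum(games[i, mask] / (w[i] + w[mask]))
        w /= w.sum()                       # fix the scale: strengths sum to 1
    return w

w = bradley_terry_mle(wins)
```

Convergence requires a connected comparison graph in which every item wins and loses at least once, which the toy data satisfy.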
Robust estimation of hydrological model parameters
Directory of Open Access Journals (Sweden)
A. Bárdossy
2008-11-01
The estimation of hydrological model parameters is a challenging task. With increasing computational power several complex optimization algorithms have emerged, but none of them yields a unique and definitively best parameter vector. The parameters of fitted hydrological models depend on the input data, whose quality cannot be assured, as there may be measurement errors in both input and state variables. In this study a methodology was developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on the parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. The model was calibrated with this modified data and the effect of the measurement errors on the parameters was analysed. The measurement errors were found to have a significant effect on the best-performing parameter vector: the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of a set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used in this study) for each parameter vector. Based on the depth of the parameter vectors, a set of robust parameter vectors can be found. The results show that the parameters chosen according to this criterion have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany, using the conceptual HBV model.
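The half-space depth idea can be illustrated with a small numpy sketch: deep points sit centrally in a cloud of parameter vectors, outliers have depth near zero. The random-direction approximation, cloud size, and test points are illustrative choices, not the paper's setup:

```python
import numpy as np

def tukey_depth(point, cloud, n_dirs=1000, seed=1):
    """Approximate Tukey half-space depth via random projection directions."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, cloud.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # For each direction, the fraction of the cloud on one side of the point;
    # the depth is the smallest such fraction over all directions.
    frac = ((cloud - point) @ dirs.T >= 0).mean(axis=0)
    return frac.min()

rng = np.random.default_rng(0)
cloud = rng.normal(size=(300, 2))                    # stand-in parameter vectors

deep = tukey_depth(np.zeros(2), cloud)               # central point: depth near 0.5
shallow = tukey_depth(np.array([4.0, 4.0]), cloud)   # outlier: depth near 0
```

The exact depth minimizes over all directions; random sampling is a common cheap approximation in low dimensions.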
PARAMETER ESTIMATION IN BREAD BAKING MODEL
Directory of Open Access Journals (Sweden)
Hadiyanto Hadiyanto
2012-05-01
Bread product quality is highly dependent on the baking process. A model for the development of product quality, obtained from quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C, alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first the heat and mass transfer parameters, then the parameters related to product transformations, and finally the product quality parameters. There was fair agreement between the calibrated model results and the experimental data. The results showed that the simple qualitative relationships applied for quality performed above expectation. Furthermore, it was confirmed that the microwave input matters most for the internal product properties and not for surface properties such as crispness and colour. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted the behaviour under dynamic convective operation and under combined convective and microwave operation well. The fit between model and baking system is expected to improve further with calibration experiments at higher temperatures and various microwave power levels. Abstract (translated from Indonesian): PARAMETER ESTIMATION IN A MODEL FOR THE BREAD BAKING PROCESS. Bread product quality depends strongly on the baking process used. A model developed with qualitative and quantitative methods was calibrated with experiments at a temperature of 200°C and in combination with microwave at 100 W. The model parameters were estimated in a stepwise procedure: first, the parameters of the heat and mass transfer model, then the parameters of the transformation model, and...
DEFF Research Database (Denmark)
2011-01-01
In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
Parameter estimation, model reduction and quantum filtering
Chase, Bradley A.
This thesis explores the topics of parameter estimation and model reduction in the context of quantum filtering. The latter is a mathematically rigorous formulation of continuous quantum measurement, in which a stream of auxiliary quantum systems is used to infer the state of a target quantum system. Fundamental quantum uncertainties appear as noise which corrupts the probe observations and must therefore be filtered in order to extract information about the target system. This is analogous to the classical filtering problem, in which techniques of inference are used to process noisy observations of a system in order to estimate its state. Given the clear similarities between the two filtering problems, I devote the beginning of this thesis to a review of classical and quantum probability theory, stochastic calculus and filtering. This allows for a mathematically rigorous and technically adroit presentation of the quantum filtering problem and solution. Given this foundation, I next consider the related problem of quantum parameter estimation, in which one seeks to infer the strength of a parameter that drives the evolution of a probe quantum system. By embedding this problem in the state estimation problem solved by the quantum filter, I present the optimal Bayesian estimator for a parameter given continuous measurements of the probe system to which it couples. For cases when the parameter takes on a finite number of values, I review a set of sufficient conditions for asymptotic convergence of the estimator. For a continuous-valued parameter, I present a computational method called quantum particle filtering for practical estimation of the parameter. Using these methods, I then study the particular problem of atomic magnetometry and review an experimental method for potentially reducing the uncertainty in the estimate of the magnetic field beyond the standard quantum limit. The technique involves double-passing a probe laser field through the atomic system, giving...
Estimation of Model Parameters for Steerable Needles
Park, Wooram; Reed, Kyle B.; Okamura, Allison M.; Chirikjian, Gregory S.
2010-01-01
Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip, and the insertion is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from the experimental data. We develop closed-form equations that describe how the covariance varies with the model parameters, and we estimate the model parameters by matching the closed-form covariance to the experimentally obtained covariance. In this work we use a needle model modified from a previously developed model with two noise parameters; the modified model uses three noise parameters to better capture the stochastic behavior of needle insertion, and reduces the covariance error from 26.1% to 6.55%. PMID:21643451
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian
2011-01-01
In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set...
Parameter and Uncertainty Estimation in Groundwater Modelling
DEFF Research Database (Denmark)
Jensen, Jacob Birk
The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions, and if these are to be made on solid grounds, the uncertainty attached to model results must be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models. Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study. The following two chapters concern calibration and uncertainty estimation. Essential issues relating to calibration are discussed. The classical regression methods are described; however, the main focus is on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The next two chapters describe case studies in which the GLUE methodology...
Bayesian approach to decompression sickness model parameter estimation.
Howle, L E; Weber, P W; Nichols, J M
2017-03-01
We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
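The contrast between a maximum likelihood point estimate and a Bayesian posterior with a credible interval can be shown on a deliberately simple stand-in model: a single binomial event probability with a flat prior. The counts are hypothetical, and this toy is far simpler than a probabilistic decompression sickness model:

```python
import numpy as np

# Hypothetical dive data: 8 DCS cases observed in 40 exposures to one profile
k, n = 8, 40

# Maximum likelihood: the parameter is a fixed value, summarized by one number
p_mle = k / n

# Bayesian: the parameter is a random variable; compute its posterior on a grid
p = np.linspace(1e-6, 1.0 - 1e-6, 10001)
log_post = k * np.log(p) + (n - k) * np.log(1.0 - p)   # flat prior: posterior ~ likelihood
post = np.exp(log_post - log_post.max())
post /= post.sum()

# 95% credible interval from the posterior CDF: "the parameter lies in [lo, hi]
# with probability 0.95", a statement MLE alone cannot make
cdf = np.cumsum(post)
lo, hi = p[np.searchsorted(cdf, 0.025)], p[np.searchsorted(cdf, 0.975)]
```

For multi-peaked likelihoods the grid (or MCMC) posterior remains interpretable, whereas a single MLE point can be misleading, which is the paper's central observation.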
Parameter estimation and error analysis in environmental modeling and computation
Kalmaz, E. E.
1986-01-01
A method for parameter estimation and error analysis in the development of nonlinear models for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with the observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-squares parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for the association of error with experimentally observed data.
Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model
DEFF Research Database (Denmark)
Åberg, Andreas; Widd, Anders; Abildskov, Jens;
2016-01-01
A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction (SCR) catalyst is the estimation of the kinetic parameters, which can be time-consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests, or p...
Shape parameter estimate for a glottal model without time position
Degottex, Gilles; Roebel, Axel; Rodet, Xavier
2009-01-01
From a recorded speech signal, we propose to estimate a shape parameter of a glottal model without estimating its time position. Indeed, the literature usually proposes to estimate the time position first (e.g. by detecting Glottal Closure Instants). The vocal-tract filter estimate is expressed as a minimum-phase envelope estimation after removing the glottal model and a standard lip-radiation model. Since this filter is mainly b...
Weibull Parameters Estimation Based on Physics of Failure Model
DEFF Research Database (Denmark)
Kostandyan, Erik; Sørensen, John Dalsgaard
2012-01-01
Reliability estimation procedures are discussed for the example of fatigue development in solder joints, using a physics-of-failure model. The accumulated damage is estimated based on the physics-of-failure model, the Rainflow counting algorithm and Miner's rule. A threshold model is used ... distribution. Methods from structural reliability analysis are used to model the uncertainties and to assess the reliability for fatigue failure. Maximum Likelihood and Least Squares estimation techniques are used to estimate the fatigue life distribution parameters.
Parameter estimation of hydrologic models using data assimilation
Kaheil, Y. H.
2005-12-01
The uncertainties associated with the modeling of hydrologic systems sometimes demand that data be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy for estimating parameters of hydrologic models in an iterative mode. It describes a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region while avoiding convergence to a single best parameter set. The LoBaRE methodology is tested on parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined; the SVM model has three. Bayesian inference is used to estimate the best parameter set iteratively, by narrowing the sampling space through uncertainty bounds imposed on the posterior best parameter set and/or by updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimal training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.
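The flavour of recursive Bayesian estimation, repeatedly updating a posterior and narrowing the credible region as data batches arrive, can be sketched on a toy scalar model. The grid, noise level, and batch sizes are illustrative assumptions, not the LoBaRE algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(11)
theta_true, sigma = 0.7, 0.2
grid = np.linspace(0.0, 2.0, 2001)
prior = np.full(grid.size, 1.0 / grid.size)      # flat prior over the grid

widths = []
for _ in range(5):                               # five batches of calibration data
    y = theta_true + rng.normal(0.0, sigma, 10)  # noisy observations y = theta + e
    loglik = (-0.5 * ((y[:, None] - grid[None, :]) / sigma) ** 2).sum(axis=0)
    post = prior * np.exp(loglik - loglik.max())
    post /= post.sum()
    cdf = np.cumsum(post)
    lo, hi = grid[np.searchsorted(cdf, 0.025)], grid[np.searchsorted(cdf, 0.975)]
    widths.append(hi - lo)                       # credible region narrows each pass
    prior = post                                 # current posterior becomes next prior
theta_hat = grid[int(np.argmax(post))]
```

LoBaRE's refinement is to localize this narrowing around a parameter region rather than a single point; the sketch only shows the recursive-update mechanism.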
Estimating parameters for generalized mass action models with connectivity information
Directory of Open Access Journals (Sweden)
Voit Eberhard O
2009-05-01
Background: Determining the parameters of a mathematical model from quantitative measurements is the main bottleneck of modelling biological systems. Parameter values can be estimated from steady-state data or from dynamic data, and the nature of suitable data for these two types of estimation is rather different. For instance, estimates of parameter values in pathway models, such as kinetic orders, rate constants, flux control coefficients or elasticities, from steady-state data are generally based on experiments that measure how a biochemical system responds to small perturbations around the steady state. In contrast, parameter estimation from dynamic data requires time-series measurements for all dependent variables. Almost no literature has so far discussed the combined use of both steady-state and transient data for estimating parameter values of biochemical systems. Results: In this study we introduce a constrained optimization method for estimating parameter values of biochemical pathway models using steady-state information and transient measurements. The constraints are derived from the flux connectivity relationships of the system at the steady state. Two case studies demonstrate the estimation results with and without flux connectivity constraints. The unconstrained optimal estimates from dynamic data may fit the experiments well, but they do not necessarily maintain the connectivity relationships. As a consequence, individual fluxes may be misrepresented, which may cause problems in later extrapolations. By contrast, the constrained estimation accounting for flux connectivity information reduces this misrepresentation and thereby yields improved model parameters. Conclusion: The method combines transient metabolic profiles and steady-state information and leads to the formulation of the inverse parameter estimation task as a constrained optimization problem. Parameter estimation and model selection are simultaneously carried out...
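The effect of a steady-state connectivity constraint can be illustrated in miniature: given unconstrained flux estimates at a hypothetical branch point, projecting them onto the constraint subspace yields the closest estimates that exactly satisfy the flux balance. The numbers and the single linear constraint are invented for illustration, not taken from the case studies:

```python
import numpy as np

# Unconstrained flux estimates at a hypothetical branch point v1 -> v2 + v3
m = np.array([10.3, 6.1, 3.2])          # note: 10.3 != 6.1 + 3.2, balance is violated

# Steady-state connectivity constraint v1 - v2 - v3 = 0, written as a . v = 0
a = np.array([1.0, -1.0, -1.0])

# Constrained least squares: orthogonal projection of m onto the subspace a . v = 0,
# i.e. the smallest correction that restores the flux balance
v = m - a * (a @ m) / (a @ a)
```

Real pathway models have many fluxes and several connectivity relations, so the projection becomes a constrained optimization, but the principle is the same.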
A simulation of water pollution model parameter estimation
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as a representative simple transport process. Pollution concentration levels are obtained by modeling a remote-sensing system: the remote-sensed data are simulated by adding Gaussian noise to the concentration values generated by the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. The required resolution, sensor array size, and number and location of sensor readings can then be determined from the accuracies of the parameter estimates.
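A minimal version of this simulation loop, generating concentrations from a transport model, adding Gaussian noise to mimic remote sensing, then recovering the parameter by least squares, might look as follows. The one-dimensional pure-diffusion profile and all numerical values are simplifying assumptions, not the paper's shear-diffusion model:

```python
import numpy as np

def concentration(x, D, M=1.0, t=1.0):
    """1-D instantaneous-release diffusion profile at time t (mass M at x = 0)."""
    return M / np.sqrt(4.0 * np.pi * D * t) * np.exp(-x ** 2 / (4.0 * D * t))

rng = np.random.default_rng(42)
x = np.linspace(-5.0, 5.0, 81)
# Simulated remote-sensing data: model output plus Gaussian sensor noise
data = concentration(x, D=0.8) + rng.normal(0.0, 0.005, x.size)

# Least-squares estimation of the diffusion coefficient by grid search
Ds = np.linspace(0.1, 2.0, 191)
sse = [np.sum((concentration(x, D) - data) ** 2) for D in Ds]
D_hat = Ds[int(np.argmin(sse))]
```

Repeating the experiment over sensor counts and noise levels gives the accuracy-versus-design trade-off the abstract describes.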
Estimation of shape model parameters for 3D surfaces
DEFF Research Database (Denmark)
Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen;
2008-01-01
Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D surfaces...
Parameter estimation of hidden periodic model in random fields
Institute of Scientific and Technical Information of China (English)
何书元
1999-01-01
The two-dimensional hidden periodic model is an important model in random fields, used in two-dimensional signal processing, prediction and spectral analysis. A method for estimating the parameters of the model is designed, and the strong consistency of the estimators is proved.
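A one-dimensional analogue of hidden-periodicity estimation is recovering a sinusoid's frequency from noisy samples via the periodogram; the 2-D random-field case treated in the paper is more involved, and the signal below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 256
t = np.arange(n)
f_true = 0.125                                   # hidden frequency, cycles per sample
x = 2.0 * np.cos(2.0 * np.pi * f_true * t + 0.4) + rng.normal(0.0, 1.0, n)

# Periodogram: the hidden periodic component shows up as the dominant peak
spec = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(n)
f_hat = freqs[int(np.argmax(spec[1:])) + 1]      # skip the DC bin
```

In two dimensions the same idea applies with a 2-D FFT and a pair of frequencies per hidden component.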
Modeling and Parameter Estimation of a Small Wind Generation System
Directory of Open Access Journals (Sweden)
Carlos A. Ramírez Gómez
2013-11-01
The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent-magnet synchronous generator, a three-phase rectifier, and a direct-current load. To estimate the parameters, wind speed data were recorded at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed in the PSIM software, and variables were recorded from that simulation to estimate the parameters. The wind generation system model together with the estimated parameters represents the detailed model well, while offering higher flexibility than the model programmed in PSIM.
NEW DOCTORAL DEGREE Parameter estimation problem in the Weibull model
Marković, Darija
2009-01-01
In this dissertation we consider the problem of the existence of best parameters in the Weibull model, one of the most widely used statistical models in reliability theory and life data analysis. Particular attention is given to the 3-parameter Weibull model. We list some of the many applications of this model and describe some of the classical methods for estimating its parameters: two graphical methods (the Weibull probability plot and the hazard plot) and two analyt...
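The Weibull probability plot mentioned above turns parameter estimation into a straight-line fit, since for the 2-parameter model ln(-ln(1-F)) = k ln t - k ln λ. The simulated failure times and the median-rank plotting positions below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)
k_true, lam_true = 1.8, 100.0
t = lam_true * rng.weibull(k_true, size=500)    # simulated failure times

# Weibull probability plot: ln(-ln(1 - F)) is linear in ln(t)
t_sorted = np.sort(t)
n = t.size
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)     # median-rank plotting positions
y = np.log(-np.log(1.0 - F))
X = np.log(t_sorted)

slope, intercept = np.polyfit(X, y, 1)
k_hat = slope                                   # shape parameter
lam_hat = np.exp(-intercept / slope)            # scale parameter
```

The 3-parameter model studied in the dissertation adds a location (threshold) parameter, which makes the plot curved and the estimation problem considerably harder.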
Baker Syed; Poskar C; Junker Björn
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of sucrose accumulation in sugar cane culm tissue developed by Rohwer et al. was taken as a test-case model. Wh...
Parameter Estimation and Experimental Design in Groundwater Modeling
Institute of Scientific and Technical Information of China (English)
SUN Ne-zheng
2004-01-01
This paper reviews the latest developments in parameter estimation and experimental design in the field of groundwater modeling. Special consideration is given to cases where the structure of the identified parameter is complex and unknown. A new methodology for constructing useful groundwater models is described, based on the quantitative relationships among the complexity of the model structure, the identifiability of the parameters, the sufficiency of the data, and the reliability of the model application.
PARAMETER ESTIMATION IN LINEAR REGRESSION MODELS FOR LONGITUDINAL CONTAMINATED DATA
Institute of Scientific and Technical Information of China (English)
QianWeimin; LiYumei
2005-01-01
The parameter estimation and the coefficient of contamination are studied for regression models with repeated measures whose response variables are contaminated by another random variable sequence. Under suitable conditions it is proved that the estimators established in the paper are strongly consistent.
Simultaneous estimation of parameters in the bivariate Emax model.
Magnusdottir, Bergrun T; Nyquist, Hans
2015-12-10
In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation.
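A single-response Emax fit, the building block of the bivariate model, can be sketched by profiling ED50 and solving for the linear parameters E0 and Emax at each candidate value. The dose-response data below are simulated, not the diabetes data from the paper:

```python
import numpy as np

def emax_curve(dose, e0, emax, ed50):
    """Emax dose-response: E(d) = E0 + Emax * d / (ED50 + d)."""
    return e0 + emax * dose / (ed50 + dose)

rng = np.random.default_rng(3)
dose = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
resp = emax_curve(dose, 2.0, 15.0, 30.0) + rng.normal(0.0, 0.1, dose.size)

# Profile ED50: for each candidate, E0 and Emax enter the model linearly,
# so they are obtained by ordinary least squares
best_sse, best_params = np.inf, None
for ed50 in np.linspace(1.0, 150.0, 300):
    A = np.column_stack([np.ones_like(dose), dose / (ed50 + dose)])
    coef, *_ = np.linalg.lstsq(A, resp, rcond=None)
    sse = float(np.sum((A @ coef - resp) ** 2))
    if sse < best_sse:
        best_sse, best_params = sse, (coef[0], coef[1], ed50)
e0_hat, emax_hat, ed50_hat = best_params
```

System estimation of the bivariate model additionally fits the (co)variance parameters linking the two responses, which is where the paper reports precision gains over equation-by-equation fitting.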
Parameter Estimation for the Thurstone Case III Model.
Mackay, David B.; Chaiy, Seoil
1982-01-01
The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)
Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model
Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami
2017-06-01
A regression model represents the relationship between independent and dependent variables. When the dependent variable is categorical, a logistic regression model is used; when its categories are ordered, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation sites. Parameter estimation is needed to infer population values from a sample. The purpose of this research is to estimate the parameters of a GWOLR model using the R software. The estimation uses data on the number of dengue fever patients in Semarang City, with 144 villages in the city as observation units. The research yields a local GWOLR model for each village, together with the probabilities of the dengue fever patient-count categories.
A new estimate of the parameters in linear mixed models
Institute of Scientific and Technical Information of China (English)
王松桂; 尹素菊
2002-01-01
In linear mixed models, there are two kinds of unknown parameters: one is the fixed effect, the other is the variance component. In this paper, new estimates of these parameters, called the spectral decomposition estimates, are proposed. Some important statistical properties of the new estimates are established, in particular the linearity of the estimates of the fixed effects, with many statistical optimalities. The new method is applied to two important models used in the economics, finance, and mechanical fields. All estimates obtained have good statistical and practical meaning.
Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics
Directory of Open Access Journals (Sweden)
Guanqun eZhang
2011-11-01
A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications.
Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics
Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna
2011-01-01
A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157
Estimation of the input parameters in the Feller neuronal model
Ditlevsen, Susanne; Lansky, Petr
2006-06-01
The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FPT) through a constant boundary in the suprathreshold regime are derived, which are used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FPT is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.
Bayesian estimation of parameters in a regional hydrological model
Directory of Open Access Journals (Sweden)
K. Engeland
2002-01-01
This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
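The posterior-by-MCMC machinery described above can be sketched with a minimal Metropolis sampler on a toy linear-reservoir model. This is far simpler than Ecomag: the reservoir model, the uniform prior, and all numbers below are assumptions for illustration only.

```python
# Sketch: Metropolis MCMC for the posterior of one model parameter
# given streamflow-like observations. Toy linear reservoir; all
# values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def simulate(k, rain):
    # Linear reservoir: storage update s <- s + rain - k*s, discharge q = k*s.
    s, q = 0.0, []
    for r in rain:
        s = s + r - k * s
        q.append(k * s)
    return np.array(q)

rain = rng.exponential(2.0, size=200)
k_true, sigma = 0.3, 0.05
obs = simulate(k_true, rain) + rng.normal(0, sigma, 200)

def log_post(k):
    if not (0.0 < k < 1.0):              # uniform prior on (0, 1)
        return -np.inf
    resid = obs - simulate(k, rain)
    return -0.5 * np.sum(resid**2) / sigma**2

chain, k = [], 0.5
lp = log_post(k)
for _ in range(5000):
    prop = k + rng.normal(0, 0.02)       # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        k, lp = prop, lp_prop
    chain.append(k)

post = np.array(chain[1000:])            # discard burn-in
```

The retained samples approximate the posterior of the recession parameter; in the study above the same idea is applied jointly to hydrological and statistical error-model parameters.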
Dynamic Load Model using PSO-Based Parameter Estimation
Taoka, Hisao; Matsuki, Junya; Tomoda, Michiya; Hayashi, Yasuhiro; Yamagishi, Yoshio; Kanao, Norikazu
This paper presents a new method for estimating the unknown parameters of a dynamic load model represented as a parallel composite of a constant impedance load and an induction motor behind a series constant reactance. An adequate dynamic load model is essential for evaluating power system stability, and this model can represent the behavior of an actual load when appropriate parameters are used. Its drawback, however, is that it requires many parameters, which are not easy to estimate. We propose an estimation method based on Particle Swarm Optimization (PSO), a nonlinear optimization method, using voltage, active power and reactive power data measured during voltage sags.
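A minimal sketch of the PSO idea applied to least-squares parameter estimation follows. The two-parameter exponential test model and all swarm settings are assumptions for illustration, not the load-model setup of the paper.

```python
# Sketch of Particle Swarm Optimization minimizing a least-squares
# objective for a two-parameter model y = a*exp(-b*t). Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 5, 50)
y_obs = 2.0 * np.exp(-0.7 * t)            # noise-free target: a=2.0, b=0.7

def cost(p):
    a, b = p
    return np.sum((y_obs - a * np.exp(-b * t))**2)

n, w, c1, c2 = 30, 0.7, 1.5, 1.5          # swarm size, inertia, accelerations
pos = rng.uniform(0, 5, size=(n, 2))
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_f = np.array([cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(300):
    r1, r2 = rng.uniform(size=(2, n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([cost(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
```

Because PSO needs only objective evaluations, the same loop applies unchanged when `cost` compares measured voltage-sag responses against simulated ones, as in the paper.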
Parameter estimation in stochastic rainfall-runoff models
DEFF Research Database (Denmark)
Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur
2006-01-01
the parameters, including the noise terms. The parameter estimation method is a maximum likelihood method (ML) where the likelihood function is evaluated using a Kalman filter technique. The ML method estimates the parameters in a prediction error setting, i.e. the sum of squared prediction errors is minimized. For a comparison, the parameters are also estimated by an output error method, where the sum of squared simulation errors is minimized. The former methodology is optimal for short-term prediction whereas the latter is optimal for simulations. Hence, depending on the purpose, it is possible to select whether the parameter values are optimal for simulation or prediction. The data originate from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series...
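The "likelihood evaluated using a Kalman filter" step can be sketched for a scalar linear state-space model, much simpler than the rainfall-runoff model above: the filter's one-step prediction errors and their variances define the likelihood that is then maximized. All model values below are invented for illustration.

```python
# Sketch: maximum likelihood via the Kalman-filter prediction-error
# decomposition for a scalar state-space model. Illustrative values only.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
a_true, q, r = 0.8, 0.1, 0.2              # transition, process/obs noise vars
x, ys = 0.0, []
for _ in range(500):
    x = a_true * x + rng.normal(0, np.sqrt(q))
    ys.append(x + rng.normal(0, np.sqrt(r)))
ys = np.array(ys)

def neg_loglik(a):
    xhat, p, nll = 0.0, 1.0, 0.0
    for y in ys:
        xhat, p = a * xhat, a * a * p + q         # predict
        s = p + r                                 # innovation variance
        nll += 0.5 * (np.log(2 * np.pi * s) + (y - xhat)**2 / s)
        k = p / s                                 # update
        xhat, p = xhat + k * (y - xhat), (1 - k) * p
    return nll

res = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded")
a_hat = res.x
```

Minimizing the filter's prediction-error likelihood is the ML route described in the abstract; replacing the one-step predictions with a pure simulation of the model would give the output-error alternative.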
Parameter Estimation for a Computable General Equilibrium Model
DEFF Research Database (Denmark)
Arndt, Channing; Robinson, Sherman; Tarp, Finn
. Second, it permits incorporation of prior information on parameter values. Third, it can be applied in the absence of copious data. Finally, it supplies measures of the capacity of the model to reproduce the historical record and the statistical significance of parameter estimates. The method is applied...
Parameter Estimation for a Computable General Equilibrium Model
DEFF Research Database (Denmark)
Arndt, Channing; Robinson, Sherman; Tarp, Finn
2002-01-01
We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...
Parameter estimation and investigation of a bolted joint model
Shiryayev, O. V.; Page, S. M.; Pettit, C. L.; Slater, J. C.
2007-11-01
Mechanical joints are a primary source of variability in the dynamics of built-up structures. Physical phenomena in the joint are quite complex and therefore impractical to model at the micro-scale. This motivates the development of lumped parameter joint models with discrete interfaces so that they can be easily implemented in finite element codes. Among the most important considerations in choosing a model for dynamically excited systems is its ability to model energy dissipation. This translates into the need for accurate and reliable methods to measure model parameters and estimate their inherent variability from experiments. The adjusted Iwan model was identified as a promising candidate for representing joint dynamics. Recent research focused on this model has exclusively employed impulse excitation in conjunction with neural networks to identify the model parameters. This paper presents an investigation of an alternative parameter estimation approach for the adjusted Iwan model, which employs data from oscillatory forcing. This approach is shown to produce parameter estimates with precision similar to the impulse excitation method for a range of model parameters.
Parameter Estimates in Differential Equation Models for Population Growth
Winkel, Brian J.
2011-01-01
We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
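A short sketch of the logistic-growth estimation task follows, using scipy's least-squares fitter rather than the hand-rolled gradient search, and synthetic rather than the historical 1930s data; the parameter values are invented.

```python
# Sketch: estimating logistic growth parameters (carrying capacity K,
# initial population P0, rate r) from synthetic population data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, P0, r):
    # Solution of dP/dt = r*P*(1 - P/K) with P(0) = P0.
    return K / (1 + (K / P0 - 1) * np.exp(-r * t))

rng = np.random.default_rng(4)
t = np.linspace(0, 24, 25)
p_obs = logistic(t, 660.0, 10.0, 0.55) * (1 + rng.normal(0, 0.02, t.size))

(K, P0, r), _ = curve_fit(logistic, t, p_obs, p0=[500.0, 5.0, 0.3])
```

The same pattern extends to the two-species competition models mentioned above by fitting the numerical solution of the coupled ODE system instead of a closed-form curve.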
Bayesian parameter estimation for nonlinear modelling of biological pathways
Directory of Open Access Journals (Sweden)
Ghasemi Omid
2011-12-01
Background: The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of their high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. Results: We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal to noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly
Towards predictive food process models: A protocol for parameter estimation.
Vilas, Carlos; Arias-Méndez, Ana; Garcia, Miriam R; Alonso, Antonio A; Balsa-Canto, E
2016-05-31
Mathematical models, in particular, physics-based models, are essential tools for food product and process design, optimization and control. The success of mathematical models relies on their predictive capabilities. However, describing physical, chemical and biological changes in food processing requires the values of some, typically unknown, parameters. Therefore, parameter estimation from experimental data is critical to achieving desired model predictive properties. This work takes a new look at the parameter estimation (or identification) problem in food process modeling. First, we examine common pitfalls such as lack of identifiability and multimodality. Second, we present the theoretical background of a parameter identification protocol intended to deal with those challenges. Finally, we illustrate the performance of the proposed protocol with an example related to the thermal processing of packaged foods.
Parameter estimation and model selection in computational biology.
Directory of Open Access Journals (Sweden)
Gabriele Lillacci
2010-03-01
A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Secondly, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it should not be accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
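The core extended-Kalman-filter trick for parameter estimation, appending the unknown parameter to the state vector and filtering both jointly, can be sketched on a toy first-order decay model (not the heat shock or gene regulation models of the paper; all values below are invented).

```python
# Sketch: EKF with an augmented state [x, k] to estimate the unknown
# rate constant k of dx/dt = -k*x from noisy observations of x.
import numpy as np

rng = np.random.default_rng(5)
dt, k_true = 0.05, 0.5
x, obs = 5.0, []
for _ in range(400):
    x = x - k_true * x * dt                       # Euler-discretized truth
    obs.append(x + rng.normal(0, 0.05))

z = np.array([5.0, 0.1])                          # augmented state, poor k guess
P = np.diag([1.0, 1.0])
Q = np.diag([1e-6, 1e-6])                         # small process noise
R = 0.05**2
H = np.array([[1.0, 0.0]])                        # only x is observed

for y in obs:
    xk, kk = z
    z = np.array([xk - kk * xk * dt, kk])         # predict (Euler step)
    F = np.array([[1 - kk * dt, -xk * dt],        # Jacobian of the step
                  [0.0, 1.0]])
    P = F @ P @ F.T + Q
    s = H @ P @ H.T + R
    K = P @ H.T / s
    z = z + (K * (y - z[0])).ravel()              # measurement update
    P = (np.eye(2) - K @ H) @ P

k_hat = z[1]
```

As in the paper's pipeline, such a filter-based estimate would serve as a first guess, to be checked for identifiability and refined by optimization.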
Parameter Estimation of the Extended Vasiček Model
Rujivan, Sanae
2010-01-01
In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function of a discretely sampled data of the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the appr...
Adaptive Unified Biased Estimators of Parameters in Linear Model
Institute of Scientific and Technical Information of China (English)
Hu Yang; Li-xing Zhu
2004-01-01
To tackle multicollinearity or ill-conditioned design matrices in linear models, adaptive biased estimators such as the time-honored Stein estimator, the ridge and the principal component estimators have been studied intensively. To study when a biased estimator uniformly outperforms the least squares estimator, some sufficient conditions are proposed in the literature. In this paper, we propose a unified framework to formulate a class of adaptive biased estimators. This class includes all existing biased estimators and some new ones. A sufficient condition for outperforming the least squares estimator is proposed. In terms of selecting parameters in the condition, we can obtain all double-type conditions in the literature.
Parameter Estimation for Single Diode Models of Photovoltaic Modules
Energy Technology Data Exchange (ETDEWEB)
Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.
2015-03-01
Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only the short circuit, open circuit and maximum power points of a single I-V curve at standard test conditions, together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
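The "recover known parameters from simulated I-V curves" check can be sketched for a single cell: generate an I-V curve from the implicit single diode equation, then fit a subset of the parameters back by least squares. Only the ideality factor and saturation current are freed here, and every component value is an assumption, not the paper's method or data.

```python
# Sketch: single diode equation I = IL - I0*(exp((V+I*Rs)/(n*Vt)) - 1)
# - (V+I*Rs)/Rsh, solved implicitly per voltage, then parameter recovery
# from a simulated curve. All component values are illustrative.
import numpy as np
from scipy.optimize import brentq, minimize

q_kT = 1 / 0.02585                     # 1/(thermal voltage) near 300 K
IL, Rs, Rsh = 5.0, 0.01, 50.0          # photocurrent, series/shunt resistance

def current(v, I0, n):
    # f is strictly decreasing in i, so the bracket holds a unique root.
    f = lambda i: (IL - I0 * (np.exp((v + i * Rs) * q_kT / n) - 1)
                   - (v + i * Rs) / Rsh - i)
    return brentq(f, -100.0, IL + 1.0)

v_grid = np.linspace(0, 0.7, 30)
i_true = np.array([current(v, 1e-9, 1.3) for v in v_grid])

def sse(theta):
    log_I0, n = theta
    i_fit = np.array([current(v, 10**log_I0, n) for v in v_grid])
    return np.sum((i_fit - i_true)**2)

res = minimize(sse, x0=[-8.0, 1.5], method="Nelder-Mead",
               options={"maxiter": 4000, "xatol": 1e-10, "fatol": 1e-12})
I0_hat, n_hat = 10**res.x[0], res.x[1]
```

Fitting `log10(I0)` rather than `I0` itself is a common reparameterization, since the saturation current spans many orders of magnitude and is strongly correlated with the ideality factor.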
Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms
Berhausen, Sebastian; Paszek, Stefan
2016-01-01
In recent years, system failures have occurred in many power systems all over the world, resulting in a lack of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require a current base of parameters for the models of generating units, including the models of synchronous generators. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameters were estimated by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used to minimize the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. Calculation results are given for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
A software for parameter estimation in dynamic models
Directory of Open Access Journals (Sweden)
M. Yuceer
2008-12-01
A common problem in dynamic systems is to determine parameters in an equation used to represent experimental data. The goal is to determine the values of model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software packages lack generality, while others are not easy to use. A user-interactive parameter estimation software was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software (PARES) has been developed in the MATLAB environment. When tested with extensive example problems from the literature, the suggested approach is proven to provide good agreement between predicted and observed data within relatively little computing time and few iterations.
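The integration-based optimization idea, numerically integrating the model inside the objective and fitting the kinetic parameters to data, can be sketched as follows. The toy A -> B -> C reaction network and all values are assumptions for illustration, not the PARES implementation.

```python
# Sketch: kinetic parameter estimation by embedding an ODE solver in a
# least-squares objective. Toy A -> B -> C network; illustrative values.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, k1, k2):
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

t_obs = np.linspace(0, 10, 20)
sol = solve_ivp(rhs, (0, 10), [1.0, 0.0, 0.0], args=(0.6, 0.25),
                t_eval=t_obs, rtol=1e-8)
rng = np.random.default_rng(6)
y_obs = sol.y + rng.normal(0, 0.005, sol.y.shape)   # noisy concentrations

def residuals(k):
    sim = solve_ivp(rhs, (0, 10), [1.0, 0.0, 0.0], args=tuple(k),
                    t_eval=t_obs, rtol=1e-8)
    return (sim.y - y_obs).ravel()

fit = least_squares(residuals, x0=[1.0, 1.0], bounds=(0, 5))
k1_hat, k2_hat = fit.x
```

Every objective evaluation re-integrates the ODE system, which is exactly why such problems benefit from the efficient, user-friendly tooling the abstract argues for.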
Parameter Estimation of Photovoltaic Models via Cuckoo Search
Directory of Open Access Journals (Sweden)
Jieming Ma
2013-01-01
Since conventional methods are incapable of estimating the parameters of Photovoltaic (PV) models with high accuracy, bioinspired algorithms have attracted significant attention in the last decade. Cuckoo Search (CS) is inspired by the brood parasitism of some cuckoo species, in combination with Lévy flight behavior. In this paper, a CS-based parameter estimation method is proposed to extract the parameters of single-diode models for commercial PV generators. Simulation results and experimental data show that the CS algorithm is capable of obtaining all the parameters with extremely high accuracy, indicated by a low Root-Mean-Squared Error (RMSE) value. The proposed method outperforms the other algorithms applied in this study.
A Bayesian framework for parameter estimation in dynamical models.
Directory of Open Access Journals (Sweden)
Flávio Codeço Coelho
Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
Estimation of the parameters of ETAS models by Simulated Annealing
Lombardi, Anna Maria
2015-01-01
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is...
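The Simulated Annealing loop itself, propose, accept downhill moves always and uphill moves with a temperature-dependent probability, then cool, can be sketched on a simple two-parameter Gaussian likelihood rather than a full ETAS model; all settings below are assumptions.

```python
# Sketch: Simulated Annealing for maximum likelihood estimation of a
# Gaussian's mean and scale (a stand-in for the harder ETAS likelihood).
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(3.0, 1.5, size=2000)

def nll(theta):
    mu, log_s = theta
    s = np.exp(log_s)                      # log-scale keeps s positive
    return data.size * np.log(s) + np.sum((data - mu)**2) / (2 * s**2)

theta = np.array([0.0, 0.0])
f = nll(theta)
best, best_f = theta.copy(), f
T = 1.0
for step in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)
    fp = nll(prop)
    # Metropolis acceptance: always downhill, uphill with prob exp(-dE/T).
    if fp < f or rng.uniform() < np.exp(-(fp - f) / T):
        theta, f = prop, fp
        if f < best_f:
            best, best_f = theta.copy(), f
    T = max(T * 0.9995, 1e-3)              # geometric cooling schedule

mu_hat, sigma_hat = best[0], np.exp(best[1])
```

The appeal for ETAS is the same as here: the method needs only likelihood evaluations and tolerates a rugged, multimodal objective, at the cost of many iterations.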
CADLIVE optimizer: web-based parameter estimation for dynamic models
Directory of Open Access Journals (Sweden)
Inoue Kentaro
2012-08-01
Computer simulation has been an important technique to capture the dynamics of biochemical networks. In most networks, however, few kinetic parameters have been measured in vivo because of experimental complexity. We develop a kinetic parameter estimation system, named the CADLIVE Optimizer, which comprises genetic algorithms-based solvers with a graphical user interface. This optimizer is integrated into the CADLIVE Dynamic Simulator to attain efficient simulation for dynamic models.
Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.
Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
Parameter Estimation of the Extended Vasiček Model
Directory of Open Access Journals (Sweden)
Sanae RUJIVAN
2010-01-01
Full Text Available In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function for discretely sampled data of the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the approximate log-likelihood function. The convergence of the AMLEs to the true maximum likelihood estimators is obtained by increasing the number of terms in the expansions with a small time step size.
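For the basic (non-extended) Vasiček model dr = κ(θ − r)dt + σ dW the transition density is Gaussian in closed form, so the exact log-likelihood can be maximized directly. This hedged sketch uses that simpler case to illustrate the maximum likelihood machinery; all parameter values are chosen for illustration only:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, r, dt):
    # Exact Gaussian transition density of the basic Vasicek process
    kappa, theta, sigma = params
    if kappa <= 0 or sigma <= 0:
        return np.inf
    e = np.exp(-kappa * dt)
    mean = theta + (r[:-1] - theta) * e
    var = sigma**2 * (1 - e**2) / (2 * kappa)
    resid = r[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

# Simulate a path with known parameters, then recover them by ML.
rng = np.random.default_rng(1)
kappa, theta, sigma, dt, n = 1.5, 0.05, 0.02, 1 / 252, 20000
e = np.exp(-kappa * dt)
sd = np.sqrt(sigma**2 * (1 - e**2) / (2 * kappa))
r = np.empty(n)
r[0] = theta
for i in range(n - 1):
    r[i + 1] = theta + (r[i] - theta) * e + sd * rng.normal()

fit = minimize(neg_log_lik, x0=[1.0, 0.0, 0.01], args=(r, dt), method="Nelder-Mead")
print(fit.x)   # estimates should be close to (1.5, 0.05, 0.02)
```

The mean-reversion speed κ is the hardest parameter to pin down from a finite sample, which is why long simulated paths are used here.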
Parameter Estimation in Stochastic Grey-Box Models
DEFF Research Database (Denmark)
Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay
2004-01-01
An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended...... Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool...
Estimation of growth parameters using a nonlinear mixed Gompertz model.
Wang, Z; Zuidhof, M J
2004-06-01
In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
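As a sketch of the fixed-effects part only (a true mixed model with bird-specific random effects requires dedicated nonlinear mixed-model software), a Gompertz curve W(t) = Wm·exp(−exp(−b(t − t_infl))) can be fitted by nonlinear least squares. The data and parameter values below are simulated assumptions, not the study's poultry data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, Wm, b, t_infl):
    # Wm: mature weight; b: rate of maturing; t_infl: age at inflection
    return Wm * np.exp(-np.exp(-b * (t - t_infl)))

rng = np.random.default_rng(2)
t = np.linspace(0, 60, 61)                       # age in days
y = gompertz(t, 4000.0, 0.08, 30.0) + rng.normal(0, 50, t.size)
popt, _ = curve_fit(gompertz, t, y, p0=[3000, 0.1, 25])
print(popt)                                      # near [4000, 0.08, 30]
```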
Comparison of Parameter Estimation Methods for Transformer Weibull Lifetime Modelling
Institute of Scientific and Technical Information of China (English)
ZHOU Dan; LI Chengrong; WANG Zhongdong
2013-01-01
Two-parameter Weibull distribution is the most widely adopted lifetime model for power transformers. An appropriate parameter estimation method is essential to guarantee the accuracy of a derived Weibull lifetime model. Six popular parameter estimation methods (i.e. the maximum likelihood estimation method, two median rank regression methods including the one regressing X on Y and the other one regressing Y on X, the Kaplan-Meier method, the method based on cumulative hazard plot, and Li's method) are reviewed and compared in order to find the optimal one that suits transformer's Weibull lifetime modelling. The comparison took several different scenarios into consideration: 10,000 sets of lifetime data, each of which had a sampling size of 40 to 1,000 and a censoring rate of 90%, were obtained by Monte-Carlo simulations for each scenario. Scale and shape parameters of Weibull distribution estimated by the six methods, as well as their mean value, median value and 90% confidence band, are obtained. The cross comparison of these results reveals that, among the six methods, the maximum likelihood method is the best one, since it could provide the most accurate Weibull parameters, i.e. parameters having the smallest bias in both mean and median values, as well as the shortest length of the 90% confidence band. The maximum likelihood method is therefore recommended to be used over the other methods in transformer Weibull lifetime modelling.
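A minimal illustration of the recommended method: maximum likelihood fitting of a two-parameter Weibull model on simulated, uncensored lifetimes. The paper's heavily censored scenarios need dedicated handling, and the shape and scale values here are assumptions:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(3)
shape_true, scale_true = 2.5, 30.0     # illustrative "true" transformer lifetimes
data = weibull_min.rvs(shape_true, scale=scale_true, size=500, random_state=rng)

# Fix the location parameter at 0 to fit the two-parameter form by ML
shape_hat, loc, scale_hat = weibull_min.fit(data, floc=0)
print(shape_hat, scale_hat)            # near 2.5 and 30.0
```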
Estimation of the parameters of ETAS models by Simulated Annealing
Lombardi, Anna Maria
2015-02-01
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.
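Simulated Annealing as a global optimizer can be sketched on a generic multimodal surface (not the ETAS likelihood itself), here using SciPy's dual annealing variant:

```python
import numpy as np
from scipy.optimize import dual_annealing

def multimodal(x):
    # Rastrigin function: global minimum 0 at the origin, many local minima
    return np.sum(x**2) + 10 * np.sum(1 - np.cos(2 * np.pi * x))

result = dual_annealing(multimodal, bounds=[(-5.12, 5.12)] * 2, seed=4)
print(result.x, result.fun)   # near [0, 0] with objective near 0
```

Gradient-based local optimizers started far from the origin would typically stall in one of the local minima, which is the failure mode the annealing schedule is designed to escape.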
J-A Hysteresis Model Parameters Estimation using GA
Directory of Open Access Journals (Sweden)
Bogomir Zidaric
2005-01-01
Full Text Available This paper presents the Jiles and Atherton (J-A) hysteresis model parameter estimation for soft magnetic composite (SMC) material. The calculation of Jiles and Atherton hysteresis model parameters is based on experimental data and genetic algorithms (GA). Genetic algorithms operate in a given area of possible solutions. Finding the best solution to a problem in a wide area of possible solutions is uncertain. A new approach to the use of genetic algorithms is proposed to overcome this uncertainty. The basis of this approach is a genetic algorithm nested within another genetic algorithm.
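A bare-bones real-coded genetic algorithm, minimizing a toy fitting error, sketches the inner building block of such a scheme. This is not the authors' nested-GA implementation; the population size, mutation scale, and objective are illustrative assumptions:

```python
import numpy as np

def ga_minimize(f, bounds, pop_size=60, gens=150, seed=5):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, len(bounds)))
    for _ in range(gens):
        fitness = np.array([f(ind) for ind in pop])
        parents = pop[np.argsort(fitness)][: pop_size // 2]   # truncation selection
        idx = rng.integers(0, len(parents), (pop_size, 2))
        alpha = rng.random((pop_size, len(bounds)))
        # blend crossover between two random parents, then Gaussian mutation
        children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
        children += rng.normal(0.0, 0.02 * (hi - lo), children.shape)
        pop = np.clip(children, lo, hi)
        pop[0] = parents[0]                                   # elitism
    fitness = np.array([f(ind) for ind in pop])
    return pop[np.argmin(fitness)]

best = ga_minimize(lambda p: (p[0] - 1.3) ** 2 + (p[1] + 0.7) ** 2,
                   [(-5, 5), (-5, 5)])
print(best)   # near [1.3, -0.7]
```

In a hysteresis calibration, `f` would instead measure the mismatch between a measured B-H loop and the loop produced by the J-A model for a candidate parameter vector.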
Propagation channel characterization, parameter estimation, and modeling for wireless communications
Yin, Xuefeng
2016-01-01
Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...
Estimating model parameters in nonautonomous chaotic systems using synchronization
Yang, Xiaoli; Xu, Wei; Sun, Zhongkui
2007-05-01
In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular, nonautonomous chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems, and then some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are analytically derived to ensure precise evaluation of unknown parameters and identical synchronization between the concerned experimental system and its corresponding receiver one. Examples are presented by employing a parametrically excited 4D new oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique can be favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of noise strength in simulation.
Estimating model parameters in nonautonomous chaotic systems using synchronization
Energy Technology Data Exchange (ETDEWEB)
Yang, Xiaoli [Department of Applied Mathematics, Northwestern Polytechnical University, Xi' an 710072 (China)]. E-mail: yangxl205@mail.nwpu.edu.cn; Xu, Wei [Department of Applied Mathematics, Northwestern Polytechnical University, Xi' an 710072 (China); Sun, Zhongkui [Department of Applied Mathematics, Northwestern Polytechnical University, Xi' an 710072 (China)
2007-05-07
In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular, nonautonomous chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems, and then some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are analytically derived to ensure precise evaluation of unknown parameters and identical synchronization between the concerned experimental system and its corresponding receiver one. Examples are presented by employing a parametrically excited 4D new oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique can be favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of noise strength in simulation.
Singularity of Some Software Reliability Models and Parameter Estimation Method
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
According to the principle, “The failure data is the basis of software reliability analysis”, we built a software reliability expert system (SRES) by adopting the artificial intelligence technology. By reasoning out the conclusion from the fitting results of failure data of a software project, the SRES can recommend users “the most suitable model” as a software reliability measurement model. We believe that the SRES can overcome the inconsistency in applications of software reliability models well. We report investigation results of singularity and parameter estimation methods of experimental models in SRES.
Directory of Open Access Journals (Sweden)
Jonathan R Karr
2015-05-01
Full Text Available Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.
Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.
Kim, Seock-Ho; Cohen, Allan S.
The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…
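The item response function at the heart of the study is the two-parameter logistic (2PL) model, which gives the probability of a correct response for ability θ, item discrimination a, and item difficulty b:

```python
import numpy as np

def p_correct(theta, a, b):
    # 2PL item response function: P(correct | theta) = 1 / (1 + exp(-a*(theta - b)))
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

print(p_correct(0.0, 1.0, 0.0))   # 0.5 when ability equals difficulty
```

Gibbs sampling and marginal Bayesian estimation both target the posterior of (a, b) for each item and θ for each examinee under this likelihood; they differ in how the joint posterior is explored.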
Parameter estimation in a spatial unit root autoregressive model
Baran, Sándor
2011-01-01
Spatial autoregressive model $X_{k,\\ell}=\\alpha X_{k-1,\\ell}+\\beta X_{k,\\ell-1}+\\gamma X_{k-1,\\ell-1}+\\epsilon_{k,\\ell}$ is investigated in the unit root case, that is when the parameters are on the boundary of the domain of stability that forms a tetrahedron with vertices $(1,1,-1), \\ (1,-1,1),\\ (-1,1,1)$ and $(-1,-1,-1)$. It is shown that the limiting distribution of the least squares estimator of the parameters is normal and the rate of convergence is $n$ when the parameters are in the faces or on the edges of the tetrahedron, while on the vertices the rate is $n^{3/2}$.
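A quick numerical sketch of the least squares estimator studied, here on a stationary field well inside the stability tetrahedron rather than the unit-root boundary case; the coefficient values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
a, b, c = 0.4, 0.3, -0.2        # strictly inside the stability region
n = 200
X = np.zeros((n, n))
for k in range(1, n):
    for l in range(1, n):
        X[k, l] = a * X[k - 1, l] + b * X[k, l - 1] + c * X[k - 1, l - 1] + rng.normal()

# Stack the three lagged neighbours as regressors over all interior points
Y = X[1:, 1:].ravel()
Z = np.column_stack([X[:-1, 1:].ravel(), X[1:, :-1].ravel(), X[:-1, :-1].ravel()])
est = np.linalg.lstsq(Z, Y, rcond=None)[0]
print(est)   # near [0.4, 0.3, -0.2]
```

On the boundary of the tetrahedron the same estimator remains normal but converges faster (rate n, or n^{3/2} at the vertices), which is the content of the paper.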
Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models
Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea
2014-05-01
Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represents a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.
Directory of Open Access Journals (Sweden)
Baker Syed
2011-01-01
Full Text Available Abstract In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison.
Baker, Syed Murtuza; Poskar, C Hart; Junker, Björn H
2011-10-11
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison.
Accelerated gravitational wave parameter estimation with reduced order modeling.
Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel
2015-02-20
Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.
Parameter and State Estimator for State Space Models
Directory of Open Access Journals (Sweden)
Ruifeng Ding
2014-01-01
Full Text Available This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
Parameter and state estimator for state space models.
Ding, Ruifeng; Zhuang, Linfan
2014-01-01
This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
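The elimination idea can be sketched for a second-order SISO system: substituting the state out of the model leaves a difference equation in inputs and outputs only, whose coefficients least squares recovers exactly in this noise-free illustration (the coefficient values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(8)
# Input-output form after state elimination:
# y(k) = a1*y(k-1) + a2*y(k-2) + b1*u(k-1) + b2*u(k-2)
a1, a2, b1, b2 = 1.2, -0.5, 1.0, 0.4      # stable: poles at 0.6 +/- 0.37j
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = a1 * y[k - 1] + a2 * y[k - 2] + b1 * u[k - 1] + b2 * u[k - 2]

# Regression matrix of lagged outputs and inputs, solved by least squares
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]
print(theta)   # recovers [1.2, -0.5, 1.0, 0.4]
```

With measurement noise the same regression gives approximate estimates, and the states can then be reconstructed from the identified parameters and the input-output data, as the paper describes.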
Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui
2017-03-01
The uncertainties in values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto model parameters. The signal-to-noise ratio of error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples the stochastic atmosphere and slow-varying ocean, this study examines the sensitivity of state-parameter covariance on the accuracy of estimated model states in different model components of a coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere with a chaotic nature is the major source of the inaccuracy of estimated state-parameter covariance. Thus, enhancing the estimation accuracy of atmospheric states is very important for the success of coupled model parameter estimation, especially for the parameters in the air-sea interaction processes. The impact of chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides a guideline when real observations are used to optimize model parameters in a coupled general circulation model for improving climate analysis and predictions.
Dynamic systems models new methods of parameter and state estimation
2016-01-01
This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...
Parameter estimation and hypothesis testing in linear models
Koch, Karl-Rudolf
1999-01-01
The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have been also added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...
Parameter estimation for models of ligninolytic and cellulolytic enzyme kinetics
Energy Technology Data Exchange (ETDEWEB)
Wang, Gangsheng [ORNL; Post, Wilfred M [ORNL; Mayes, Melanie [ORNL; Frerichs, Joshua T [ORNL; Jagadamma, Sindhu [ORNL
2012-01-01
While soil enzymes have been explicitly included in soil organic carbon (SOC) decomposition models, there is a serious lack of suitable data for model parameterization. This study provides well-documented enzymatic parameters for application in enzyme-driven SOC decomposition models from a compilation and analysis of published measurements. In particular, we developed appropriate kinetic parameters for five typical ligninolytic and cellulolytic enzymes (β-glucosidase, cellobiohydrolase, endo-glucanase, peroxidase, and phenol oxidase). The kinetic parameters included the maximum specific enzyme activity (Vmax) and half-saturation constant (Km) in the Michaelis-Menten equation. The activation energy (Ea) and the pH optimum and sensitivity (pHopt and pHsen) were also analyzed. pHsen was estimated by fitting an exponential-quadratic function. The Vmax values, often presented in different units under various measurement conditions, were converted into the same units at a reference temperature (20 °C) and pHopt. Major conclusions are: (i) Both Vmax and Km were log-normally distributed, with no significant difference in Vmax exhibited between enzymes originating from bacteria or fungi. (ii) No significant difference in Vmax was found between cellulases and ligninases; however, there was a significant difference in Km between them. (iii) Ligninases had higher Ea values and lower pHopt than cellulases; the average ratio of pHsen to pHopt ranged from 0.3 to 0.4 for the five enzymes, which means that an increase or decrease of 1.1 to 1.7 pH units from pHopt would reduce Vmax by 50%. (iv) Our analysis indicated that the Vmax values from lab measurements with purified enzymes were 1 to 2 orders of magnitude higher than those for use in SOC decomposition models under field conditions.
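Estimating Vmax and Km from rate-versus-substrate data is a standard nonlinear regression. The substrate concentrations and parameter values below are made-up illustrations, not values from the compiled data set:

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    # v = Vmax * S / (Km + S): rate saturates at Vmax, half-maximal at S = Km
    return Vmax * S / (Km + S)

rng = np.random.default_rng(7)
S = np.array([0.5, 1, 2, 5, 10, 20, 50, 100.0])          # substrate concentrations
v = michaelis_menten(S, 12.0, 4.0) + rng.normal(0, 0.2, S.size)
popt, _ = curve_fit(michaelis_menten, S, v, p0=[10, 2])
print(popt)   # near [12.0, 4.0]
```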
Parameter estimation for stochastic hybrid model applied to urban traffic flow estimation
2015-01-01
This study proposes a novel data-based approach for estimating the parameters of a stochastic hybrid model describing the traffic flow in an urban traffic network with signalized intersections. The model represents the evolution of the traffic flow rate, measuring the number of vehicles passing a given location per time unit. This traffic flow rate is described using a mode-dependent first-order autoregressive (AR) stochastic process. The parameters of the AR process take different values dep...
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories.
Variational methods to estimate terrestrial ecosystem model parameters
Delahaies, Sylvain; Roulstone, Ian
2016-04-01
Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Then soil chemistry and a non-negligible amount of time transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4DVAR) to estimate model parameters and initial carbon stocks for DALEC and to quantify the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
Estimating Cosmological Parameter Covariance
Taylor, Andy
2014-01-01
We investigate the bias and error in estimates of the cosmological parameter covariance matrix, due to sampling or modelling the data covariance matrix, for likelihood width and peak scatter estimators. We show that these estimators do not coincide unless the data covariance is exactly known. For sampled data covariances, with Gaussian distributed data and parameters, the parameter covariance matrix estimated from the width of the likelihood has a Wishart distribution, from which we derive the mean and covariance. This mean is biased and we propose an unbiased estimator of the parameter covariance matrix. Comparing our analytic results to a numerical Wishart sampler of the data covariance matrix we find excellent agreement. An accurate ansatz for the mean parameter covariance for the peak scatter estimator is found, and we fit its covariance to our numerical analysis. The mean is again biased and we propose an unbiased estimator for the peak parameter covariance. For sampled data covariances the width estimat...
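A closely related standard result in this literature (the Hartlap et al. correction) rescales the inverse of a sample covariance, which is otherwise a biased precision matrix estimator for Gaussian data. The dimensions and sample size below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(9)
p, n_s = 4, 50                              # parameters, number of samples
true_cov = np.eye(p)
samples = rng.multivariate_normal(np.zeros(p), true_cov, size=n_s)

S = np.cov(samples, rowvar=False)           # sample covariance (n_s - 1 norm)
# E[inv(S)] = (n_s - 1)/(n_s - p - 2) * inv(true_cov), so rescale to debias:
precision_unbiased = (n_s - p - 2) / (n_s - 1) * np.linalg.inv(S)
print(np.diag(precision_unbiased))
```

The correction matters most when the number of samples n_s is not much larger than the dimension p, which is exactly the regime the paper's Wishart analysis addresses.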
Modeling and parameter estimation for hydraulic system of excavator's arm
Institute of Scientific and Technical Information of China (English)
HE Qing-hua; HAO Peng; ZHANG Da-qing
2008-01-01
A retrofitted electro-hydraulic proportional system for a hydraulic excavator was introduced first. According to the principle and characteristics of the load-independent flow distribution (LUDV) system, taking the boom hydraulic system as an example and ignoring the leakage of the hydraulic cylinder and the mass of oil in it, a force equilibrium equation and a continuity equation of the hydraulic cylinder were set up. Based on the flow equation of the electro-hydraulic proportional valve, the pressure passing through the valve and the pressure difference were tested and analyzed. The results show that the pressure difference does not change with load, and it approximates 2.0 MPa. Then, assuming the flow across the valve is directly proportional to spool displacement and is not influenced by load, a simplified model of the electro-hydraulic system was put forward. At the same time, by analyzing the structure and load-bearing of the boom instrument, and combining the moment equivalent equation of the manipulator with the law of rotation, estimation methods and equations for parameters such as the equivalent mass and bearing force of the hydraulic cylinder were set up. Finally, the step response of the flow of the boom cylinder was tested when the electro-hydraulic proportional valve was controlled by a step current. Based on the experimental curve, the flow gain coefficient of the valve is identified as 2.825×10⁻⁴ m³/(s·A) and the model is verified.
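With the identified flow gain and the stated assumption that flow is proportional to the valve current, the simplified valve model reduces to a one-line calculation; the current value below is an arbitrary illustration, not a figure from the paper:

```python
# Simplified proportional-valve model: Q = Kq * i
Kq = 2.825e-4          # m^3/(s*A), flow gain identified in the paper
i = 0.5                # A, illustrative coil current
Q = Kq * i             # resulting cylinder flow, m^3/s
print(Q)               # 1.4125e-4
```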
Estimating winter wheat phenological parameters: Implications for crop modeling
Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...
Retrospective forecast of ETAS model with daily parameters estimate
Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang
2016-04-01
We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on the daily updating of free parameters during the background, learning and test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecasted events, due to model parameters being kept fixed during the test. Moreover, the absence in the learning catalog of an event with a magnitude similar to that of the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development, we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model where the parameters remain fixed during the test period.
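For context, a temporal ETAS model forecasts seismicity through a conditional intensity that adds Omori-Utsu aftershock decay to a background rate. The sketch below (Python, standard library; all parameter values are hypothetical) evaluates that intensity; the daily-updating scheme discussed here would re-estimate μ, K, α, c and p each day from the growing catalog:

```python
import math

def etas_rate(t, catalog, mu, K, alpha, c, p, m0):
    """Conditional intensity lambda(t) of a temporal ETAS model.

    catalog: list of (t_i, m_i) event times and magnitudes with t_i < t.
    mu: background rate; K, alpha: productivity; c, p: Omori-Utsu decay;
    m0: reference (cutoff) magnitude.  All values here are illustrative.
    """
    rate = mu
    for t_i, m_i in catalog:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

catalog = [(0.0, 6.0), (1.5, 4.5)]                       # hypothetical events
params = dict(mu=0.1, K=0.02, alpha=1.0, c=0.01, p=1.1, m0=3.0)
r1 = etas_rate(2.0, catalog, **params)                   # shortly after the events
r2 = etas_rate(10.0, catalog, **params)                  # much later: decayed
```

The intensity decays toward the background rate μ as time since the triggering events grows.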
Uncertainty of Modal Parameters Estimated by ARMA Models
DEFF Research Database (Denmark)
Jensen, Jacob Laigaard; Brincker, Rune; Rytter, Anders
1990-01-01
In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the parameters...... by a simulation study of a lightly damped single degree of freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and the number of sampled points may play a significant role with respect to the statistical errors. Furthermore...
Mattern, Jann Paul; Edwards, Christopher A.
2017-01-01
Parameter estimation is an important part of numerical modeling and often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy to implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
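A cost function aggregating several observation types can be sketched as a variance-normalized sum of mean squared misfits. This is an illustrative formulation only; the paper's exact cost function and weighting may differ, and the variable names below are hypothetical:

```python
def cost(model, obs):
    """Combine several observation types into one scalar cost by normalizing
    each type's mean squared misfit with that type's observed variance."""
    total = 0.0
    for key in obs:
        o, m = obs[key], model[key]
        mean_o = sum(o) / len(o)
        var_o = sum((v - mean_o) ** 2 for v in o) / len(o)
        mse = sum((a - b) ** 2 for a, b in zip(m, o)) / len(o)
        total += mse / var_o                 # dimensionless, comparable across types
    return total

obs = {"chlorophyll": [1.0, 2.0, 3.0], "nitrate": [10.0, 12.0, 14.0]}
perfect = cost(obs, obs)                                       # 0 for an exact match
worse = cost({"chlorophyll": [1.5, 2.0, 3.0], "nitrate": [10.0, 12.0, 14.0]}, obs)
```

Normalizing by the observed variance keeps data types with different units and magnitudes on a comparable footing.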
Parameter estimation for the subcritical Heston model based on discrete time observations
2014-01-01
We study the asymptotic properties of some (essentially conditional least squares) parameter estimators for the subcritical Heston model based on discrete time observations, derived from conditional least squares estimators of some modified parameters.
Improved parameter estimation for hydrological models using weighted object functions
Stein, A.; Zaadnoordijk, W.J.
1999-01-01
This paper discusses the sensitivity of calibration of hydrological model parameters to different objective functions. Several functions are defined with weights depending upon the hydrological background. These are compared with an objective function based upon kriging. Calibration is applied to pi
Institute of Scientific and Technical Information of China (English)
Youlong XIA; Zong-Liang YANG; Paul L. STOFFA; Mrinal K. SEN
2005-01-01
Most previous land-surface model calibration studies have defined global ranges for their parameters to search for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges, as well as model complexities, on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by applying Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic type structure. The BSI is an uncertainty estimation technique based on Bayes theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted and 50 000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes for most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. Use of parameter ranges and model complexities has significant impacts on the frequency distribution of parameters, marginal posterior probability density functions, and estimates of uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has more significant impacts on parameter and uncertainty estimations.
Liu, Jingwei; Liu, Yi; Xu, Meizhi
2015-01-01
A parameter estimation method for the Jelinski-Moranda (JM) model based on weighted nonlinear least squares (WNLS) is proposed. The formulae for resolving the parameter WNLS estimation (WNLSE) are derived, and the empirical weight function and the heteroscedasticity problem are discussed. The effects of parameter estimation method selection, among the maximum likelihood estimation (MLE) method, the least squares estimation (LSE) method and the weighted nonlinear least squares estimation (WNLSE) method, are al...
Directory of Open Access Journals (Sweden)
Fang-Rong Yan
2014-01-01
Full Text Available Population pharmacokinetic (PPK) models play a pivotal role in quantitative pharmacology studies and are classically analyzed by nonlinear mixed-effects models based on ordinary differential equations. This paper describes the implementation of stochastic differential equations (SDEs) in population pharmacokinetic models, where parameters are estimated by a novel approximation of the likelihood function. This approximation is constructed by combining the MCMC method used in nonlinear mixed-effects modeling with the extended Kalman filter used in SDE models. The analysis and simulation results show that the performance of the likelihood-function approximation for the mixed-effects SDE model and the analysis of population pharmacokinetic data is reliable. The results suggest that the proposed method is feasible for the analysis of population pharmacokinetic data.
Comparison of parameter estimation algorithms in hydrological modelling
DEFF Research Database (Denmark)
Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan
2006-01-01
Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well...... for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and can be trapped in local regions of attraction. The global SCE procedure is, in general, more effective......
House thermal model parameter estimation method for Model Predictive Control applications
van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria
2015-01-01
In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results
A Note on Parameter Estimations of Panel Vector Autoregressive Models with Intercorrelation
Institute of Scientific and Technical Information of China (English)
Jian-hong Wu; Li-xing Zhu; Zai-xing Li
2009-01-01
This note considers parameter estimation for panel vector autoregressive models with intercorrelation. Conditional least squares estimators are derived and the asymptotic normality is established. A simulation is carried out for illustration.
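Conditional least squares for a vector autoregression reduces to regressing each observation vector on its predecessor. A minimal VAR(1) sketch (Python with NumPy; it omits the panel and intercorrelation structure treated in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])                 # stable VAR(1) coefficient matrix
T = 5000
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A @ x[t - 1] + 0.1 * rng.standard_normal(2)   # x_t = A x_{t-1} + e_t

# Conditional least squares: regress x_t on x_{t-1}
X_lag, X_now = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X_lag, X_now, rcond=None)[0].T
```

With a long stationary sample, the CLS estimate recovers the coefficient matrix closely, consistent with the asymptotic normality result stated above.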
Zhang, Xuefeng; Zhang, Shaoqing; Liu, Zhengyu; Wu, Xinrong; Han, Guijun
2016-09-01
Imperfect physical parameterization schemes are an important source of model bias in a coupled model and adversely impact the performance of model simulation. With a coupled ocean-atmosphere-land model of intermediate complexity, the impact of imperfect parameter estimation on model simulation with biased physics has been studied. Here, the biased physics is induced by using different outgoing longwave radiation schemes in the assimilation and "truth" models. To mitigate model bias, the parameters employed in the biased longwave radiation scheme are optimized using three different methods: least-squares parameter fitting (LSPF), single-valued parameter estimation and geography-dependent parameter optimization (GPO), the last two of which belong to the coupled model parameter estimation (CMPE) method. While the traditional LSPF method is able to improve the performance of coupled model simulations, the optimized parameter values from the CMPE, which uses the coupled model dynamics to project observational information onto the parameters, further reduce the bias of the simulated climate arising from biased physics. Further, parameters estimated by the GPO method can properly capture the climate-scale signal to improve the simulation of climate variability. These results suggest that the physical parameter estimation via the CMPE scheme is an effective approach to restrain the model climate drift during decadal climate predictions using coupled general circulation models.
Application of Parameter Estimation for Diffusions and Mixture Models
DEFF Research Database (Denmark)
Nolsøe, Kim
with the posterior score function. From an application point of view this methodology is easy to apply, since the optimal estimating function G(θ; Xt1, ..., Xtn) is equal to the classical optimal estimating function, plus a correction term which takes into account the prior information. The methodology is particularly...... useful in situations where prior information is available and only few observations are present. The resulting estimators in some sense have better properties than the classical estimators. The second idea is to formulate Michael Sørensen's method of "prediction-based estimating functions" for measurement...... from a posterior distribution. The sampling algorithm is constructed from a Markov chain which allows the dimension of each sample to vary; this is obtained by utilizing the reversible jump methodology proposed by Peter Green. Each sample is constructed such that the corresponding structures...
Bayesian estimation of regularization parameters for deformable surface models
Energy Technology Data Exchange (ETDEWEB)
Cunningham, G.S.; Lehovich, A.; Hanson, K.M.
1999-02-20
In this article the authors build on their past attempts to reconstruct a 3D, time-varying bolus of radiotracer from first-pass data obtained by the dynamic SPECT imager, FASTSPECT, built by the University of Arizona. The object imaged is a CardioWest total artificial heart. The bolus is entirely contained in one ventricle and its associated inlet and outlet tubes. The model for the radiotracer distribution at a given time is a closed surface parameterized by 482 vertices that are connected to make 960 triangles, with nonuniform intensity variations of radiotracer allowed inside the surface on a voxel-to-voxel basis. The total curvature of the surface is minimized through the use of a weighted prior in the Bayesian framework, as is the weighted norm of the gradient of the voxellated grid. MAP estimates for the vertices, interior intensity voxels and background count level are produced. The strengths of the priors, or hyperparameters, are determined by maximizing the probability of the data given the hyperparameters, called the evidence. The evidence is calculated by first assuming that the posterior is approximately normal in the values of the vertices and voxels, and then by evaluating the integral of the multi-dimensional normal distribution. This integral (which requires evaluating the determinant of a covariance matrix) is computed by applying a recent algorithm from Bai et al. that calculates the needed determinant efficiently. They demonstrate that the radiotracer is highly inhomogeneous in early time frames, as suspected in earlier reconstruction attempts that assumed a uniform intensity of radiotracer within the closed surface, and that the optimal choice of hyperparameters is substantially different for different time frames.
Sunbuloglu, Emin; Bozdag, Ergun; Toprak, Tuncer; Islak, Civan
2013-01-01
This study aims to establish a method of experimental parameter estimation for a large-deformation, nonlinear viscoelastic, continuous fibre-reinforced composite material model. Specifically, arterial tissue was investigated during the experimental research and parameter estimation studies, owing to the medical, scientific and socio-economic importance of soft tissue research. Using analytical formulations for specimens under combined inflation/extension/torsion of thick-walled cylindrical tubes, in vitro experiments were carried out on fresh sheep arterial segments, and parameter estimation procedures were applied to the experimental data. Model restrictions were pointed out using the outcomes of parameter estimation. Further studies that could be developed are discussed.
Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model
Lamsal, Sunil
2015-01-01
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…
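The model targeted by all three estimation procedures is the three-parameter logistic item response function. A minimal sketch (Python, standard library; the parameter values are illustrative only):

```python
import math

def p_correct(theta, a, b, c):
    """Three-parameter logistic (3PL) IRT model: probability of a correct
    response at ability theta, with discrimination a, difficulty b, and
    guessing (lower asymptote) c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the probability sits midway between c and 1
p_at_b = p_correct(0.0, a=1.2, b=0.0, c=0.2)
```

Estimation then amounts to choosing (a, b, c) per item, and the ability values, that best explain the observed right/wrong responses, which is what the three procedures compared here do in different ways.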
THE SUPERIORITY OF EMPIRICAL BAYES ESTIMATION OF PARAMETERS IN PARTITIONED NORMAL LINEAR MODEL
Institute of Scientific and Technical Information of China (English)
Zhang Weiping; Wei Laisheng
2008-01-01
In this article, the empirical Bayes (EB) estimators are constructed for the estimable functions of the parameters in partitioned normal linear model. The superiorities of the EB estimators over ordinary least-squares (LS) estimator are investigated under mean square error matrix (MSEM) criterion.
Multi-Variable Model-Based Parameter Estimation Model for Antenna Radiation Pattern Prediction
Deshpande, Manohar D.; Cravey, Robin L.
2002-01-01
A new procedure is presented to develop a multi-variable model-based parameter estimation (MBPE) model to predict the far-field intensity of an antenna. By performing the MBPE model development procedure on a single variable at a time, the present method requires the solution of smaller matrices. The utility of the present method is demonstrated by determining the far-field intensity due to a dipole antenna over a frequency range of 100-1000 MHz and an elevation angle range of 0-90 degrees.
Tian, Li-Ping; Liu, Lizhi; Wu, Fang-Xiang
2010-01-01
Derived from biochemical principles, molecular biological systems can be described by a group of differential equations. Generally these differential equations contain fractional functions plus polynomials (which we call the improper fractional model) as reaction rates. As a result, molecular biological systems are nonlinear in both parameters and states. It is well known that estimating parameters that enter a model nonlinearly is challenging. However, in fractional functions both the denominator and numerator are linear in the parameters, and polynomials are also linear in the parameters. Based on this observation, we develop an iterative linear least squares method for estimating parameters in biological systems modeled by improper fractional functions. The basic idea is to transform the optimization of a nonlinear least squares objective function into iteratively solving a sequence of linear least squares problems. The developed method is applied to the estimation of parameters in a metabolism system. The simulation results show the superior performance of the proposed method for estimating parameters in such molecular biological systems.
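The idea can be sketched with a Michaelis-Menten rate v = Vmax·S/(Km + S), a simple fractional model standing in for the improper fractional rates discussed here (noise-free data for clarity; not the paper's metabolism system). Multiplying through by the denominator gives equations linear in (Vmax, Km), which are re-solved with weights taken from the previous denominator estimate:

```python
import numpy as np

rng = np.random.default_rng(2)
Vmax, Km = 2.0, 0.5
S = np.linspace(0.1, 5.0, 20)
v = Vmax * S / (Km + S)                  # noise-free rates for illustration

# v*(Km + S) = Vmax*S  ->  Vmax*S - v*Km = v*S, linear in (Vmax, Km)
Km_est = 1.0                             # initial guess used only for the weights
for _ in range(10):
    w = 1.0 / (Km_est + S)               # reweight by the current denominator
    design = np.column_stack([S, -v]) * w[:, None]
    rhs = (v * S) * w
    Vmax_est, Km_est = np.linalg.lstsq(design, rhs, rcond=None)[0]
```

With noisy data the reweighting matters; with exact data the linear system is satisfied exactly and the true parameters are recovered immediately.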
Energy Technology Data Exchange (ETDEWEB)
Bowong, Samuel, E-mail: sbowong@gmail.co [Laboratory of Applied Mathematics, Department of Mathematics and Computer Science, Faculty of Science, University of Douala, P.O. Box 24157 Douala (Cameroon); Potsdam Institute for Climate Impact Research (PIK), Telegraphenberg A 31, 14412 Potsdam (Germany); Kurths, Jurgen [Potsdam Institute for Climate Impact Research (PIK), Telegraphenberg A 31, 14412 Potsdam (Germany); Department of Physics, Humboldt Universität zu Berlin, 12489 Berlin (Germany)
2010-10-04
We propose a method based on synchronization to identify the parameters and to estimate the underlying variables of an epidemic model from real data. We suggest an adaptive synchronization method based on an observer approach with an effective guidance parameter in the update rule design, using only real data. In order to validate the identifiability and estimation results, numerical simulations of a tuberculosis (TB) model using real data from the Centre region of Cameroon are performed to estimate the parameters and variables. This study shows that some tools of synchronization of nonlinear systems can help to deal with the parameter and state estimation problem in the field of epidemiology. We exploit the close link between mathematical modelling, structural identifiability analysis, synchronization, and parameter estimation to obtain biological insights into the system modelled.
Parameter estimation for LLDPE gas-phase reactor models
Directory of Open Access Journals (Sweden)
G. A. Neumann
2007-06-01
Full Text Available Product development and advanced control applications require models with good predictive capability. However, in some cases it is not possible to obtain good quality phenomenological models due to the lack of data or the presence of important unmeasured effects. The use of empirical models requires less investment in modeling, but implies the need for larger amounts of experimental data to generate models with good predictive capability. In this work, nonlinear phenomenological and empirical models were compared with respect to their capability to predict the melt index and polymer yield of a low-density polyethylene production process consisting of two fluidized bed reactors connected in series. To adjust the phenomenological model, optimization algorithms based on the flexible polyhedron method of Nelder and Mead showed the best efficiency. To adjust the empirical model, the PLS model was more appropriate for polymer yield, while the melt index required more nonlinearity, as in the QPLS models. In the comparison between the two types of models, better results were obtained with the empirical models.
Zimmer, Christoph; Sahle, Sven
2016-04-01
Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equations based Bayesian approach and a chemical master equation based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior, as well as an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity and that the specific choice of this algorithm shows only minor performance differences.
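The naive least-squares baseline examined here can be sketched on the immigration-death model: simulate realizations with Gillespie's algorithm, average them, and fit the deterministic ODE mean x(t) = (λ/μ)(1 − e^(−μt)) by a coarse grid search (Python, standard library; parameter values and grids are hypothetical, and the paper fits single realizations rather than an averaged path):

```python
import math, random

random.seed(3)
lam_true, mu_true = 10.0, 0.5
sample_times = [0.5 * k for k in range(1, 41)]           # observe every 0.5 up to t=20

def gillespie(lam, mu):
    """One realization of the immigration-death process, sampled on a grid."""
    t, x, path, idx = 0.0, 0, [], 0
    while idx < len(sample_times):
        rate = lam + mu * x                              # total event rate
        t += random.expovariate(rate)
        while idx < len(sample_times) and sample_times[idx] <= t:
            path.append(x)                               # record pre-event state
            idx += 1
        if random.random() < lam / rate:
            x += 1                                       # immigration
        else:
            x -= 1                                       # death
    return path

n_rep = 200
acc = [0.0] * len(sample_times)
for _ in range(n_rep):
    for k, v in enumerate(gillespie(lam_true, mu_true)):
        acc[k] += v
mean_path = [a / n_rep for a in acc]                     # empirical mean trajectory

def mean_ode(lam, mu, t):
    return lam / mu * (1.0 - math.exp(-mu * t))          # ODE mean from x(0)=0

best = None
for lam in [8.0 + 0.25 * i for i in range(17)]:          # lambda grid 8..12
    for mu in [0.3 + 0.02 * j for j in range(21)]:       # mu grid 0.3..0.7
        sse = sum((mean_ode(lam, mu, t) - m) ** 2
                  for t, m in zip(sample_times, mean_path))
        if best is None or sse < best[0]:
            best = (sse, lam, mu)
_, lam_hat, mu_hat = best
```

On a single noisy realization this least-squares fit degrades, which is exactly the regime where the stochastic-aware methods compared in the paper are expected to pay off.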
A robust methodology for kinetic model parameter estimation for biocatalytic reactions
DEFF Research Database (Denmark)
Al-Haque, Naweed; Andrade Santacoloma, Paloma de Gracia; Lima Afonso Neto, Watson;
2012-01-01
Effective estimation of parameters in biocatalytic reaction kinetic expressions is very important when building process models to enable evaluation of process technology options and alternative biocatalysts. The kinetic models used to describe enzyme-catalyzed reactions generally include several...
PARAMETER-ESTIMATION FOR ARMA MODELS WITH INFINITE VARIANCE INNOVATIONS
MIKOSCH, T; GADRICH, T; KLUPPELBERG, C; ADLER, RJ
We consider a standard ARMA process of the form φ(B)X(t) = θ(B)Z(t), where the innovations Z(t) belong to the domain of attraction of a stable law, so that neither the Z(t) nor the X(t) have a finite variance. Our aim is to estimate the coefficients of φ and θ. Since maximum likelihood
An improved method for nonlinear parameter estimation: a case study of the Rössler model
He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan
2016-08-01
Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) present a new scheme for nonlinear parameter estimation, and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components in the dynamical equations to estimate the parameters in a single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
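The component-wise idea can be illustrated on the Rössler system itself: each equation is linear in its own parameters, so with all time series known, a and then (b, c) can be estimated one component at a time (here by linear least squares on finite-difference derivatives, a minimal sketch in Python with NumPy, not the evolutionary algorithm used in the paper):

```python
import numpy as np

a, b, c = 0.2, 0.2, 5.7                 # true Roessler parameters
dt, n = 0.01, 20000

def f(s):
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

s = np.array([1.0, 1.0, 1.0])
traj = np.empty((n, 3))
for k in range(n):                      # fourth-order Runge-Kutta integration
    traj[k] = s
    k1 = f(s); k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

x, y, z = traj[:, 0], traj[:, 1], traj[:, 2]
dy = (y[2:] - y[:-2]) / (2 * dt)        # central-difference derivatives
dz = (z[2:] - z[:-2]) / (2 * dt)
xm, ym, zm = x[1:-1], y[1:-1], z[1:-1]

# y-equation: dy/dt = x + a*y   ->  estimate a alone
a_hat = np.sum((dy - xm) * ym) / np.sum(ym * ym)
# z-equation: dz/dt = b + z*x - c*z  ->  estimate (b, c) together
A2 = np.column_stack([np.ones_like(zm), -zm])
b_hat, c_hat = np.linalg.lstsq(A2, dz - zm * xm, rcond=None)[0]
```

Because each stage solves a small problem in one or two parameters, the search (whether least squares as here, or the EA of the paper) is far cheaper than a joint search over all parameters at once.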
Direct Estimation of Physical Parameters in Nonlinear Loudspeaker Models
DEFF Research Database (Denmark)
Knudsen, Morten
1994-01-01
For better loudspeaker unit and loudspeaker system design, improvements of the traditional linear, low-frequency model of the electro-dynamic loudspeaker are essential.
Mathematical modelling in blood coagulation : simulation and parameter estimation
W.J.H. Stortelder (Walter); P.W. Hemker (Piet); H.C. Hemker
1997-01-01
This paper describes the mathematical modelling of a part of the blood coagulation mechanism. The model includes the activation of factor X by a purified enzyme from Russell's Viper Venom (RVV), factor V and prothrombin, and also comprises the inactivation of the products formed. In this
Parameter Estimation Through Ignorance
Du, Hailiang
2015-01-01
Dynamical modelling lies at the heart of our understanding of physical systems. Its role in science is deeper than mere operational forecasting, in that it allows us to evaluate the adequacy of the mathematical structure of our models. Despite the importance of model parameters, there is no general method of parameter estimation outside linear systems. A new, relatively simple method of parameter estimation for nonlinear systems is presented, based on variations in the accuracy of probability forecasts. It is illustrated on the Logistic Map, the Hénon Map and the 12-D Lorenz96 flow, and its ability to outperform linear least squares in these systems is explored at various noise levels and sampling rates. As expected, it is more effective when the forecast error distributions are non-Gaussian. The new method selects parameter values by minimizing a proper, local skill score for continuous probability forecasts as a function of the parameter values. This new approach is easier to implement in practice than alter...
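The idea of selecting parameters by scoring probability forecasts can be sketched on the Logistic Map: each candidate parameter value issues Gaussian one-step forecast densities, and the value minimizing the mean ignorance (negative log₂ forecast density) is selected. This is a toy version with a fixed forecast spread, not the paper's full scheme:

```python
import math, random

random.seed(4)
a_true, sigma, n = 3.8, 0.002, 400
x = [0.3]
for _ in range(n):
    x.append(a_true * x[-1] * (1.0 - x[-1]))             # logistic map truth
obs = [v + random.gauss(0.0, sigma) for v in x]          # noisy observations

def ignorance(a):
    """Mean ignorance (-log2 forecast density) of one-step Gaussian forecasts."""
    total = 0.0
    for i in range(n):
        pred = a * obs[i] * (1.0 - obs[i])               # one-step forecast mean
        dens = math.exp(-(obs[i + 1] - pred) ** 2 / (2.0 * sigma ** 2)) \
               / (sigma * math.sqrt(2.0 * math.pi))
        total -= math.log2(max(dens, 1e-300))            # guard against underflow
    return total / n

grid = [3.5 + 0.01 * k for k in range(61)]               # candidate a in [3.5, 4.1]
a_hat = min(grid, key=ignorance)
```

With fixed Gaussian densities this score reduces to weighted least squares; the advantage the paper reports comes from using genuinely probabilistic, possibly non-Gaussian, forecast distributions.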
Parameter estimation and uncertainty assessment in hydrological modelling
DEFF Research Database (Denmark)
Blasone, Roberta-Serena
Rational and efficient water resources management requires insight into and understanding of the hydrological processes, as well as accurate estimates of the water volumes available in both surface water and groundwater reservoirs. For this purpose, hydrological models are an indispensable tool. Over the last 10......-20 years, much research has been devoted to hydrological processes and, in particular, to the implementation of this knowledge in numerical model systems. This has led to models of increasing complexity. At the same time, a number of different techniques for estimating model parameters and for assessing the uncertainty of model predictions...... obstacles to this have been the long computation times and extensive data requirements that characterize this type of model, and which pose a major problem for recursive application of the models. In addition, the complex models are usually not freely available in the same way as the simple rainfall...
Continuum model for masonry: Parameter estimation and validation
Lourenço, P.B.; Rots, J.G.; Blaauwendraad, J.
1998-01-01
A novel yield criterion that includes different strengths along each material axis is presented. The criterion includes two different fracture energies in tension and two different fracture energies in compression. The ability of the model to represent the inelastic behavior of orthotropic materials
Development of simple kinetic models and parameter estimation for ...
African Journals Online (AJOL)
PANCHIGA
Key words: Exponential feed, growth modeling, Monod kinetic equation, Pichia pastoris, recombinant human ... Methanol was the only energy and carbon source ... A potential explanation for the decline in cell ...
Parameter Estimation for an Electric Arc Furnace Model Using Maximum Likelihood
Directory of Open Access Journals (Sweden)
Jesser J. Marulanda-Durango
2012-12-01
Full Text Available In this paper, we present a methodology for estimating the parameters of a model of an electric arc furnace by using maximum likelihood estimation, one of the most widely used methods for parameter estimation in practical settings. The model of the electric arc furnace that we consider takes into account the non-periodic and non-linear variations in the voltage-current characteristic. We use NETLAB, an open source MATLAB® toolbox, to solve the set of non-linear algebraic equations that relate the parameters to be estimated. Results obtained through simulation of the model in PSCAD™ are contrasted against real measurements taken at the furnace's most critical operating point. We show how the model, with appropriate parameter tuning, captures in great detail the real voltage and current waveforms generated by the system. The results show a maximum error of 5% in the current's root mean square value.
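The maximum likelihood idea in this record can be sketched generically. The following is a minimal illustration, not the paper's arc-furnace model or its NETLAB setup: Gaussian noise around a known sinusoid, with amplitude and noise level recovered by minimizing the negative log-likelihood over a coarse grid. The function name `neg_log_likelihood` and all numeric values are assumptions for the sketch.

```python
import numpy as np

# Synthetic "voltage" data: a known 5 Hz sinusoid plus Gaussian noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
true_amp, true_sigma = 2.0, 0.3
v = true_amp * np.sin(2 * np.pi * 5 * t) + rng.normal(0.0, true_sigma, t.size)

def neg_log_likelihood(amp, sigma):
    # Negative log-likelihood of Gaussian noise around the model waveform
    resid = v - amp * np.sin(2 * np.pi * 5 * t)
    return 0.5 * np.sum((resid / sigma) ** 2) + t.size * np.log(sigma)

# A coarse grid search stands in for a proper numerical optimizer.
amps = np.linspace(1.0, 3.0, 81)
sigmas = np.linspace(0.1, 1.0, 91)
nll = np.array([[neg_log_likelihood(a, s) for s in sigmas] for a in amps])
i, j = np.unravel_index(np.argmin(nll), nll.shape)
amp_hat, sigma_hat = amps[i], sigmas[j]
```

In practice a gradient-based optimizer would replace the grid search; the grid merely keeps the sketch dependency-free.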
Parameter Estimation for Dynamic Model of the Financial System
Directory of Open Access Journals (Sweden)
Veronika Novotná
2015-01-01
Full Text Available The economy can be considered a large, open system which is influenced by fluctuations, both internal and external. Based on non-linear dynamics theory, dynamic models of a financial system try to provide a new perspective by explaining the complicated behaviour of the system not as a result of external influences or random behaviour, but as a result of the behaviour and trends of the system's internal structures. The present article analyses a chaotic financial system from the point of view of determining the time delays of the model variables – the interest rate, investment demand, and price index. The theory is briefly explained in the opening sections of the paper and serves as a basis for formulating the relations. The article aims to determine the appropriate length of the time-delay variables in a dynamic model of the financial system, in order to express the real economic situation and respect the effect of the history of the factors under consideration. The delay lengths are determined for time series representing the Euro area, and the methodology is illustrated with a concrete example.
Directory of Open Access Journals (Sweden)
Houda Salhi
2016-01-01
Full Text Available This paper deals with the parameter estimation problem for multivariable nonlinear systems described by MIMO state-space Wiener models. Recursive parameter and state estimation algorithms are presented using the least squares technique, the adjustable model, and Kalman filter theory. The basic idea is to estimate jointly the parameters, the state vector, and the internal variables of MIMO Wiener models based on a specific decomposition technique that extracts the internal vector and avoids problems related to the invertibility assumption. The effectiveness of the proposed algorithms is shown by an illustrative simulation example.
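The recursive least squares update at the core of such recursive identification schemes can be sketched on a generic linear-in-parameters model; the MIMO Wiener decomposition of the paper is not reproduced, and all names and values below are illustrative.

```python
import numpy as np

# Recursive least squares (RLS) for y_k = phi_k^T theta + noise.
rng = np.random.default_rng(1)
theta_true = np.array([0.8, -0.4])
theta = np.zeros(2)          # running parameter estimate
P = np.eye(2) * 1000.0       # large initial covariance = weak prior

for _ in range(2000):
    phi = rng.normal(size=2)                  # regressor vector
    y = phi @ theta_true + 0.05 * rng.normal()
    # Standard RLS gain and covariance update
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = P - np.outer(K, phi @ P)
```

A forgetting factor would be added to track time-varying parameters; with none, RLS converges to the batch least squares solution.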
Monte-Carlo Inversion of Travel-Time Data for the Estimation of Weld Model Parameters
Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.
2011-06-01
The quality of ultrasonic array imagery is adversely affected by uncompensated variations in the medium properties. A method for estimating the parameters of a general model of an inhomogeneous anisotropic medium is described. The model is comprised of a number of homogeneous sub-regions with unknown anisotropy. Bayesian estimation of the unknown model parameters is performed via a Monte-Carlo Markov chain using the Metropolis-Hastings algorithm. Results are demonstrated using simulated weld data.
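The Metropolis-Hastings step used in such Monte-Carlo Markov chain estimation can be sketched on a toy target: the posterior of an unknown mean under Gaussian noise with a flat prior. The weld sub-region parameters of the paper would replace `mu` in a full treatment; all values here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(1.5, 0.5, size=200)    # synthetic "travel-time" data
sigma = 0.5                              # known noise level

def log_post(mu):
    # Log posterior (flat prior): Gaussian log-likelihood up to a constant
    return -0.5 * np.sum((data - mu) ** 2) / sigma**2

mu = 0.0
chain = []
for _ in range(5000):
    prop = mu + 0.1 * rng.normal()       # random-walk proposal
    # Accept with probability min(1, posterior ratio)
    if np.log(rng.random()) < log_post(prop) - log_post(mu):
        mu = prop
    chain.append(mu)
post_mean = np.mean(chain[1000:])        # discard burn-in
```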
Estimation of Model and Parameter Uncertainty For A Distributed Rainfall-runoff Model
Engeland, K.
The distributed rainfall-runoff model Ecomag is applied as a regional model for nine catchments in the NOPEX area in Sweden. Ecomag calculates streamflow on a daily time resolution. The posterior distribution of the model parameters is conditioned on the observed streamflow in all nine catchments, and calculated using Bayesian statistics. The distribution is estimated by Markov chain Monte Carlo (MCMC). The Bayesian method requires a definition of the likelihood of the parameters. Two alternative formulations are used. The first formulation is a subjectively chosen objective function describing the goodness of fit between the simulated and observed streamflow, as used in the GLUE framework. The second formulation is a more statistically correct likelihood function that describes the simulation errors. The simulation error is defined as the difference between log-transformed observed and simulated streamflows. A statistical model for the simulation errors is constructed. Some parameters are dependent on the catchment, while others depend on climate. The statistical and the hydrological parameters are estimated simultaneously. Confidence intervals for the simulated streamflow, due to the uncertainty of the Ecomag parameters, are compared for the two likelihood functions. Confidence intervals based on the statistical model for the simulation errors are also calculated. The results indicate that the parameter uncertainty depends on the formulation of the likelihood function. The subjectively chosen likelihood function gives relatively wide confidence intervals, whereas the 'statistical' likelihood function gives narrower confidence intervals. The statistical model for the simulation errors indicates that the structural errors of the model are at least as important as the parameter uncertainty.
Parameter Estimation and Model Validation of Nonlinear Dynamical Networks
Energy Technology Data Exchange (ETDEWEB)
Abarbanel, Henry [Univ. of California, San Diego, CA (United States); Gill, Philip [Univ. of California, San Diego, CA (United States)
2015-03-31
In the performance period of this work under a DOE contract, the co-PIs, Philip Gill and Henry Abarbanel, developed new methods of statistical data assimilation for problems of DOE interest, including geophysical and biological problems. This included numerical optimization algorithms for variational principles and new parallel-processing Monte Carlo routines for performing the path integrals of statistical data assimilation. These results are summarized in the monograph “Predicting the Future: Completing Models of Observed Complex Systems” by Henry Abarbanel, published by Springer-Verlag in June 2013. Additional results and details have appeared in the peer-reviewed literature.
Estimation of the scale parameter of gamma model in presence of outlier observations
Directory of Open Access Journals (Sweden)
M. E. Ghitany
1990-01-01
Full Text Available This paper considers the Bayesian point estimation of the scale parameter for a two-parameter gamma life-testing model in presence of several outlier observations in the data. The Bayesian analysis is carried out under the assumption of squared error loss function and fixed or random shape parameter.
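Under squared-error loss the Bayes estimator is the posterior mean, which for the gamma rate parameter has a closed form under a conjugate prior. The sketch below assumes a known shape k and a Gamma(a, b) prior on the rate; the paper's outlier-contamination structure is not reproduced, and all values are illustrative.

```python
import numpy as np

# Gamma(k, rate=beta) data with known shape k and unknown rate beta.
rng = np.random.default_rng(3)
k, beta_true = 2.0, 1.5
x = rng.gamma(shape=k, scale=1.0 / beta_true, size=500)

# Conjugate Gamma(a, b) prior on the rate beta gives posterior
# Gamma(a + n*k, b + sum(x)); the Bayes estimate under squared-error
# loss is the posterior mean.
a, b = 1.0, 1.0
beta_bayes = (a + k * x.size) / (b + x.sum())
```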
Directory of Open Access Journals (Sweden)
Liu Gang
2009-01-01
Full Text Available By using the methods of linear algebra and matrix inequality theory, we obtain a characterization of admissible estimators in the general multivariate linear model with respect to an inequality-restricted parameter set. In the classes of homogeneous and general linear estimators, necessary and sufficient conditions for the estimators of the regression coefficient function to be admissible are established.
Zayane, Chadia
2014-06-01
In this paper, we address a special case of state and parameter estimation, where the system can be put in a cascade form allowing the state components and the set of unknown parameters to be estimated separately. Inspired by the nonlinear Balloon hemodynamic model for the functional Magnetic Resonance Imaging problem, we propose a hierarchical approach. The system is divided into two subsystems in cascade. The state and input are first estimated from a noisy measured signal using an adaptive observer. The obtained input is then used to estimate the parameters of a linear system using the modulating functions method. Some numerical results are presented to illustrate the efficiency of the proposed method.
ASYMPTOTIC NORMALITY OF PARAMETERS ESTIMATION IN EV MODEL WITH REPLICATED OBSERVATIONS
Institute of Scientific and Technical Information of China (English)
张三国; 陈希孺
2002-01-01
Building on [1], this paper studies the parameter estimation of one-dimensional linear errors-in-variables (EV) models in the case where replicated observations are available at some experimental points. Asymptotic normality of the estimators is established.
Directory of Open Access Journals (Sweden)
PEIXOTO F. C.
1999-01-01
Full Text Available Fragmentation kinetics is employed to model a continuous reactive mixture. An explicit solution is found and experimental data on the catalytic cracking of a mixture of alkanes are used for deactivation and kinetic parameter estimation.
Markov chain Monte Carlo approach to parameter estimation in the FitzHugh-Nagumo model.
Jensen, Anders Chr; Ditlevsen, Susanne; Kessler, Mathieu; Papaspiliopoulos, Omiros
2012-10-01
Excitability is observed in a variety of natural systems, such as neuronal dynamics, cardiovascular tissues, or climate dynamics. The stochastic FitzHugh-Nagumo model is a prominent example representing an excitable system. To validate the practical use of a model, the first step is to estimate model parameters from experimental data. This is not an easy task because of the inherent nonlinearity necessary to produce the excitable dynamics, and because the two coordinates of the model are moving on different time scales. Here we propose a Bayesian framework for parameter estimation, which can handle multidimensional nonlinear diffusions with large time scale separation. The estimation method is illustrated on simulated data.
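The stochastic FitzHugh-Nagumo dynamics referred to here can be simulated by Euler-Maruyama to produce the kind of data such estimation targets. The parameter values, noise level, and time-scale split below are illustrative assumptions, not the paper's.

```python
import numpy as np

# Euler-Maruyama simulation of a stochastic FitzHugh-Nagumo model:
# fast voltage-like variable v, slow noisy recovery variable u.
rng = np.random.default_rng(4)
eps, gamma, beta, s = 0.1, 1.5, 0.8, 0.3   # eps sets time-scale separation
dt, n = 0.001, 20000
v, u = -1.0, 1.0
vs = np.empty(n)
for i in range(n):
    dv = (v - v**3 - u) / eps * dt                      # fast deterministic drift
    du = (gamma * v - u + beta) * dt + s * np.sqrt(dt) * rng.normal()
    v, u = v + dv, u + du
    vs[i] = v
```

The two-time-scale structure visible here (dividing the v-drift by eps) is exactly what makes naive estimation hard, as the abstract notes.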
Scheibehenne, Benjamin; Pachur, Thorsten
2015-04-01
To be useful, cognitive models with fitted parameters should show generalizability across time and allow accurate predictions of future observations. It has been proposed that hierarchical procedures yield better estimates of model parameters than do nonhierarchical, independent approaches, because the former's estimates for individuals within a group can mutually inform each other. Here, we examine Bayesian hierarchical approaches to evaluating model generalizability in the context of two prominent models of risky choice: cumulative prospect theory (Tversky & Kahneman, 1992) and the transfer-of-attention-exchange model (Birnbaum & Chavez, 1997). Using empirical data of risky choices collected for each individual at two time points, we compared the use of hierarchical versus independent, nonhierarchical Bayesian estimation techniques to assess two aspects of model generalizability: parameter stability (across time) and predictive accuracy. The relative performance of hierarchical versus independent estimation varied across the different measures of generalizability. The hierarchical approach improved parameter stability (in terms of a lower absolute discrepancy of parameter values across time) and predictive accuracy (in terms of deviance; i.e., likelihood). With respect to test-retest correlations and posterior predictive accuracy, however, the hierarchical approach did not outperform the independent approach. Further analyses suggested that this was due to strong correlations between some parameters within both models. Such intercorrelations make it difficult to identify and interpret single parameters and can induce high degrees of shrinkage in hierarchical models. Similar findings may also occur in the context of other cognitive models of choice.
SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.
Zi, Zhike
2011-04-01
Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.
Estimation of Water Quality Parameters Using the Regression Model with Fuzzy K-Means Clustering
Directory of Open Access Journals (Sweden)
Muntadher A. SHAREEF
2014-07-01
Full Text Available The traditional remote sensing methods used for monitoring and estimating pollutants generally rely on the spectral response or scattering reflected from water. In this work, a new method is proposed to detect contaminants and determine Water Quality Parameters (WQPs) based on texture analysis. Empirical statistical models have been developed to estimate and classify contaminants in the water. The Gray Level Co-occurrence Matrix (GLCM) is used to estimate six texture parameters: contrast, correlation, energy, homogeneity, entropy and variance. These parameters are used to estimate a regression model with three WQPs. Finally, fuzzy K-means clustering is used to generalize the water quality estimation over the whole segmented image. Using in situ measurements and IKONOS data, the obtained results show that texture parameters and high-resolution remote sensing are able to monitor and predict the distribution of WQPs in large rivers.
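The GLCM texture features named here have simple definitions. Below is a dependency-free sketch computing a horizontally offset co-occurrence matrix and two of the six features (contrast and energy) on a tiny illustrative image; the image and the four-level quantization are assumptions, and the IKONOS processing chain is not reproduced.

```python
import numpy as np

# A 4x4 image quantized to 4 gray levels.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
levels = 4

# Count horizontal neighbor pairs (offset = one pixel to the right).
glcm = np.zeros((levels, levels))
for i in range(img.shape[0]):
    for j in range(img.shape[1] - 1):
        glcm[img[i, j], img[i, j + 1]] += 1
glcm /= glcm.sum()                      # normalize to joint probabilities

idx = np.arange(levels)
contrast = np.sum((idx[:, None] - idx[None, :]) ** 2 * glcm)  # local variation
energy = np.sum(glcm ** 2)                                    # uniformity
```

Libraries such as scikit-image provide the same computation (with multiple offsets and angles) ready-made.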
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
Nonlinear functional response parameter estimation in a stochastic predator-prey model.
Gilioli, Gianni; Pasquali, Sara; Ruggeri, Fabrizio
2012-01-01
Parameter estimation for the functional response of predator-prey systems is a critical methodological problem in population ecology. In this paper we consider a stochastic predator-prey system with non-linear Ivlev functional response and propose a method for model parameter estimation based on time series of field data. We tackle the problem of parameter estimation using a Bayesian approach relying on a Markov Chain Monte Carlo algorithm. The efficiency of the method is tested on a set of simulated data. Then, the method is applied to a predator-prey system of importance for Integrated Pest Management and biological control, the pest mite Tetranychus urticae and the predatory mite Phytoseiulus persimilis. The model is estimated on a dataset obtained from a field survey. Finally, the estimated model is used to forecast predator-prey dynamics in similar fields, with slightly different initial conditions.
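The Ivlev functional response f(N) = a(1 - exp(-bN)) saturates with prey density N. The sketch below simulates a deterministic predator-prey system using it; the coefficients are illustrative, not the estimated values for the mite system, and the paper's stochastic formulation is not reproduced.

```python
import numpy as np

def ivlev(N, a, b):
    # Saturating predation rate per predator
    return a * (1.0 - np.exp(-b * N))

a, b = 2.0, 0.5
r, c, m = 0.8, 0.5, 0.3           # prey growth, conversion, predator death
N, P = 10.0, 2.0
dt = 0.01
traj = []
for _ in range(5000):
    dN = (r * N - ivlev(N, a, b) * P) * dt
    dP = (c * ivlev(N, a, b) * P - m * P) * dt
    N, P = max(N + dN, 0.0), max(P + dP, 0.0)  # clip at zero density
    traj.append((N, P))
```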
Revisiting Cosmological parameter estimation
Prasad, Jayanti
2014-01-01
Constraining theoretical models by estimating their parameters from cosmic microwave background (CMB) anisotropy data is one of the most active areas in cosmology. WMAP, Planck and other recent experiments have shown that the six-parameter standard $\Lambda$CDM cosmological model still best fits the data. Bayesian methods based on Markov chain Monte Carlo (MCMC) sampling have played a leading role in parameter estimation from CMB data. In a recent study \cite{2012PhRvD..85l3008P} we showed that particle swarm optimization (PSO), a population-based search procedure, can also be used effectively to find the cosmological parameters that best fit the WMAP seven-year data. In the present work we show that PSO can not only find the best-fit point, it can also sample the parameter space quite effectively, to the extent that we can use the same analysis pipeline to process PSO-sampled points as is used to process points sampled by Markov chains, and obtain consistent res...
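A bare-bones particle swarm optimization loop can be sketched on a quadratic stand-in for the likelihood surface; a real CMB likelihood would replace `cost`, and the swarm hyperparameters below are textbook defaults, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(5)
target = np.array([0.3, 0.7])          # stand-in "best-fit parameters"

def cost(x):
    # Toy chi-squared surface with minimum at `target`
    return np.sum((x - target) ** 2, axis=-1)

n, dim = 30, 2
pos = rng.uniform(-1, 1, (n, dim))     # particle positions
vel = np.zeros((n, dim))
pbest = pos.copy()                     # per-particle best positions
gbest = pos[np.argmin(cost(pos))].copy()

w, c1, c2 = 0.7, 1.5, 1.5              # inertia, cognitive, social weights
for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    better = cost(pos) < cost(pbest)
    pbest[better] = pos[better]
    gbest = pbest[np.argmin(cost(pbest))].copy()
```

The visited positions (not just `gbest`) are what the abstract proposes to treat as samples of the parameter space.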
Sun, Xiaodian; Jin, Li; Xiong, Momiao
2008-01-01
It is system dynamics that determines the function of cells, tissues and organisms. Developing mathematical models and estimating their parameters is essential for studying the dynamic behavior of biological systems, including metabolic networks, genetic regulatory networks and signal transduction pathways, under perturbation by external stimuli. In general, biological dynamic systems are only partially observed; therefore, a natural way to model them is to employ nonlinear state-space equations. Although statistical methods for parameter estimation of linear models of biological dynamic systems have been developed intensively in recent years, estimating both the states and parameters of nonlinear dynamic systems remains a challenging task. In this report, we apply the extended Kalman filter (EKF) to the estimation of both states and parameters of nonlinear state-space models. To evaluate the performance of the EKF for parameter estimation, we apply it to a simulated dataset and two real datasets from the JAK-STAT and Ras/Raf/MEK/ERK signal transduction pathways. The preliminary results show that the EKF can accurately estimate the parameters and predict states in nonlinear state-space equations for modeling dynamic biochemical networks.
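Joint state and parameter estimation with an EKF is usually done by state augmentation. The sketch below uses a toy first-order decay x' = -theta*x with unknown rate theta as a stand-in for the biochemical networks in the abstract; all values are illustrative assumptions.

```python
import numpy as np

# Generate noisy observations of a decaying state.
rng = np.random.default_rng(6)
dt, n = 0.01, 2000
theta_true, x_true = 0.5, 2.0
ys = []
for _ in range(n):
    x_true += -theta_true * x_true * dt
    ys.append(x_true + 0.002 * rng.normal())

# Augmented state z = [x, theta]; F is the Jacobian of the discrete dynamics.
z = np.array([1.0, 1.0])                 # initial guesses for [x, theta]
P = np.eye(2)
Q = np.diag([1e-9, 1e-9])                # small process noise
R = 0.002 ** 2                           # measurement noise variance
H = np.array([[1.0, 0.0]])               # only x is observed

for y in ys:
    x, th = z
    F = np.array([[1.0 - th * dt, -x * dt],
                  [0.0, 1.0]])
    z = np.array([x - th * x * dt, th])  # predict
    P = F @ P @ F.T + Q
    S = (H @ P @ H.T)[0, 0] + R          # innovation variance
    K = (P @ H.T).ravel() / S            # Kalman gain
    z = z + K * (y - z[0])               # update with measurement
    P = (np.eye(2) - np.outer(K, H)) @ P
```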
DEFF Research Database (Denmark)
Ottosen, Thor Bjørn; Ketzel, Matthias; Skov, Henrik
2016-01-01
Mathematical models are increasingly used in environmental science, thus increasing the importance of uncertainty and sensitivity analyses. In the present study, an iterative parameter estimation and identifiability analysis methodology is applied to an atmospheric model – the Operational Street Pollution Model (OSPM®). To assess the predictive validity of the model, the data is split into an estimation and a prediction data set using two data splitting approaches, and data preparation techniques (clustering and outlier detection) are analysed. The sensitivity analysis, being part...
Directory of Open Access Journals (Sweden)
Rutao Luo
Full Text Available Mathematical models based on ordinary differential equations (ODE have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3-5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients.
Directory of Open Access Journals (Sweden)
Gang Zhang
2017-07-01
Full Text Available The Sacramento model is widely utilized in hydrological forecasting, and its accuracy and performance are primarily determined by the model parameters, indicating the key role of parameter estimation. This paper presents a multi-step parameter estimation method, which divides the parameter estimation of the Sacramento model into three steps and performs the optimization step by step. We first use the immune clonal selection algorithm (ICSA) to solve the non-linear objective function of parameter estimation, and compare the parameter calibration results on ideal artificial data with Shuffled Complex Evolution (SCE-UA), the Parallel Genetic Algorithm (PGA), and the Serial Master-slave Swarms Shuffling Evolution Algorithm Based on Particle Swarm Optimization (SMSE-PSO). The comparison shows that ICSA has the best convergence, efficiency and precision. We then apply ICSA to the parameter estimation of the single-step and multi-step Sacramento model and simulate 32 floods based on application examples of the Dongyang and Tantou river basins for validation. The results of the multi-step method based on ICSA show higher accuracy and a 100% qualified rate, indicating its higher precision and reliability, which has great potential to improve the Sacramento model and hydrological forecasting.
Optimal experiment selection for parameter estimation in biological differential equation models
Directory of Open Access Journals (Sweden)
Transtrum Mark K
2012-07-01
Full Text Available Abstract Background Parameter estimation in biological models is a common yet challenging problem. In this work we explore the problem for gene regulatory networks modeled by differential equations with unknown parameters, such as decay rates, reaction rates, Michaelis-Menten constants, and Hill coefficients. We explore the question of to what extent parameters can be efficiently estimated by appropriate experimental selection. Results A minimization formulation is used to find the parameter values that best fit the experimental data. When the data is insufficient, the minimization problem often has many local minima that fit the data reasonably well. We show that selecting a new experiment based on the local Fisher information of one local minimum generates additional data that allow one to successfully discriminate among the many local minima. The parameters can be estimated to high accuracy by iteratively performing minimization and experiment selection. We show that the experiment choices are roughly independent of which local minimum is used to calculate the local Fisher information. Conclusions We show that by an appropriate choice of experiments one can, in principle, efficiently and accurately estimate all the parameters of a gene regulatory network. In addition, we demonstrate that appropriate experiment selection can also allow one to restrict model predictions without constraining the parameters, using many fewer experiments. We suggest that predicting model behaviors and inferring parameters represent two different approaches to model calibration, with different requirements on data and experimental cost.
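Fisher-information-based experiment selection can be sketched on a one-parameter toy model rather than a gene network: for y = exp(-k*t), the information contributed by a measurement at time t is the squared sensitivity over the noise variance, and the best t maximizes it. All values below are assumptions for the sketch.

```python
import numpy as np

# Candidate measurement times for the model y(t) = exp(-k * t).
k_hat, sigma = 1.0, 0.1
ts = np.linspace(0.05, 5.0, 100)

# Sensitivity dy/dk evaluated at the current estimate k_hat.
sens = -ts * np.exp(-k_hat * ts)

# Scalar Fisher information contributed by a measurement at each t;
# the D-optimal (here, information-maximizing) choice is the argmax.
fim = sens ** 2 / sigma ** 2
t_best = ts[np.argmax(fim)]             # analytic optimum is t = 1/k_hat
```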
A framework for scalable parameter estimation of gene circuit models using structural information
Kuwahara, Hiroyuki
2013-06-21
Motivation: Systematic and scalable parameter estimation is key to constructing complex gene regulatory models and to ultimately facilitating an integrative systems biology approach to quantitatively understanding the molecular mechanisms underpinning gene regulation. Results: Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time-series microarray dataset. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher-quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied to the modeling of gene circuits, our results suggest that more tailored approaches exploiting domain-specific information may be a key to reverse engineering of complex biological systems.
Institute of Scientific and Technical Information of China (English)
无
2009-01-01
There are two kinds of methods for studying crust deformation: geophysical methods and geometrical (observational) methods. Considerable differences usually exist between the two kinds of results because of datum differences, geophysical model errors, observational model errors, and so on. It is therefore reasonable to combine the two kinds of information to extract crust deformation information. To use the geometrical and geophysical information reliably, we have to control the influence of observational and geophysical model errors on the estimated deformation parameters, and to balance their contributions to the estimated parameters. A hybrid estimation strategy is proposed here for evaluating the deformation parameters, employing adaptively robust filtering. The effects of measurement outliers on the estimated parameters are controlled by robust equivalent weights. Adaptive factors are introduced to balance the contributions of the geophysical model information and the geometrical measurements to the model parameters. The datum for the local deformation analysis is mainly determined by the highly accurate IGS station velocities. The hybrid estimation strategy is applied to an actual GPS monitoring network. It is shown that the hybrid technique uses locally repeated geometrical displacements to reduce the displacement errors caused by mis-modeling in the geophysical technique, and thus improves the precision of the estimated crust deformation parameters.
Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata
2016-08-01
Objective. In this paper, we aimed at robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to jointly estimate the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable for the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which in turn was shown to be a better estimator than the EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both state and parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton-CKF (TNF-CKF), a recent robust method which works in the filtering sense.
Directory of Open Access Journals (Sweden)
Bjørn A.J. Angelsen
1991-01-01
Full Text Available A method for noninvasive estimation of regurgitant orifice and volume in aortic regurgitation is proposed and tested in anaesthetized open-chested pigs. The method can be used with noninvasive measurement of regurgitant jet velocity by continuous-wave ultrasound Doppler, together with cuff measurements of systolic and diastolic systemic pressure in the arm. These measurements are then used for parameter estimation in a Windkessel-like model which includes the regurgitant orifice as a parameter. The aortic volume compliance and the peripheral resistance are also included as parameters and estimated in the same process. To test the method, invasive measurements in the open-chest pigs are used. Electromagnetic flow measurements in the ascending aorta and pulmonary artery are used for control, and a correlation of 0.95 between regurgitant volume obtained from parameter estimation and from electromagnetic flow measurements is obtained over a range from 2.1 to 17.8 mL.
Bailer-Jones, C A L
2009-01-01
I introduce an algorithm for estimating parameters from multidimensional data based on forward modelling. In contrast to many machine learning approaches it avoids fitting an inverse model and the problems associated with this. The algorithm makes explicit use of the sensitivities of the data to the parameters, with the goal of better treating parameters which only have a weak impact on the data. The forward modelling approach provides uncertainty (full covariance) estimates in the predicted parameters as well as a goodness-of-fit for observations. I demonstrate the algorithm, ILIUM, with the estimation of stellar astrophysical parameters (APs) from simulations of the low resolution spectrophotometry to be obtained by Gaia. The AP accuracy is competitive with that obtained by a support vector machine. For example, for zero extinction stars covering a wide range of metallicity, surface gravity and temperature, ILIUM can estimate Teff to an accuracy of 0.3% at G=15 and to 4% for (lower signal-to-noise ratio) sp...
A Regularized SNPOM for Stable Parameter Estimation of RBF-AR(X) Model.
Zeng, Xiaoyong; Peng, Hui; Zhou, Feng
2017-01-20
Recently, the radial basis function (RBF) network-style coefficients AutoRegressive (with exogenous inputs) [RBF-AR(X)] model identified by the structured nonlinear parameter optimization method (SNPOM) has attracted considerable interest because of its significant performance in nonlinear system modeling. However, this promising technique may occasionally confront the problem that the parameters diverge during the optimization process, a potential issue ignored by most researchers. In this paper, a regularized SNPOM, together with a regularization parameter detection technique, is presented to estimate the parameters of RBF-AR(X) models. This approach first separates the parameters of an RBF-AR(X) model into a linear parameter set and a nonlinear parameter set, and then combines a gradient-based nonlinear optimization algorithm for estimating the nonlinear parameters with the regularized least squares method for estimating the linear parameters. Several examples demonstrate that the proposed approach effectively copes with potential instability in the parameter search process, and may also yield better or similar multistep forecasting accuracy and better robustness than the previous method.
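The regularized least squares sub-step for the linear parameters has a closed form (ridge regression). The sketch below uses generic regressors rather than an RBF-AR(X) design matrix; all names and values are illustrative assumptions.

```python
import numpy as np

# Synthetic linear-parameter problem: y = X w + noise.
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=200)

# Regularized (ridge) least squares: (X^T X + lam I) w = X^T y.
lam = 0.1                                # regularization parameter
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
```

The regularization term keeps the normal equations well conditioned when the regressors are nearly collinear, which is the instability the paper targets.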
Energy Technology Data Exchange (ETDEWEB)
Wagener, T; Hogue, T; Schaake, J; Duan, Q; Gupta, H; Andreassian, V; Hall, A; Leavesley, G
2006-05-08
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrologic models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrologic basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modelers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community and briefly states future directions.
Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model
Roberts, James S.; Thompson, Vanessa M.
2011-01-01
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
Models to estimate genetic parameters in crossbred dairy cattle populations under selection.
Werf, van der J.H.J.
1990-01-01
Estimates of genetic parameters needed to control breeding programs have to be regularly updated, due to changing environments and ongoing selection and crossing of populations. Restricted maximum likelihood methods optimally provide these estimates, assuming that the statistical-genetic model u...
Parameter estimation and sensitivity analysis for a mathematical model with time delays of leukemia
Cândea, Doina; Halanay, Andrei; Rădulescu, Rodica; Tălmaci, Rodica
2017-01-01
We consider a system of nonlinear delay differential equations that describes the interaction between three competing cell populations: healthy, leukemic, and anti-leukemia T cells involved in Chronic Myeloid Leukemia (CML) under treatment with Imatinib. The aim of this work is to establish, using a sensitivity analysis of the model parameters, which parameters are the most important for the success or failure of leukemia remission under treatment. For the parameters that most significantly affect the evolution of CML during Imatinib treatment, we estimate realistic values using experimental data. For these parameters, steady states are calculated and their stability is analyzed and biologically interpreted.
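Parameter importance rankings of the kind described above are commonly computed as normalized local sensitivities via finite differences. Below is a minimal, self-contained sketch; the toy output function and parameter names are invented for illustration and are not the paper's CML model:

```python
def sensitivity(model, params, h=1e-6):
    """Normalized local sensitivity S_i = (dY/dp_i) * (p_i / Y) of a scalar
    model output Y to each parameter p_i, via central finite differences."""
    base = model(params)
    sens = {}
    for name, p in params.items():
        up = dict(params); up[name] = p * (1 + h)
        dn = dict(params); dn[name] = p * (1 - h)
        dy_dp = (model(up) - model(dn)) / (2 * p * h)
        sens[name] = dy_dp * p / base
    return sens

# Toy stand-in for a disease-model output (e.g. residual tumor load):
# Y = k / (d + e).  Purely illustrative parameter names.
toy = lambda p: p["k"] / (p["d"] + p["e"])
S = sensitivity(toy, {"k": 2.0, "d": 0.5, "e": 1.5})
```

Parameters with |S| near 1 dominate the output; those near 0 are candidates for fixing at literature values rather than estimating.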
Parameters Estimation for the Spherical Model of the Human Knee Joint Using Vector Method
Ciszkiewicz, A.; Knapczyk, J.
2014-08-01
Position and displacement analysis of a spherical model of a human knee joint using the vector method is presented. Sensitivity analysis and parameter estimation were performed using an evolutionary algorithm. Computer simulations for the mechanism with estimated parameters proved the effectiveness of the prepared software. The method itself can be useful when solving problems concerning displacement and load analysis in the knee joint.
Timoshenko beam modeling for parameter estimation of NASA Mini-Mast truss
Shen, Ji Y.; Huang, Jen-Kuang; Taylor, L. W., Jr.
1993-01-01
In this paper a distributed parameter model for the estimation of modal characteristics of NASA Mini-Mast truss is proposed. A closed-form solution of the Timoshenko beam equation, for a uniform cantilevered beam with two concentrated masses, is derived so that the procedure and the computational effort for the estimation of modal characteristics are improved. A maximum likelihood estimator for the Timoshenko beam model is also developed. The resulting estimates from test data by using Timoshenko beam model are found to be comparable to those derived from other approaches.
Optomechanical parameter estimation
Ang, Shan Zheng; Bowen, Warwick P; Tsang, Mankei
2013-01-01
We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér-Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation-maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and to follow the Cramér-Rao bound most closely. With its ability to estimate most of the system parameters, the EM algorithm is envisioned to be useful for optomechanical sensing, atomic magnetometry, and classical or quantum system identification applications in general.
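The comparison above rests on the Cramér-Rao bound. As a toy analogue of estimating a noise power (assumed setup: zero-mean Gaussian noise, maximum-likelihood power estimator; not the paper's optomechanical model), the following sketch checks by Monte Carlo that the estimator's mean-squared error tracks the bound 2*sigma^4/n:

```python
import math
import random

def crlb_variance(sigma2, n):
    # Cramer-Rao lower bound for estimating the variance sigma^2 of
    # zero-mean Gaussian noise from n i.i.d. samples: 2*sigma^4 / n.
    return 2.0 * sigma2 ** 2 / n

def mc_mse_of_power_estimator(sigma2, n, trials=2000, seed=1):
    # Monte Carlo mean-squared error of the ML noise-power estimator
    # sum(x_i^2)/n, which is unbiased here and attains the bound.
    rng = random.Random(seed)
    sd = math.sqrt(sigma2)
    se = 0.0
    for _ in range(trials):
        xs = [rng.gauss(0.0, sd) for _ in range(n)]
        est = sum(x * x for x in xs) / n
        se += (est - sigma2) ** 2
    return se / trials

bound = crlb_variance(1.0, 200)
mse = mc_mse_of_power_estimator(1.0, 200)
```

An estimator whose MSE sits well above `bound` is leaving information on the table, which is the diagnostic the paper applies to the radiometer versus EM estimators.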
Influences of observation errors in eddy flux data on inverse model parameter estimation
Directory of Open Access Journals (Sweden)
G. Lasslop
2008-09-01
Eddy covariance data are increasingly used to estimate parameters of ecosystem models. For proper maximum likelihood parameter estimates the error structure in the observed data has to be fully characterized. In this study we propose a method to characterize the random error of eddy covariance flux data, and analyse the error distribution, standard deviation, cross- and autocorrelation of CO2 and H2O flux errors at four different European eddy covariance flux sites. Moreover, we examine how the treatment of those errors and additional systematic errors influences statistical estimates of parameters and their associated uncertainties with three models of increasing complexity: a hyperbolic light response curve, a light response curve coupled to water fluxes, and the SVAT scheme BETHY. In agreement with previous studies we find that the error standard deviation scales with the flux magnitude. The previously found strongly leptokurtic error distribution is revealed to be largely due to a superposition of almost Gaussian distributions with standard deviations varying by flux magnitude. The cross-correlations of CO2 and H2O flux errors were in all cases negligible (R² below 0.2), while the autocorrelation is usually below 0.6 at a lag of 0.5 h and decays rapidly at larger time lags. This implies that in these cases the weighted least squares criterion yields maximum likelihood estimates. To study the influence of the observation errors on model parameter estimates we used synthetic datasets, based on observations at two different sites. We first fitted the respective models to observations and then added the random error estimates described above and the systematic error, respectively, to the model output. This strategy enables us to compare the estimated parameters with the true parameters. We illustrate that the correct implementation of the random error standard deviation scaling with flux...
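The key point, that weighted least squares with weights 1/sigma_i^2 gives maximum likelihood estimates when the error standard deviation scales with flux magnitude, can be illustrated with a small sketch. The straight-line "flux" data, noise law, and all numbers below are illustrative assumptions, not the BETHY setup:

```python
import random

def wls_line(x, y, sigma):
    # Closed-form weighted least squares for y = a + b*x with weights
    # w_i = 1/sigma_i^2: the maximum likelihood fit for independent
    # Gaussian errors with known per-point standard deviations.
    w = [1.0 / s ** 2 for s in sigma]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    b = sxy / sxx
    return my - b * mx, b

rng = random.Random(42)
xs = [i * 0.1 for i in range(1, 200)]
true_a, true_b = 0.5, 2.0
# Error standard deviation grows with the magnitude of the signal,
# mimicking the scaling found for eddy covariance flux errors.
sig = [0.05 + 0.1 * abs(true_a + true_b * xi) for xi in xs]
ys = [true_a + true_b * xi + rng.gauss(0.0, s) for xi, s in zip(xs, sig)]
a_hat, b_hat = wls_line(xs, ys, sig)
```

Down-weighting the large-magnitude (noisier) points is exactly what makes the weighted criterion coincide with the likelihood in this error model.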
Model calibration and parameter estimation for environmental and water resource systems
Sun, Ne-Zheng
2015-01-01
This three-part book provides a comprehensive and systematic introduction to the development of useful models for complex systems. Part 1 covers the classical inverse problem for parameter estimation in both deterministic and statistical frameworks, Part 2 is dedicated to system identification, hyperparameter estimation, and model dimension reduction, and Part 3 considers how to collect data and construct reliable models for prediction and decision-making. For the first time, topics such as multiscale inversion, stochastic field parameterization, level set method, machine learning, global sensitivity analysis, data assimilation, model uncertainty quantification, robust design, and goal-oriented modeling, are systematically described and summarized in a single book from the perspective of model inversion, and elucidated with numerical examples from environmental and water resources modeling. Readers of this book will not only learn basic concepts and methods for simple parameter estimation, but also get famili...
Energy Technology Data Exchange (ETDEWEB)
Duan, Q; Schaake, J; Andreassian, V; Franks, S; Gupta, H V; Gusev, Y M; Habets, F; Hall, A; Hay, L; Hogue, T; Huang, M; Leavesley, G; Liang, X; Nasonova, O N; Noilhan, J; Oudin, L; Sorooshian, S; Wagener, T; Wood, E F
2005-02-10
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrologic models and in land surface parameterization schemes of atmospheric models. The MOPEX science strategy involves three major steps: data preparation, a priori parameter estimation methodology development, and demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrologic basins in the United States and in other countries. This database continues to be expanded to include more basins in all parts of the world. A number of international MOPEX workshops have been convened to bring together interested hydrologists and land surface modelers from all over the world to exchange knowledge and experience in developing a priori parameter estimation techniques. This paper describes the results from the second and third MOPEX workshops. The specific objective of those workshops was to examine the state of a priori parameter estimation techniques and how they can potentially be improved with observations from well-monitored hydrologic basins. Participants of these MOPEX workshops were given data for 12 basins in the Southeastern United States and were asked to carry out a series of numerical experiments using a priori parameters as well as calibrated parameters developed for their respective hydrologic models. Eight different models have carried out all the required numerical experiments, and the results from those models have been assembled for analysis in this paper. This paper presents an overview of the MOPEX experiment design. The experimental results are analyzed and the important lessons from the two workshops are discussed. Finally, a discussion of further work and future strategy is given.
Energy Technology Data Exchange (ETDEWEB)
Mukhopadhyay, S.; Tsang, Y.; Finsterle, S.
2009-01-15
A simple conceptual model has recently been developed for analyzing pressure and temperature data from flowing fluid temperature logging (FFTL) in unsaturated fractured rock. Using this conceptual model, we developed an analytical solution for the FFTL pressure response, and a semianalytical solution for the FFTL temperature response. We also proposed a method for estimating fracture permeability from FFTL temperature data. The conceptual model was based on some simplifying assumptions, particularly the use of a single-phase airflow model. In this paper, we develop a more comprehensive numerical model of the multiphase flow and heat transfer associated with FFTL. Using this numerical model, we perform a number of forward simulations to determine the parameters that have the strongest influence on the pressure and temperature response from FFTL. We then use the iTOUGH2 optimization code to estimate these most sensitive parameters through inverse modeling and to quantify the uncertainties associated with these estimated parameters. We conclude that FFTL can be utilized to determine the permeability, porosity, and thermal conductivity of the fractured rock. Two other parameters, which are not properties of the fractured rock, have a strong influence on the FFTL response. These are the pressure and temperature in the borehole that were in equilibrium with the fractured rock formation at the beginning of FFTL. We illustrate how these parameters can also be estimated from FFTL data.
Plumb, John M.; Moffitt, Christine M.
2015-01-01
Researchers have cautioned against the borrowing of consumption and growth parameters from other species and life stages in bioenergetics growth models. In particular, the function that dictates temperature dependence in maximum consumption (Cmax) within the Wisconsin bioenergetics model for Chinook Salmon Oncorhynchus tshawytscha produces estimates that are lower than those measured in published laboratory feeding trials. We used published and unpublished data from laboratory feeding trials with subyearling Chinook Salmon from three stocks (Snake, Nechako, and Big Qualicum rivers) to estimate and adjust the model parameters for temperature dependence in Cmax. The data included growth measures in fish ranging from 1.5 to 7.2 g that were held at temperatures from 14°C to 26°C. Parameters for temperature dependence in Cmax were estimated based on relative differences in food consumption, and bootstrapping techniques were then used to estimate the error about the parameters. We found that at temperatures between 17°C and 25°C, the current parameter values did not match the observed data, indicating that Cmax should be shifted by about 4°C relative to the current implementation under the bioenergetics model. We conclude that the adjusted parameters for Cmax should produce more accurate predictions from the bioenergetics model for subyearling Chinook Salmon.
Directory of Open Access Journals (Sweden)
Shengyu Jiang
2016-02-01
Likert-type rating scales, in which a respondent chooses a response from an ordered set of response options, are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as by bias and root-mean-square error. Results indicated that for the vast majority of cases studied a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items, for which 1,000 examinees were necessary to obtain accurate parameter estimates. Increasing sample size beyond N = 1,000 did not increase the accuracy of MGRM parameter estimates.
A termination criterion for parameter estimation in stochastic models in systems biology.
Zimmer, Christoph; Sahle, Sven
2015-11-01
Parameter estimation procedures are a central aspect of modeling approaches in systems biology. They are often computationally expensive, especially when the models take stochasticity into account. Typically parameter estimation involves the iterative optimization of an objective function that describes how well the model fits some measured data with a certain set of parameter values. In order to limit the computational expenses it is therefore important to apply an adequate stopping criterion for the optimization process, so that the optimization continues at least until a reasonable fit is obtained, but not much longer. In the case of stochastic modeling, at least some parameter estimation schemes involve an objective function that is itself a random variable. This means that plain convergence tests are not a priori suitable as stopping criteria. This article suggests a termination criterion suited to optimization problems in parameter estimation arising from stochastic models in systems biology. The termination criterion is developed for optimization algorithms that involve populations of parameter sets, such as particle swarm or evolutionary algorithms. It is based on comparing the variance of the objective function over the whole population of parameter sets with the variance of repeated evaluations of the objective function at the best parameter set. The performance is demonstrated for several different algorithms. To test the termination criterion we choose polynomial test functions as well as systems biology models such as an Immigration-Death model and a bistable genetic toggle switch. The genetic toggle switch is an especially challenging test case as it shows a stochastic switching between two steady states which is qualitatively different from the model behavior in a deterministic model.
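The criterion described, comparing the spread of the objective across the population with the re-evaluation noise of the stochastic objective at the best parameter set, might be sketched as follows. The tolerance factor and the toy numbers are illustrative assumptions, not the paper's settings:

```python
import statistics

def should_terminate(population_objectives, best_reevaluations, factor=2.0):
    """Stop when the objective's spread across the population is no larger
    than `factor` times the spread caused purely by stochastic re-evaluation
    of the objective at the best parameter set."""
    return (statistics.variance(population_objectives)
            <= factor * statistics.variance(best_reevaluations))

# Repeated evaluations of the stochastic objective at the current best point:
reevals = [10.0, 10.4, 9.7, 10.2, 9.9, 10.1]
spread_out = [10.0, 14.0, 22.0, 31.0, 18.0]   # early: population still spread
converged  = [10.0, 10.3, 9.8, 10.2, 10.1]    # late: spread ~ noise level
```

Once the population's objective values are statistically indistinguishable from noise at the best point, further iterations of a particle swarm or evolutionary algorithm can no longer be told apart from re-sampling, so stopping is justified.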
Donato, David I.
2012-01-01
This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
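As an illustration of maximum-likelihood estimation by Newton-Raphson iteration on the score function (using a simple one-parameter exponential model for clarity, not the NDMMF itself), consider:

```python
def exp_rate_mle_newton(data, lam0=0.1, tol=1e-10, max_iter=50):
    """Maximum-likelihood rate of an exponential distribution via
    Newton-Raphson on the score function.  For data x_1..x_n the
    log-likelihood is n*log(lam) - lam*sum(x); the score is
    n/lam - sum(x) and its derivative (curvature) is -n/lam**2."""
    n, s = len(data), sum(data)
    lam = lam0
    for _ in range(max_iter):
        score = n / lam - s
        curvature = -n / lam ** 2
        lam_new = lam - score / curvature   # Newton-Raphson step
        if abs(lam_new - lam) < tol:
            return lam_new
        lam = lam_new
    return lam

data = [0.8, 1.3, 0.2, 2.1, 0.6, 1.0]
lam_hat = exp_rate_mle_newton(data)         # closed form: n / sum(x) = 1.0
```

The NDMMF case is the multi-parameter analogue: the score becomes a gradient vector, the curvature a Hessian, and each Newton-Raphson step solves a system of simultaneous linear equations.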
Estimating input parameters from intracellular recordings in the Feller neuronal model
Bibbona, Enrico; Lansky, Petr; Sirovich, Roberta
2010-03-01
We study the estimation of the input parameters in a Feller neuronal model from a trajectory of the membrane potential sampled at discrete times. These input parameters are identified with the drift and the infinitesimal variance of the underlying stochastic diffusion process with multiplicative noise. The state space of the process is restricted from below by an inaccessible boundary. Further, the model is characterized by the presence of an absorbing threshold, the first hitting of which determines the length of each trajectory and which constrains the state space from above. We compare, both in the presence and in the absence of the absorbing threshold, the efficiency of different known estimators. In addition, we propose an estimator for the drift term, which is proved to be more efficient than the others, at least in the explored range of the parameters. The presence of the threshold makes the estimates of the drift term biased, and two methods to correct it are proposed.
When the optimal is not the best: parameter estimation in complex biological models.
Directory of Open Access Journals (Sweden)
Diego Fernández Slezak
BACKGROUND: The vast computational resources that became available during the past decade enabled the development and simulation of increasingly complex mathematical models of cancer growth. These models typically involve many free parameters whose determination is a substantial obstacle to model development. Direct measurement of biochemical parameters in vivo is often difficult and sometimes impracticable, while fitting them under data-poor conditions may result in biologically implausible values. RESULTS: We discuss different methodological approaches to estimating parameters in complex biological models. We make use of the high computational power of the Blue Gene technology to perform an extensive study of the parameter space in a model of avascular tumor growth. We explicitly show that the landscape of the cost function used to optimize the model to the data has a very rugged surface in parameter space. This cost function has many local minima with unrealistic solutions, including the global minimum corresponding to the best fit. CONCLUSIONS: The case studied in this paper shows one example in which model parameters that optimally fit the data are not necessarily the best ones from a biological point of view. To avoid force-fitting a model to a dataset, we propose that the best model parameters should be found by choosing, among suboptimal parameters, those that match criteria other than the ones used to fit the model. We also conclude that the model, data and optimization approach form a new complex system and point to the need for a theory that addresses this problem more generally.
Bailer-Jones, C. A. L.
2010-03-01
I introduce an algorithm for estimating parameters from multidimensional data based on forward modelling. It performs an iterative local search to effectively achieve a non-linear interpolation of a template grid. In contrast to many machine-learning approaches, it avoids fitting an inverse model and the problems associated with this. The algorithm makes explicit use of the sensitivities of the data to the parameters, with the goal of better treating parameters which only have a weak impact on the data. The forward modelling approach provides uncertainty (full covariance) estimates in the predicted parameters as well as a goodness-of-fit for observations, thus providing a simple means of identifying outliers. I demonstrate the algorithm, ILIUM, with the estimation of stellar astrophysical parameters (APs) from simulations of the low-resolution spectrophotometry to be obtained by Gaia. The AP accuracy is competitive with that obtained by a support vector machine. For zero extinction stars covering a wide range of metallicity, surface gravity and temperature, ILIUM can estimate Teff to an accuracy of 0.3 per cent at G = 15 and to 4 per cent for (lower signal-to-noise ratio) spectra at G = 20, the Gaia limiting magnitude (mean absolute errors are quoted). [Fe/H] and logg can be estimated to accuracies of 0.1-0.4dex for stars with G <= 18.5, depending on the magnitude and what priors we can place on the APs. If extinction varies a priori over a wide range (0-10mag) - which will be the case with Gaia because it is an all-sky survey - then logg and [Fe/H] can still be estimated to 0.3 and 0.5dex, respectively, at G = 15, but much poorer at G = 18.5. Teff and AV can be estimated quite accurately (3-4 per cent and 0.1-0.2mag, respectively, at G = 15), but there is a strong and ubiquitous degeneracy in these parameters which limits our ability to estimate either accurately at faint magnitudes. Using the forward model, we can map these degeneracies (in advance) and thus
Estimating Parameters for the PVsyst Version 6 Photovoltaic Module Performance Model
Energy Technology Data Exchange (ETDEWEB)
Hansen, Clifford [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-10-01
We present an algorithm to determine parameters for the photovoltaic module performance model encoded in the software package PVsyst™ version 6. Our method operates on current-voltage (I-V) curves measured over a range of irradiance and temperature conditions. We describe the method and illustrate its steps using data for a 36-cell crystalline silicon module. We qualitatively compare our method with one other technique for estimating parameters for the PVsyst™ version 6 model.
Parameter estimation of the Huxley cross-bridge muscle model in humans.
Vardy, Alistair N; de Vlugt, Erwin; van der Helm, Frans C T
2012-01-01
The Huxley model has the potential to provide more accurate muscle dynamics while affording a physiological interpretation at cross-bridge level. By perturbing the wrist at different velocities and initial force levels, reliable Huxley model parameters were estimated in humans in vivo using a Huxley muscle-tendon complex. We conclude that these estimates may be used to investigate and monitor changes in microscopic elements of muscle functioning from experiments at joint level.
Estimating DSGE model parameters in a small open economy: Do real-time data matter?
Directory of Open Access Journals (Sweden)
Capek Jan
2015-03-01
This paper investigates the differences between parameters estimated using real-time data and those estimated with revised data. The models used are New Keynesian DSGE models of the Czech, Polish, Hungarian, Swiss, and Swedish small open economies in interaction with the euro area. The paper also offers an analysis of data revisions of GDP growth and inflation and of trend revisions of interest rates.
State and Parameter Estimation for a Coupled Ocean--Atmosphere Model
Ghil, M.; Kondrashov, D.; Sun, C.
2006-12-01
The El Niño-Southern Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply extended Kalman filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and the assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed.
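State augmentation is the standard way an EKF estimates parameters alongside the state: the unknown parameter is appended to the state vector with (nearly) constant dynamics, and the filter's linearization couples it to the observations. A minimal scalar sketch on a toy AR(1) system (not the coupled ENSO model; the noise levels and seed are assumptions):

```python
import random

def ekf_joint(ys, q_x=0.09, q_a=1e-5, r=0.04, a0=0.5):
    """Joint state/parameter EKF for x_k = a*x_{k-1} + w,  y_k = x_k + v.
    Augmented state s = [x, a]; dynamics Jacobian F = [[a, x], [0, 1]]."""
    x, a = ys[0], a0
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0
    for y in ys[1:]:
        # Predict: mean s = [a*x, a], covariance P = F P F' + diag(q_x, q_a).
        fp00, fp01 = a * p00 + x * p10, a * p01 + x * p11
        p00n = fp00 * a + fp01 * x + q_x
        p01n = fp01
        p10n = a * p10 + x * p11
        p11n = p11 + q_a
        x = a * x
        # Update with observation matrix H = [1, 0].
        s = p00n + r
        k0, k1 = p00n / s, p10n / s
        innov = y - x
        x += k0 * innov
        a += k1 * innov
        p00 = (1 - k0) * p00n
        p01 = (1 - k0) * p01n
        p10 = p10n - k1 * p00n
        p11 = p11n - k1 * p01n
    return x, a

rng = random.Random(0)
a_true, x_true = 0.9, 2.0
ys = []
for _ in range(1000):
    x_true = a_true * x_true + rng.gauss(0.0, 0.3)
    ys.append(x_true + rng.gauss(0.0, 0.2))
x_hat, a_hat = ekf_joint(ys)
```

The small artificial noise `q_a` on the parameter keeps the gain from collapsing to zero, which is what lets the filter track parameters that, as in the ENSO study, drift on short time scales.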
An implementation of continuous genetic algorithm in parameter estimation of predator-prey model
Windarto
2016-03-01
A genetic algorithm is an optimization method based on the principles of genetics and natural selection in living organisms. The main components of this algorithm are a population of chromosomes (individuals), parent selection, crossover to produce new offspring, and random mutation. In this paper, a continuous genetic algorithm was implemented to estimate parameters in a predator-prey model of Lotka-Volterra type. For simplicity, all genetic algorithm parameters (selection rate and mutation rate) were set to be constant throughout the run of the algorithm. It was found that, by selecting a suitable mutation rate, the algorithm can estimate these parameters well.
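A continuous GA of the kind described, real-valued chromosomes with tournament selection, arithmetic (blend) crossover, and Gaussian mutation, might be sketched as follows. The Euler integration, the fixed third parameter, and all rates and ranges are illustrative assumptions, not the paper's setup:

```python
import math
import random

def simulate(a, b, c=0.4, x0=10.0, y0=5.0, dt=0.01, steps=400):
    """Euler integration of the Lotka-Volterra system
    x' = a*x - b*x*y (prey),  y' = b*x*y - c*y (predator),
    sampled every 20 steps."""
    out, x, y = [], x0, y0
    for i in range(steps):
        x, y = x + dt * (a * x - b * x * y), y + dt * (b * x * y - c * y)
        if i % 20 == 0:
            out.append((x, y))
    return out

def cost(params, data):
    sim = simulate(params[0], params[1])
    c = sum((sx - dx) ** 2 + (sy - dy) ** 2
            for (sx, sy), (dx, dy) in zip(sim, data))
    return c if math.isfinite(c) else 1e18   # guard against blown-up runs

rng = random.Random(7)
data = simulate(1.1, 0.4)                    # synthetic data, true (a, b)
pop = [[rng.uniform(0.0, 3.0), rng.uniform(0.0, 1.0)] for _ in range(30)]
init_best = min(cost(p, data) for p in pop)
for _ in range(40):
    scored = sorted(pop, key=lambda p: cost(p, data))
    next_pop = scored[:2]                    # elitism: keep the two best
    while len(next_pop) < len(pop):
        mom, dad = (min(rng.sample(scored, 3), key=lambda p: cost(p, data))
                    for _ in range(2))       # tournament selection
        w = rng.random()                     # arithmetic (blend) crossover
        child = [w * m + (1 - w) * d for m, d in zip(mom, dad)]
        if rng.random() < 0.2:               # Gaussian mutation
            i = rng.randrange(2)
            child[i] = abs(child[i] + rng.gauss(0.0, 0.1))
        next_pop.append(child)
    pop = next_pop
best = min(pop, key=lambda p: cost(p, data))
```

Elitism guarantees the best cost never increases across generations, which is what makes GA runs like this robust even with a crude integrator and a rugged cost surface.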
Estimating Parameters in Physical Models through Bayesian Inversion: A Complete Example
Allmaras, Moritz
2013-02-07
All mathematical models of real-world phenomena contain parameters that need to be estimated from measurements, either for realistic predictions or simply to understand the characteristics of the model. Bayesian statistics provides a framework for parameter estimation in which uncertainties about models and measurements are translated into uncertainties in estimates of parameters. This paper provides a simple, step-by-step example-starting from a physical experiment and going through all of the mathematics-to explain the use of Bayesian techniques for estimating the coefficients of gravity and air friction in the equations describing a falling body. In the experiment we dropped an object from a known height and recorded the free fall using a video camera. The video recording was analyzed frame by frame to obtain the distance the body had fallen as a function of time, including measures of uncertainty in our data that we describe as probability densities. We explain the decisions behind the various choices of probability distributions and relate them to observed phenomena. Our measured data are then combined with a mathematical model of a falling body to obtain probability densities on the space of parameters we seek to estimate. We interpret these results and discuss sources of errors in our estimation procedure. © 2013 Society for Industrial and Applied Mathematics.
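A grid-based version of such a Bayesian inversion can be sketched in a few lines: a forward model of a fall with linear air friction, a Gaussian likelihood, and flat priors over a (g, k) grid, so the MAP point is just the likelihood maximum. The noise level, grid, and seed below are assumptions, not the paper's experiment:

```python
import math
import random

def dist(t, g, k):
    # Distance fallen under gravity g with linear friction coefficient k:
    # d(t) = (g/k)*t - (g/k^2)*(1 - exp(-k*t));  for k -> 0 this tends
    # to the frictionless g*t^2/2.
    return (g / k) * t - (g / k ** 2) * (1.0 - math.exp(-k * t))

rng = random.Random(3)
g_true, k_true, noise_sd = 9.81, 0.30, 0.02
ts = [0.1 * i for i in range(1, 11)]
obs = [dist(t, g_true, k_true) + rng.gauss(0.0, noise_sd) for t in ts]

# Log-posterior on a (g, k) grid: flat priors + Gaussian likelihood.
best, best_lp = None, -1e18
for gi in range(161):
    g = 9.0 + 0.01 * gi
    for ki in range(96):
        k = 0.05 + 0.01 * ki
        lp = sum(-((dist(t, g, k) - d) ** 2) / (2 * noise_sd ** 2)
                 for t, d in zip(ts, obs))
        if lp > best_lp:
            best, best_lp = (g, k), lp
g_map, k_map = best
```

Normalizing the grid of exponentiated log-posterior values would additionally give the full posterior density, from which the uncertainty statements the paper emphasizes are read off directly.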
Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model
DEFF Research Database (Denmark)
Kirkegaard, Poul Henning
1993-01-01
Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white-noise-loaded structure modelled as a single degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...
Budic, Lara; Didenko, Gregor; Dormann, Carsten F
2016-01-01
In species distribution analyses, environmental predictors and distribution data for large spatial extents are often available in long-lat format, such as degree raster grids. Long-lat projections suffer from unequal cell sizes, as a degree of longitude decreases in length from approximately 110 km at the equator to 0 km at the poles. Here we investigate whether long-lat and equal-area projections yield similar model parameter estimates, or result in a consistent bias. We analyzed the environmental effects on the distribution of 12 ungulate species with a northern distribution, as models for these species should display the strongest effect of projectional distortion. Additionally, we chose four species with entirely continental distributions to investigate the effect of incomplete cell coverage at the coast. We expected that including model weights proportional to the actual cell area should compensate for the observed bias in model coefficients, and similarly that using the land coverage of a cell should decrease bias in species with coastal distributions. As anticipated, model coefficients differed between long-lat and equal-area projections. The progressively smaller, and more numerous, cells at higher latitudes influenced the importance of parameters in the models, increased the sample size for the northernmost parts of species ranges, and reduced the subcell variability of those areas. However, this bias could be largely removed by weighting long-lat cells by the area they cover, and marginally by correcting for land coverage. Overall we found little effect of using long-lat rather than equal-area projections in our analysis. The fitted relationship between environmental parameters and occurrence probability differed only very little between the two projection types. We still recommend using equal-area projections to avoid possible bias. More importantly, our results suggest that the cell area and the proportion of a cell covered by land should be...
Schiavazzi, Daniele E; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L
2017-03-01
Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and second application to a cohort of four single-ventricle patients with Norwood physiology. Copyright © 2016 John Wiley & Sons, Ltd.
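The electrical-circuit analogy behind LPN models is easiest to see in the two-element Windkessel, sketched here with illustrative parameter values (this is the simplest possible LPN element, not the patient-specific Norwood model of the paper):

```python
def windkessel(q_in, R=1.0, C=1.0, p0=0.0, dt=0.001):
    """Two-element Windkessel: C * dP/dt = Q_in - P/R.
    Electrical analogy: a capacitor C (vessel compliance) in parallel
    with a resistor R (peripheral resistance), driven by a current
    source Q_in (blood inflow); P is the voltage (pressure)."""
    ps, p = [], p0
    for q in q_in:
        p += dt * (q - p / R) / C       # forward-Euler step
        ps.append(p)
    return ps

# Constant inflow: pressure relaxes toward the steady state P = Q * R
# with time constant R*C.
ps = windkessel([2.0] * 20000)
```

Parameter tuning of a full LPN, as described above, amounts to adjusting the R's and C's (and more elaborate elements) of many such blocks until the simulated pressures and flows match clinical targets.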
Model reduction and parameter estimation of non-linear dynamical biochemical reaction networks.
Sun, Xiaodian; Medvedovic, Mario
2016-02-01
Parameter estimation for high-dimensional complex dynamic systems is an active research topic; however, the standard statistical models and inference approaches face the well-known "large p, small n" problem. Reducing the dimension of the dynamic model and improving the accuracy of estimation are therefore essential. To address this, the authors take known parameters and known system structure as prior knowledge and incorporate them into the dynamic model. They then decompose the whole dynamic model into subnetwork modules and apply a different estimation approach to each module. This technique is called the Rao-Blackwellised particle filter decomposition method. To evaluate its performance, the authors apply it to synthetic data generated from the repressilator model and to experimental data from the JAK-STAT pathway; the method can easily be extended to large-scale cases.
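The Rao-Blackwellised variant marginalizes part of the model analytically before filtering; the plain bootstrap particle filter that underlies it can be sketched on a toy linear-Gaussian model (all values illustrative, not the repressilator or JAK-STAT systems):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(y, a, q, r, n_particles=500):
    """Bootstrap particle filter for the toy model
    x_k = a*x_{k-1} + N(0, q),  y_k = x_k + N(0, r).
    Returns the filtered posterior mean of x_k at every step."""
    x = rng.normal(0.0, 1.0, n_particles)
    means = []
    for obs in y:
        x = a * x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        w = np.exp(-0.5 * (obs - x) ** 2 / r)                 # likelihood weights
        w /= w.sum()
        x = rng.choice(x, size=n_particles, p=w)              # multinomial resampling
        means.append(x.mean())
    return np.array(means)

# simulate a trajectory and noisy observations
a, q, r = 0.9, 0.1, 0.2
xs, ys, x_cur = [], [], 0.0
for _ in range(100):
    x_cur = a * x_cur + rng.normal(0.0, np.sqrt(q))
    xs.append(x_cur)
    ys.append(x_cur + rng.normal(0.0, np.sqrt(r)))
est = bootstrap_pf(np.array(ys), a, q, r)
```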
Applied parameter estimation for chemical engineers
Englezos, Peter
2000-01-01
Formulation of the parameter estimation problem; computation of parameters in linear models-linear regression; Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynamic models
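The Gauss-Newton iteration for algebraic models, central to several chapters above, can be sketched for the model y = a·exp(b·x). The step-halving damping used here is a common safeguard and an assumption of this sketch, not necessarily the book's exact algorithm:

```python
import numpy as np

def gauss_newton(x, y, theta0, n_iter=30):
    """Damped Gauss-Newton for the algebraic model y = a*exp(b*x)."""
    def sse(a, b):
        return np.sum((y - a * np.exp(b * x)) ** 2)

    a, b = theta0
    for _ in range(n_iter):
        f = a * np.exp(b * x)
        r = y - f                                              # residuals
        # Jacobian of the model with respect to (a, b)
        J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
        da, db = np.linalg.solve(J.T @ J, J.T @ r)             # normal equations
        step = 1.0
        while step > 1e-8 and sse(a + step * da, b + step * db) > sse(a, b):
            step *= 0.5                                        # halve until SSE decreases
        a, b = a + step * da, b + step * db
    return a, b

x = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(1.5 * x)                  # noise-free synthetic data
a_hat, b_hat = gauss_newton(x, y, theta0=(1.0, 1.0))
```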
Energy Technology Data Exchange (ETDEWEB)
Wang, Chen-Jen; Bajpai, R.K. [Univ. of Missouri, Columbia, MO (United States)
1997-12-31
The cybernetic approach to modeling of microbial kinetics in a mixed-substrate environment has been used in this work for the fermentative production of ethanol from cheese whey. In this system, the cells grow on multiple substrates and generate metabolic energy during product formation. This article deals with the development of a mathematical model in which the concept of cell maintenance was modified in light of the specific nature of product formation. Continuous culture data for anaerobic production of ethanol by Kluyveromyces marxianus CBS 397 on glucose and lactose were used to estimate the kinetic parameters for subsequent use in predicting the behavior of microbial growth and product formation in new situations. 28 refs., 4 figs., 2 tabs.
SBMLSimulator: A Java Tool for Model Simulation and Parameter Estimation in Systems Biology
Directory of Open Access Journals (Sweden)
Alexander Dörr
2014-12-01
The identification of suitable model parameters for biochemical reactions has been recognized as a quite difficult endeavor. Parameter values from the literature or from experiments can often not directly be combined in complex reaction systems. Nature-inspired optimization techniques can find appropriate sets of parameters that calibrate a model to experimentally obtained time series data. We present SBMLsimulator, a tool that combines the Systems Biology Simulation Core Library for dynamic simulation of biochemical models with the heuristic optimization framework EvA2. SBMLsimulator provides an intuitive graphical user interface with various options as well as a fully featured command-line interface for large-scale and script-based model simulation and calibration. In a parameter estimation study based on a published model and artificial data we demonstrate the capability of SBMLsimulator to identify parameters. SBMLsimulator is useful both for interactive simulation and exploration of the parameter space and for large-scale model calibration and estimation of uncertain parameter values.
Development of regional parameter estimation equations for a macroscale hydrologic model
Abdulla, Fayez A.; Lettenmaier, Dennis P.
1997-10-01
A methodology for developing regional parameter estimation equations, designed for application to continental scale river basins, is described. The approach, which is applied to the two-layer Variable Infiltration Capacity (VIC-2L) land surface hydrologic model, uses a set of 34 unregulated calibration or "training" catchments (drainage areas 10^2-10^4 km^2) distributed throughout the Arkansas-Red River basin of the south central U.S. For each of these catchments, parameters were determined by: a) prior estimation of two of the model parameters (saturated hydraulic conductivity and pore size distribution index) from the U.S. Soil Conservation Service State Soil Geographic Data Base (STATSGO); and b) estimation of the remaining seven parameters via a search procedure that minimizes the sum of squares of differences between predicted and observed streamflow. The catchment parameters were then related to 11 ancillary distributed land surface characteristics extracted from STATSGO, and 17 variables derived from station meteorological data. The seven regression equations explained from 54 to 76% of the variance of the parameters. The most frequently occurring ancillary variables were the average permeability, saturated hydraulic conductivity, and SCS hydrologic Group B (typically soils with moderately high infiltration rates) fraction derived from STATSGO, and the average temperature and standard deviation of fall precipitation. The method was tested by comparing simulations using the regional (regression equation) parameters for six unregulated catchments not in the parameter estimation set. The model performance using the regional parameters was quite good for most of the calibration and validation catchments, which were humid and semi-humid. The model did not perform as well for the smaller number of arid to semi-arid catchments.
Estimating Model Parameters of Adaptive Software Systems in Real-Time
Kumar, Dinesh; Tantawi, Asser; Zhang, Li
Adaptive software systems have the ability to adapt to changes in workload and execution environment. In order to perform resource management through model based control in such systems, an accurate mechanism for estimating the software system's model parameters is required. This paper deals with real-time estimation of a performance model for adaptive software systems that process multiple classes of transactional workload. First, insights into the static performance model estimation problem are provided. Then an Extended Kalman Filter (EKF) design is combined with an open queueing network model to dynamically estimate the model parameters in real-time. Specific problems that are encountered in the case of multiple classes of workload are analyzed. These problems arise mainly due to the under-deterministic nature of the estimation problem. This motivates us to propose a modified design of the filter. Insights for choosing tuning parameters of the modified design, i.e., number of constraints and sampling intervals, are provided. The modified filter design is shown to effectively tackle problems with multiple classes of workload through experiments.
Behmanesh, Iman; Moaveni, Babak
2016-07-01
This paper presents a Hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the considered updating structural parameter, with its mean and variance modeled as functions of temperature and excitation amplitude. The identified modal parameters over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies and those identified from measured data after deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and accounting for only the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.
Directory of Open Access Journals (Sweden)
Riionheimo Janne
2003-01-01
We describe a technique for estimating control parameters for a plucked string synthesis model using a genetic algorithm. The model has been intensively used for sound synthesis of various string instruments but the fine tuning of the parameters has been carried out with a semiautomatic method that requires some hand adjustment with human listening. An automated method for extracting the parameters from recorded tones is described in this paper. The calculation of the fitness function utilizes knowledge of the properties of human hearing.
An evolutionary computing approach for parameter estimation investigation of a model for cholera.
Akman, Olcay; Schaefer, Elsa
2015-01-01
We consider the problem of using time-series data to inform a corresponding deterministic model and introduce the concept of genetic algorithms (GA) as a tool for parameter estimation, providing instructions for an implementation of the method that does not require access to special toolboxes or software. We give as an example a model for cholera, a disease for which there is much mechanistic uncertainty in the literature. We use GA to find parameter sets using available time-series data from the introduction of cholera in Haiti and we discuss the value of comparing multiple parameter sets with similar performances in describing the data.
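A minimal real-coded GA of the kind the authors describe, using only NumPy and no special toolboxes, might look as follows. The logistic-growth model, blend crossover, and all settings are illustrative stand-ins, not the cholera model from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_ga(t, data, bounds, pop=60, gens=80):
    """Minimal real-coded GA fitting (r, K) of logistic growth,
    I(t) = K / (1 + (K/I0 - 1) * exp(-r*t)), to time-series data."""
    I0 = data[0]
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])

    def cost(p):
        r, K = p
        model = K / (1 + (K / I0 - 1) * np.exp(-r * t))
        return np.sum((model - data) ** 2)

    P = rng.uniform(lo, hi, size=(pop, 2))
    for _ in range(gens):
        costs = np.array([cost(p) for p in P])
        elite = P[np.argsort(costs)[: pop // 2]]               # truncation selection
        parents = elite[rng.integers(0, len(elite), size=(pop - 1, 2))]
        alpha = rng.random((pop - 1, 1))
        children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # blend crossover
        children += rng.normal(0, 0.02 * (hi - lo), children.shape)     # mutation
        P = np.vstack([elite[:1], np.clip(children, lo, hi)])  # elitism: keep the best
    costs = np.array([cost(p) for p in P])
    return P[np.argmin(costs)]

t = np.linspace(0, 20, 40)
data = 1000.0 / (1 + (1000.0 / 10 - 1) * np.exp(-0.4 * t))     # synthetic, noise-free
best = fit_ga(t, data, bounds=[(0.05, 1.0), (200.0, 3000.0)])
```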
Batstone, D J; Torrijos, M; Ruiz, C; Schmidt, J E
2004-01-01
The model structure in anaerobic digestion has been clarified following publication of the IWA Anaerobic Digestion Model No. 1 (ADM1). However, parameter values are not well known, and the uncertainty and variability in the given parameter values are almost unknown. Additionally, the usual platforms for identification of parameters, namely continuous-flow laboratory digesters and batch tests, suffer from disadvantages such as long run times and difficulty in defining initial conditions, respectively. Anaerobic sequencing batch reactors (ASBRs) are sequenced into fill-react-settle-decant phases and offer promising possibilities for estimation of parameters, as they are by nature dynamic in behaviour and allow repeatable behaviour for establishing initial conditions and evaluating parameters. In this study, we estimated parameters describing winery wastewater (most COD as ethanol) degradation using data from sequencing operation, and validated these parameters using unsequenced pulses of ethanol and acetate. The model used was the ADM1, with an extension for ethanol degradation. Parameter confidence spaces were found by non-linear, correlated analysis of the two main Monod parameters: maximum uptake rate (k_m) and half saturation concentration (K_S). These parameters could be estimated together using only the measured acetate concentration (20 points per cycle). From interpolating the single-cycle acetate data to multiple cycles, we estimate that a practical "optimal" identifiability could be achieved after two cycles for the acetate parameters, and three cycles for the ethanol parameters. The parameters found performed well in the short term, and represented the pulses of acetate and ethanol (within 4 days of the winery-fed cycles) very well. The main discrepancy was poor prediction of pH dynamics, which could be due to an unidentified buffer with an overall influence the same as a weak base (possibly CaCO3). Based on this work, ASBR systems are effective for parameter
Institute of Scientific and Technical Information of China (English)
ZHANG Hua; WANG Yun-jia; LI Yong-feng
2009-01-01
A new mathematical model to estimate the parameters of the probability-integral method for mining subsidence prediction is proposed. Based on least squares support vector machine (LS-SVM) theory, it is capable of improving the precision and reliability of mining subsidence prediction. Many of the geological and mining factors involved are related in a nonlinear way. The new model is based on statistical learning theory (SLT) and empirical risk minimization (ERM) principles. Typical data collected from observation stations were used for the learning and training samples. The calculated results from the LS-SVM model were compared with the prediction results of a back propagation neural network (BPNN) model. The results show that the parameters were more precisely predicted by the LS-SVM model than by the BPNN model. The LS-SVM model was faster in computation and had better generalized performance. It provides a highly effective method for calculating the predicting parameters of the probability-integral method.
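The core of LS-SVM regression is a single linear system rather than a quadratic program. A sketch with an RBF kernel on a toy 1D problem (hyperparameters and data are illustrative, not the authors' subsidence model):

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM regression: solve the KKT linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] with an RBF kernel."""
    n = len(y)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]

    def predict(Xq):
        d2q = np.sum((Xq[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2q / (2 * sigma ** 2)) @ alpha + b

    return predict

X = np.linspace(-3, 3, 50)[:, None]
y = np.sin(X).ravel()
predict = lssvm_fit(X, y, gamma=100.0)
```

Unlike a standard SVM, every training point gets a (dense) multiplier alpha; the regularization parameter gamma trades off fit against smoothness.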
Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates
Todorovic, Andrijana; Plavsic, Jasna
2015-04-01
A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method, and the calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question that is addressed is: are there any changes in optimised parameters and model efficiency that can be linked to the changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by a year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts with the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows and logarithms of flows, and volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperature in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters
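The composite objective described above can be sketched directly; the equal weights and the volumetric-error form are assumptions based on the text, not the exact HBV-light implementation:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def composite_objective(sim, obs, w=(1 / 3, 1 / 3, 1 / 3)):
    """Equal-weight blend of NSE on flows, NSE on log-flows, and
    one minus the absolute volumetric error; higher is better, 1 is perfect."""
    ve = abs(sim.sum() - obs.sum()) / obs.sum()        # volumetric error
    return (w[0] * nse(sim, obs)
            + w[1] * nse(np.log(sim), np.log(obs))
            + w[2] * (1 - ve))

obs = np.array([1.0, 2.0, 4.0, 8.0, 4.0, 2.0])         # illustrative flows
score = composite_objective(obs * 1.1, obs)            # a slightly biased simulation
```

The log-flow term weights low-flow periods more heavily, which is why such blends are popular for all-regime calibration.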
Franz, K.; Hogue, T.; Barco, J.
2007-12-01
Identification of appropriate parameter sets for simulation of streamflow in ungauged basins has become a significant challenge for both operational and research hydrologists. This is especially difficult in the case of conceptual models, when model parameters typically must be "calibrated" or adjusted to match streamflow conditions in specific systems (i.e. some of the parameters are not directly observable). This paper addresses the performance and uncertainty associated with transferring conceptual rainfall-runoff model parameters between basins within large-scale ecoregions. We use the National Weather Service's (NWS) operational hydrologic model, the SACramento Soil Moisture Accounting (SAC-SMA) model. A Multi-Step Automatic Calibration Scheme (MACS), using the Shuffle Complex Evolution (SCE), is used to optimize SAC-SMA parameters for a group of watersheds with extensive hydrologic records from the Model Parameter Estimation Experiment (MOPEX) database. We then explore "hydroclimatic" relationships between basins to facilitate regionalization of parameters for an established ecoregion in the southeastern United States. The impact of regionalized parameters is evaluated via standard model performance statistics as well as through generation of hindcasts and probabilistic verification procedures to evaluate streamflow forecast skill. Preliminary results show climatology ("climate neighbor") to be a better indicator of transferability than physical similarities or proximity ("nearest neighbor"). The mean and median of all the parameters within the ecoregion are the poorest choice for the ungauged basin. The choice of regionalized parameter set affected the skill of the ensemble streamflow hindcasts, however, all parameter sets show little skill in forecasts after five weeks (i.e. climatology is as good an indicator of future streamflows). In addition, the optimum parameter set changed seasonally, with the "nearest neighbor" showing the highest skill in the
Adaptive Model Predictive Vibration Control of a Cantilever Beam with Real-Time Parameter Estimation
Directory of Open Access Journals (Sweden)
Gergely Takács
2014-01-01
This paper presents an adaptive-predictive vibration control system using extended Kalman filtering for the joint estimation of system states and model parameters. A fixed-free cantilever beam equipped with piezoceramic actuators serves as a test platform to validate the proposed control strategy. Deflection readings taken at the end of the beam have been used to reconstruct the position and velocity information for a second-order state-space model. In addition to the states, the dynamic system has been augmented by the unknown model parameters: stiffness, damping constant, and a voltage/force conversion constant, characterizing the actuating effect of the piezoceramic transducers. The states and parameters of this augmented system have been estimated in real time, using the hybrid extended Kalman filter. The estimated model parameters have been applied to define the continuous state-space model of the vibrating system, which in turn is discretized for the predictive controller. The model predictive control algorithm generates state predictions and dual-mode quadratic cost prediction matrices based on the updated discrete state-space models. The resulting cost function is then minimized using quadratic programming to find the sequence of optimal but constrained control inputs. The proposed active vibration control system is implemented and evaluated experimentally to investigate the viability of the control method.
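Joint state/parameter estimation with an EKF works by augmenting the state vector with the unknown parameters, exactly as the paper does for stiffness and damping. A scalar toy version (not the beam model; all noise levels and the initial guesses are illustrative):

```python
import numpy as np

def joint_ekf(y, q_x=0.05, q_a=1e-4, r=0.1):
    """EKF on the augmented state z = [x, a] for the model
    x_k = a*x_{k-1} + w_k,  y_k = x_k + v_k,
    where the unknown parameter a is estimated alongside the state x."""
    z = np.array([0.0, 0.5])                 # initial guesses for x and a
    P = np.diag([1.0, 1.0])
    Q = np.diag([q_x, q_a])
    H = np.array([[1.0, 0.0]])
    estimates = []
    for obs in y:
        # predict through the nonlinear (bilinear) transition
        x, a = z
        z = np.array([a * x, a])
        F = np.array([[a, x], [0.0, 1.0]])   # Jacobian of the transition
        P = F @ P @ F.T + Q
        # measurement update
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        z = z + (K * (obs - z[0])).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(z.copy())
    return np.array(estimates)

rng = np.random.default_rng(1)
a_true, x_t, ys = 0.95, 1.0, []
for _ in range(400):
    x_t = a_true * x_t + rng.normal(0, np.sqrt(0.05))
    ys.append(x_t + rng.normal(0, np.sqrt(0.1)))
est = joint_ekf(np.array(ys))
```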
Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel
2016-01-01
Ordinary differential equation models have become a widespread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization.
Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model
de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.
2006-01-01
The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…
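A random-walk Metropolis sampler of the kind used for item parameters can be sketched for the simplest case, a single Rasch difficulty with known person abilities. The GGUM likelihood itself is considerably more involved; the prior, step size, and data here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def mh_difficulty(responses, abilities, n_iter=5000, step=0.3):
    """Random-walk Metropolis sampler for a single Rasch item difficulty b,
    P(correct) = 1 / (1 + exp(-(theta - b))), with a N(0, 2^2) prior on b."""
    def log_post(b):
        p = 1.0 / (1.0 + np.exp(-(abilities - b)))
        ll = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
        return ll - b ** 2 / (2 * 2.0 ** 2)

    b, chain = 0.0, []
    lp = log_post(b)
    for _ in range(n_iter):
        prop = b + step * rng.normal()       # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            b, lp = prop, lp_prop
        chain.append(b)
    return np.array(chain)

theta = rng.normal(0, 1, 300)                # known person abilities
b_true = 0.7
y = (rng.random(300) < 1 / (1 + np.exp(-(theta - b_true)))).astype(float)
chain = mh_difficulty(y, theta)
b_hat = chain[1000:].mean()                  # posterior mean after burn-in
```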
Revisiting the 4-Parameter Item Response Model: Bayesian Estimation and Application.
Culpepper, Steven Andrew
2016-12-01
There has been renewed interest in Barton and Lord's (An upper asymptote for the three-parameter logistic item response model (Tech. Rep. No. 80-20). Educational Testing Service, 1981) four-parameter item response model. This paper presents a Bayesian formulation that extends Béguin and Glas (MCMC estimation and some model fit analysis of multidimensional IRT models. Psychometrika, 66 (4):541-561, 2001) and proposes a model for the four-parameter normal ogive (4PNO) model. Monte Carlo evidence is presented concerning the accuracy of parameter recovery. The simulation results support the use of less informative uniform priors for the lower and upper asymptotes, which is an advantage to prior research. Monte Carlo results provide some support for using the deviance information criterion and [Formula: see text] index to choose among models with two, three, and four parameters. The 4PNO is applied to 7491 adolescents' responses to a bullying scale collected under the 2005-2006 Health Behavior in School-Aged Children study. The results support the value of the 4PNO to estimate lower and upper asymptotes in large-scale surveys.
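The 4PNO item characteristic curve itself is compact: c and d are the lower and upper asymptotes the paper estimates. A sketch with illustrative parameter values:

```python
import numpy as np
from scipy.stats import norm

def icc_4pno(theta, a, b, c, d):
    """Four-parameter normal-ogive item characteristic curve:
    P(theta) = c + (d - c) * Phi(a * (theta - b)),
    with discrimination a, difficulty b, lower asymptote c, upper asymptote d."""
    return c + (d - c) * norm.cdf(a * (theta - b))

theta = np.linspace(-4, 4, 9)
p = icc_4pno(theta, a=1.2, b=0.0, c=0.15, d=0.95)
```

The lower asymptote c captures guessing, while d < 1 allows even very able respondents to miss the item (e.g., through carelessness), which is what distinguishes the 4-parameter model from the 3PL.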
Global parameter estimation of the Cochlodinium polykrikoides model using bioassay data
Institute of Scientific and Technical Information of China (English)
CHO Hong-Yeon; PARK Kwang-Soon; KIM Sung
2016-01-01
Cochlodinium polykrikoides is a notoriously harmful algal species that inflicts severe damage on the aquacultures of the coastal seas of Korea and Japan. Information on its expected movement tracks and boundaries of influence is very useful and important for the effective establishment of a reduction plan. In general, the information is supported by a red-tide (a.k.a. algal bloom) model. The performance of the model is highly dependent on the accuracy of its parameters, which are the coefficients of functions approximating the biological growth and loss patterns of C. polykrikoides. These parameters have been estimated using bioassay data composed of growth-limiting factor and net growth rate value pairs. In the case of C. polykrikoides, the estimated parameters differ from one another depending on the data used, because the bioassay data are plentiful compared to those for other algal species. The parameters estimated from one specific dataset can be viewed as locally optimized because they are adjusted only by that dataset. In cases where another data set is used, the estimation error might be considerable. In this study, the parameters are estimated using all available data sets rather than any one specific data set, and can thus be considered globally optimized. The cost function for the optimization is defined as the integrated mean squared estimation error, i.e., the difference between the experimental and estimated rates. Based on quantitative error analysis, the root-mean squared errors of the global parameters show smaller values, approximately 25%-50%, than the values of the local parameters. In addition, bias is removed completely in the case of the globally estimated parameters. The parameter sets can be used as the reference default values of a red-tide model because they are optimal and representative. However, additional tuning of the parameters using the in-situ monitoring data is highly required. As opposed to the bioassay
Parameter Estimation for Traffic Noise Models Using a Harmony Search Algorithm
Directory of Open Access Journals (Sweden)
Deok-Soon An
2013-01-01
A technique has been developed for predicting road traffic noise for environmental assessment, taking into account traffic volume as well as road surface conditions. The ASJ model (ASJ Prediction Model for Road Traffic Noise, 1999), which is based on the sound power level of the noise emitted by the interaction between the road surface and tires, employs regression models for two road surface types: dense-graded asphalt (DGA) and permeable asphalt (PA). However, these models are not applicable to other types of road surfaces. Accordingly, this paper introduces a parameter estimation procedure for ASJ-based noise prediction models, utilizing a harmony search (HS) algorithm. Traffic noise measurement data for four different vehicle types were used in the algorithm to determine the regression parameters for several road surface types. The parameters of the traffic noise prediction models were evaluated using another measurement set, and good agreement was observed between the predicted and measured sound power levels.
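A bare-bones harmony search, with the standard memory-consideration, pitch-adjustment, and random-selection moves, is sketched below. It is fitted here to an illustrative sound-power regression of the form L = p0 + p1·log10(v); the regression form, data, and all HS settings are assumptions, not the ASJ model's actual specification:

```python
import numpy as np

rng = np.random.default_rng(4)

def harmony_search(cost, lo, hi, hms=20, hmcr=0.9, par=0.3, bw=0.05, n_iter=2000):
    """Minimal harmony search: new candidates mix memory consideration
    (prob. hmcr), pitch adjustment (prob. par, bandwidth bw) and random picks."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = len(lo)
    hm = rng.uniform(lo, hi, (hms, dim))              # harmony memory
    costs = np.array([cost(h) for h in hm])
    for _ in range(n_iter):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:
                new[j] = hm[rng.integers(hms), j]     # pick a stored value
                if rng.random() < par:                # pitch adjustment
                    new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
            else:
                new[j] = rng.uniform(lo[j], hi[j])    # fresh random value
        new = np.clip(new, lo, hi)
        c = cost(new)
        worst = np.argmax(costs)
        if c < costs[worst]:                          # replace the worst harmony
            hm[worst], costs[worst] = new, c
    return hm[np.argmin(costs)], costs.min()

v = np.array([40.0, 60.0, 80.0, 100.0])              # speeds, km/h (illustrative)
L = 86.0 + 30.0 * np.log10(v)                        # synthetic sound power levels, dB
best, best_cost = harmony_search(
    lambda p: np.sum((p[0] + p[1] * np.log10(v) - L) ** 2),
    lo=[50.0, 0.0], hi=[120.0, 60.0])
```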
Widyaningsih, Purnami; Retno Sari Saputro, Dewi; Nugrahani Putri, Aulia
2017-06-01
The GWOLR model combines the geographically weighted regression (GWR) and ordinal logistic regression (OLR) models. Its parameter estimation employs maximum likelihood estimation. Such parameter estimation, however, yields a difficult-to-solve system of nonlinear equations, so a numerical approximation approach is required. The iterative approximation approach, in general, uses the Newton-Raphson (NR) method. The NR method has a disadvantage: its Hessian matrix of second derivatives must be recomputed at every iteration, and it does not always produce converging results. To address this, the NR method is modified by replacing its Hessian matrix with the Fisher information matrix, a technique termed Fisher scoring (FS). The present research seeks to determine GWOLR model parameter estimation using the Fisher scoring method and to apply the estimation to data on the level of vulnerability to Dengue Hemorrhagic Fever (DHF) in Semarang. The research concludes that health facilities make the greatest contribution to the probability of the number of DHF sufferers in both villages. Based on the number of sufferers, the IR category of DHF in both villages can be determined.
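For the ordinary (non-geographically-weighted) binary logistic model, Fisher scoring has a compact form: replace the Hessian with the expected information X^T W X. A sketch on synthetic data (for the canonical logit link this coincides with Newton-Raphson; the full GWOLR estimator is more involved):

```python
import numpy as np

def fisher_scoring_logistic(X, y, n_iter=25):
    """Fisher scoring for logistic regression:
    beta <- beta + I(beta)^{-1} U(beta),
    with score U = X^T (y - p) and expected information I = X^T W X."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)                      # variance weights
        U = X.T @ (y - p)                    # score vector
        I = X.T @ (X * W[:, None])           # Fisher information matrix
        beta = beta + np.linalg.solve(I, U)
    return beta

rng = np.random.default_rng(5)
X = np.column_stack([np.ones(500), rng.normal(0, 1, 500)])
beta_true = np.array([-0.5, 1.2])
y = (rng.random(500) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)
beta_hat = fisher_scoring_logistic(X, y)
```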
The limiting behavior of the estimated parameters in a misspecified random field regression model
DEFF Research Database (Denmark)
Dahl, Christian Møller; Qin, Yu
This paper examines the limiting properties of the estimated parameters in the random field regression model recently proposed by Hamilton (Econometrica, 2001). Though the model is parametric, it enjoys the flexibility of the nonparametric approach since it can approximate a large collection of nonlinear functions and it has the added advantage that there is no "curse of dimensionality." Contrary to the existing literature on the asymptotic properties of the estimated parameters in random field models, our results do not require that the explanatory variables are sampled on a grid. However, ... convenient new uniform convergence results that we propose. This theory may have applications beyond those presented here. Our results indicate that classical statistical inference techniques, in general, work very well for random field regression models in finite samples and that these models successfully ...
Cuch, Daniel A; Hasi, Claudio D El
2015-01-01
In this work we are concerned with the inverse problem of estimating modeling parameters for reactive bimolecular transport based on experimental data that are non-uniformly distributed along the interval where the process takes place. We propose a methodology that can help determine the intervals where most of the data should be taken in order to obtain a good estimation of the parameters. To reduce the cost of laboratory experiments, we propose to simulate data where they are needed but not available, a PreProcessing Data Fitting (PPDF). We applied this strategy to the estimation of parameters for an advection-diffusion-reaction problem in a porous medium. Each step is explained in detail and simulation results are shown and compared with previous ones.
Reimer, Joscha; Piwonski, Jaroslaw; Slawig, Thomas
2016-04-01
The statistical significance of any model-data comparison strongly depends on the quality of the data used and on the criterion used to measure the model-to-data misfit. The statistical properties (such as mean values, variances and covariances) of the data should be taken into account by choosing a criterion such as ordinary, weighted or generalized least squares. Moreover, the criterion can be restricted to regions or model quantities which are of special interest. This choice influences the quality of the model output (also for quantities that were not measured) and the results of a parameter estimation or optimization process. We have estimated the parameters of a three-dimensional and time-dependent marine biogeochemical model describing the phosphorus cycle in the ocean. For this purpose, we have developed a statistical model for measurements of phosphate and dissolved organic phosphorus. This statistical model includes variances and correlations varying with the time and location of the measurements. We compared the obtained estimations of model output and parameters for different criteria. Another question is whether (and which) further measurements would increase the model's quality at all. Using experimental design criteria, the information content of measurements can be quantified. This may refer to the uncertainty in unknown model parameters as well as the uncertainty regarding which model is closer to reality. By (another) optimization, optimal measurement properties such as locations, time instants and quantities to be measured can be identified. We have optimized such properties for additional measurements for the parameter estimation of the marine biogeochemical model. For this purpose, we have quantified the uncertainty in the optimal model parameters and the model output itself regarding the uncertainty in the measurement data using the (Fisher) information matrix. Furthermore, we have calculated the uncertainty reduction by additional measurements depending on time
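The Fisher-information-based uncertainty quantification described above can be sketched in the linearized least-squares setting, where adding a measurement adds a rank-one term to the information matrix and shrinks the parameter covariance. The Jacobian (sensitivity) and covariance values below are illustrative:

```python
import numpy as np

def parameter_covariance(jacobian, data_cov):
    """Linearized uncertainty of least-squares parameter estimates:
    Fisher information F = J^T C^{-1} J; covariance of the estimates = F^{-1}."""
    F = jacobian.T @ np.linalg.inv(data_cov) @ jacobian
    return np.linalg.inv(F)

# sensitivities of 4 measurements to 2 model parameters (illustrative values)
J = np.array([[1.0, 0.2], [0.8, 0.5], [0.3, 1.0], [0.1, 1.2]])
C = np.diag([0.1, 0.1, 0.2, 0.2])        # measurement error covariance
cov0 = parameter_covariance(J, C)

# one extra measurement with sensitivity [0.5, 0.5] adds information
J2 = np.vstack([J, [0.5, 0.5]])
C2 = np.diag(np.append(np.diag(C), 0.1))
cov1 = parameter_covariance(J2, C2)
```

Comparing the diagonal of cov1 with that of cov0 quantifies the uncertainty reduction achieved by the candidate measurement, which is precisely the quantity an experimental design criterion (e.g., A- or D-optimality) would rank.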
Estimation of k-ε parameters using surrogate models and jet-in-crossflow data
Energy Technology Data Exchange (ETDEWEB)
Lefantzi, Sophia [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Arunajatesan, Srinivasan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Dechant, Lawrence [Sandia National Lab. (SNL-CA), Livermore, CA (United States)
2014-11-01
We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds Averaged Navier Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDF), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently a quick-running surrogate is used in place of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (C_μ, C_ε2, C_ε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. Thus the primary reason for poor predictive skill of RANS, when using nominal
Estimation of k-ε parameters using surrogate models and jet-in-crossflow data.
Energy Technology Data Exchange (ETDEWEB)
Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan; Dechant, Lawrence
2015-02-01
We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds-Averaged Navier-Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDFs), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently, a quick-running surrogate is used instead of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (C_μ, C_ε2, C_ε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limits of applicability of the calibration by testing at off-calibration flow regimes. We find that the calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. Thus the primary reason for poor predictive skill of RANS, when using nominal values of the turbulence model
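The surrogate-accelerated MCMC idea above can be sketched in a few lines: each Metropolis step evaluates a cheap surrogate rather than running the expensive simulator. Everything below is an illustrative stand-in (a quadratic response surface, a single scalar parameter c, a flat box prior), not the paper's actual surrogate or k-ε parameters.

```python
import math
import random

random.seed(1)

# Toy stand-in for an expensive RANS simulator: the observable as a quadratic
# response surface in a single model parameter c (hypothetical).
def surrogate(c):
    return 2.0 * c + 0.5 * c ** 2

c_true, sigma = 1.2, 0.05
data = [surrogate(c_true) + random.gauss(0, sigma) for _ in range(20)]

def log_post(c):
    if not 0.0 < c < 3.0:            # flat prior on a physically plausible box
        return -math.inf
    y = surrogate(c)
    return -sum((d - y) ** 2 for d in data) / (2 * sigma ** 2)

# Random-walk Metropolis: each step costs one surrogate call, not a RANS run.
c, lp = 1.0, log_post(1.0)
chain = []
for _ in range(5000):
    c_new = c + random.gauss(0, 0.05)
    lp_new = log_post(c_new)
    if lp_new >= lp or random.random() < math.exp(lp_new - lp):
        c, lp = c_new, lp_new
    chain.append(c)

posterior_mean = sum(chain[1000:]) / len(chain[1000:])
print(round(posterior_mean, 2))      # should sit near c_true
```

In the paper the prior additionally encodes the classifier's "well-behaved region"; here the box prior plays that role in miniature.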
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
Parameter estimation in food science.
Dolan, Kirk D; Mishra, Dharmendra K
2013-01-01
Modeling includes two distinct parts, the forward problem and the inverse problem. The forward problem (computing y(t) given known parameters) has received much attention, especially with the explosion of commercial simulation software. What is rarely made clear is that the forward results can be no better than the accuracy of the parameters. Therefore, the inverse problem (estimation of parameters given measured y(t)) is at least as important as the forward problem. However, in the food science literature little attention has been paid to the accuracy of parameters. The purpose of this article is to summarize the state of the art of parameter estimation in food science, to review some of the common food science models used for parameter estimation (for microbial inactivation and growth, thermal properties, and kinetics), and to suggest a generic method to standardize parameter estimation, thereby making research results more useful. Scaled sensitivity coefficients are introduced and shown to be important in parameter identifiability. Sequential estimation and optimal experimental design are also reviewed as powerful parameter estimation methods that are beginning to be used in the food science literature.
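A scaled sensitivity coefficient, X_p = p * ∂y/∂p, puts the influence of each parameter on the same scale as the model output, so parameters whose sensitivities are small or mutually proportional can be flagged as poorly identifiable. A minimal sketch, using a first-order microbial inactivation model as an illustrative example (the model and values are assumptions, not taken from the article):

```python
import math

# First-order microbial inactivation: N(t) = N0 * exp(-k * t)
def model(t, N0, k):
    return N0 * math.exp(-k * t)

# Scaled sensitivity coefficient X_p = p * dN/dp, computed by central
# finite differences so it works for any model, not just this one.
def scaled_sensitivity(t, params, name, h=1e-6):
    p = params[name]
    up = dict(params, **{name: p * (1 + h)})
    dn = dict(params, **{name: p * (1 - h)})
    dNdp = (model(t, **up) - model(t, **dn)) / (2 * p * h)
    return p * dNdp

params = {"N0": 1e6, "k": 0.3}
for t in (0.0, 5.0, 10.0):
    print(t, round(scaled_sensitivity(t, params, "k"), 1),
          round(scaled_sensitivity(t, params, "N0"), 1))
```

At t = 0 the sensitivity to k vanishes (the data carry no information about k there), which is exactly the kind of diagnosis these coefficients provide before any fitting is attempted.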
Centrifuge modeling of one-step outflow tests for unsaturated parameter estimations
Directory of Open Access Journals (Sweden)
H. Nakajima
2006-05-01
Full Text Available Centrifuge modeling of one-step outflow tests was carried out using a 2-m radius geotechnical centrifuge, and the cumulative outflow and transient pore pressure were measured during the tests at multiple gravity levels. Based on the scaling law of centrifuge modeling, the measurements generally showed reasonable agreement with prototype data calculated from forward simulations with input parameters determined from standard laboratory tests. The parameter optimizations were examined for three different combinations of input data sets using the test measurements. Within the gravity levels examined in this study (up to 40 g), the optimized unsaturated parameters compared well when accurate pore pressure measurements were included along with cumulative outflow as input data. The centrifuge modeling technique, with its capability to implement a variety of instrumentation under well-controlled initial and boundary conditions, shortens testing time and can provide significant information for the parameter estimation procedure.
Centrifuge modeling of one-step outflow tests for unsaturated parameter estimations
Directory of Open Access Journals (Sweden)
H. Nakajima
2006-01-01
Full Text Available Centrifuge modeling of one-step outflow tests was carried out using a 2-m radius geotechnical centrifuge, and the cumulative outflow and transient pore water pressure were measured during the tests at multiple gravity levels. Based on the scaling laws of centrifuge modeling, the measurements generally showed reasonable agreement with prototype data calculated from forward simulations with input parameters determined from standard laboratory tests. The parameter optimizations were examined for three different combinations of input data sets using the test measurements. Within the gravity levels examined in this study (up to 40 g), the optimized unsaturated parameters compared well when accurate pore water pressure measurements were included along with cumulative outflow as input data. With its capability to implement a variety of instrumentation under well-controlled initial and boundary conditions and to shorten testing time, the centrifuge modeling technique is attractive as an alternative experimental method that provides more freedom to set inverse problem conditions for the parameter estimation.
Finch, Holmes; Edwards, Julianne M.
2016-01-01
Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…
A novel parameter estimation method for metal oxide surge arrester models
Indian Academy of Sciences (India)
Mehdi Nafar; Gevork B Gharehpetian; Taher Niknam
2011-12-01
Accurate modelling and exact determination of Metal Oxide (MO) surge arrester parameters are very important for arrester allocation, insulation coordination studies and system reliability calculations. In this paper, a new technique, which combines the Adaptive Particle Swarm Optimization (APSO) and Ant Colony Optimization (ACO) algorithms and links MATLAB and EMTP, is proposed to estimate the parameters of MO surge arrester models. The proposed algorithm is named Modified Adaptive Particle Swarm Optimization (MAPSO). In the proposed algorithm, to overcome the drawback of the PSO algorithm (convergence to local optima), the inertia weight is tuned by using fuzzy rules, and the cognitive and social parameters are self-adaptively adjusted. Also, to improve the global search capability and prevent convergence to local minima, the ACO algorithm is combined with the proposed APSO algorithm. The transient models of the MO surge arrester have been simulated by using ATP-EMTP. The results of the simulations have been applied to the program, which is based on the MAPSO algorithm and can determine the fitness and parameters of different models. The validity and accuracy of the estimated parameters of the surge arrester models are assessed by comparing the predicted residual voltage with experimental results.
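The core of any such PSO-based scheme can be sketched compactly. The version below uses a linearly decaying inertia weight as a simplified stand-in for the fuzzy-tuned inertia of MAPSO, and a toy quadratic error surface as a stand-in for the EMTP residual-voltage mismatch; the "true" parameters (1.0, 2.0) are invented for illustration.

```python
import random

random.seed(0)

# Minimal particle swarm with a linearly decaying inertia weight (a simplified
# stand-in for the fuzzy-rule inertia tuning described above).
def pso(fitness, dim, n=20, iters=100, lo=-5.0, hi=5.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = pbest[pbest_f.index(min(pbest_f))][:]
    for it in range(iters):
        w = 0.9 - 0.5 * it / iters        # inertia decays from 0.9 to 0.4
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < fitness(g):
                    g = pos[i][:]
    return g

# Hypothetical error between simulated and measured residual voltage,
# minimized at the invented "true" arrester parameters (1.0, 2.0).
err = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
best = pso(err, 2)
print([round(x, 2) for x in best])
```

In the real scheme each fitness evaluation would trigger an ATP-EMTP transient simulation, which is why convergence speed and avoidance of local minima matter so much.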
Structural observability analysis and EKF based parameter estimation of building heating models
Directory of Open Access Journals (Sweden)
D.W.U. Perera
2016-07-01
Full Text Available Research on enhanced energy-efficient buildings has received much recognition in recent years owing to their high energy consumption. Increasing energy needs can be precisely controlled by practicing advanced controllers for building Heating, Ventilation, and Air-Conditioning (HVAC) systems. Advanced controllers require a mathematical building heating model to operate, and these models need to be accurate and computationally efficient. One main concern associated with such models is the accurate estimation of the unknown model parameters. This paper presents the feasibility of implementing a simplified building heating model and the computation of physical parameters using an off-line approach. Structural observability analysis is conducted using graph-theoretic techniques to analyze the observability of the developed system model. Then the Extended Kalman Filter (EKF) algorithm is utilized for parameter estimation using real measurements of a single-zone building. The simulation-based results confirm that even with a simple model, the EKF follows the state variables accurately. The predicted parameters vary depending on the inputs and disturbances.
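The standard trick for EKF-based parameter estimation is to append the unknown parameter to the state vector and let the filter estimate both jointly. A minimal sketch on a hypothetical one-zone first-order heating model (coefficients invented for illustration, not a calibrated building):

```python
import random

random.seed(3)

# Joint state-parameter EKF for a one-zone heating model (illustrative):
#   T[k+1] = T[k] + dt * a * (T_out - T[k]) + dt * b * u
# The unknown heat-loss coefficient a is appended to the state, x = [T, a].
dt, b, u, T_out, a_true = 0.1, 0.5, 10.0, 5.0, 0.3

T, meas = 20.0, []
for _ in range(400):                       # simulate noisy measurements
    T += dt * a_true * (T_out - T) + dt * b * u
    meas.append(T + random.gauss(0, 0.2))

x = [20.0, 0.1]                            # deliberately wrong guess for a
P = [[0.04, 0.0], [0.0, 0.25]]
Q, R = [[1e-4, 0.0], [0.0, 1e-5]], 0.04
for z in meas:
    # predict: propagate state and covariance with the local Jacobian F
    Tp = x[0] + dt * x[1] * (T_out - x[0]) + dt * b * u
    F = [[1 - dt * x[1], dt * (T_out - x[0])], [0.0, 1.0]]
    x = [Tp, x[1]]
    FP = [[sum(F[i][k] * P[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    P = [[sum(FP[i][k] * F[j][k] for k in range(2)) + Q[i][j] for j in range(2)] for i in range(2)]
    # update with temperature measurement z (H = [1, 0])
    S = P[0][0] + R
    K = [P[0][0] / S, P[1][0] / S]
    innov = z - x[0]
    x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
    P = [[P[i][j] - K[i] * P[0][j] for j in range(2)] for i in range(2)]

print(round(x[1], 2))                      # estimated a, near a_true = 0.3
```

The structural observability analysis mentioned in the abstract answers, before filtering, whether the augmented state [T, a] is observable from the chosen measurements at all.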
A Consistent Direct Method for Estimating Parameters in Ordinary Differential Equations Models
Holte, Sarah E.
2016-01-01
Ordinary differential equations provide an attractive framework for modeling temporal dynamics in a variety of scientific settings. We show how consistent estimation for parameters in ODE models can be obtained by modifying a direct (non-iterative) least squares method similar to the direct methods originally developed by Himmelblau, Jones and Bischoff. Our method is called the bias-corrected least squares (BCLS) method since it is a modification of least squares methods known to be biased. Co...
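The plain direct method the abstract refers to can be shown in a few lines: approximate the derivative by finite differences and regress it on the state, avoiding any iterative ODE solving. This sketch is the uncorrected method only (with noisy data its estimate is biased, which is what a bias-corrected variant such as BCLS is designed to repair); the model dx/dt = -k·x and all values are illustrative.

```python
import math
import random

random.seed(7)

# Direct (non-iterative) estimation for dx/dt = -k * x.
k_true, dt = 0.8, 0.05
t = [i * dt for i in range(101)]
x = [math.exp(-k_true * ti) + random.gauss(0, 0.001) for ti in t]

# Central differences for dx/dt, then the least-squares slope of dx/dt on x
# gives -k directly, with no ODE solver in the loop.
dxdt = [(x[i + 1] - x[i - 1]) / (2 * dt) for i in range(1, len(x) - 1)]
xs = x[1:-1]
k_hat = -sum(a * b for a, b in zip(xs, dxdt)) / sum(a * a for a in xs)
print(round(k_hat, 2))  # near k_true = 0.8
```

With larger measurement noise the differencing step amplifies error and the regression becomes biased, motivating the correction the paper develops.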
Khaled MAMMAR; CHAKER, Abdelkader
2013-01-01
In this paper, a new approach based on design of experiments (DoE) methodology is used to estimate the optimal values of the unknown model parameters of a proton exchange membrane fuel cell (PEMFC). The proposed approach combines a central composite face-centered (CCF) design and a numerical PEMFC electrochemical model. Simulation results obtained using the electrochemical model help to predict the cell voltage in terms of inlet partial pressures of hydrogen and oxygen, stack temperature, and operating current. The value o...
Parameter estimation in truss beams using Timoshenko beam model with damping
Sun, C. T.; Juang, J. N.
1983-01-01
Truss beams with members having viscous damping are modeled with a Timoshenko beam. Procedures for deriving the equivalent bending rigidity, transverse shear rigidity, and damping are presented. Explicit expressions for these equivalent beam properties are obtained for a specific truss beam. The beam model thus established is then used to investigate the effect of damping in free vibration. Finally, the beam is employed in the estimation of structural parameters in a simply-supported truss beam using a random search algorithm.
Parameter Estimation of a Delay Time Model of Wearing Parts Based on Objective Data
Directory of Open Access Journals (Sweden)
Y. Tang
2015-01-01
Full Text Available The wearing parts of a system have a very high failure frequency, making it necessary to carry out continual functional inspections and maintenance to protect the system from unscheduled downtime. This allows for the collection of a large amount of maintenance data. Taking the unique characteristics of the wearing parts into consideration, we establish their respective delay time models in ideal inspection cases and nonideal inspection cases. The model parameters are estimated entirely using the collected maintenance data. Then, a likelihood function of all renewal events is derived based on their occurring probability functions, and the model parameters are calculated with the maximum likelihood function method, which is solved by the CRM. Finally, using two wearing parts from the oil and gas drilling industry as examples—the filter element and the blowout preventer rubber core—the parameters of the distribution function of the initial failure time and the delay time for each example are estimated, and their distribution functions are obtained. Such parameter estimation based on objective data will contribute to the optimization of the reasonable function inspection interval and will also provide some theoretical models to support the integrity management of equipment or systems.
Fast Parameters Estimation in Medication Efficacy Assessment Model for Heart Failure Treatment
Directory of Open Access Journals (Sweden)
Yinzi Ren
2012-01-01
Full Text Available Introduction. Heart failure (HF is a common and potentially fatal condition. Cardiovascular research has focused on medical therapy for HF. Theoretical modelling could enable simulation and evaluation of the effectiveness of medications. Furthermore, the models could also help predict patients’ cardiac response to the treatment which will be valuable for clinical decision-making. Methods. This study presents a fast parameters estimation algorithm for constructing a cardiovascular model for medicine evaluation. The outcome of HF treatment is assessed by hemodynamic parameters and a comprehensive index furnished by the model. Angiotensin-converting enzyme inhibitors (ACEIs were used as a model drug in this study. Results. Our simulation results showed different treatment responses to enalapril and lisinopril, which are both ACEI drugs. A dose-effect was also observed in the model simulation. Conclusions. Our results agreed well with the findings from clinical trials and previous literature, suggesting the validity of the model.
Institute of Scientific and Technical Information of China (English)
无
2003-01-01
In this paper a stochastic volatility model is considered. That is, a log price process Y which is given in terms of a volatility process V is studied. The latter is defined such that the log price possesses some of the properties empirically observed by Barndorff-Nielsen & Jiang [6]. In the model there are two sets of unknown parameters, one set corresponding to the marginal distribution of V and one to the autocorrelation of V. Based on discrete time observations of the log price the authors discuss how to estimate the parameters appearing in the marginal distribution and find the asymptotic properties.
Parameter estimation of Black-Scholes-Merton model by random observation
Vladeva, Dimitrinka I.
2015-11-01
Stochastic processes are frequently used to model various scientific problems in fields ranging from finance and biology to engineering and physical science. In this paper we consider the Black-Scholes-Merton model with constant coefficients and find unbiased and consistent estimators for the unknown parameters when the observation times t0, t1, …, tn form a point process independent of the Wiener process. Here we prove good properties of the estimators without any condition of the type max_{1≤i≤n}(t_i - t_{i-1}) → 0 as n → ∞, as in other sampling schemes.
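For intuition, simple estimators of this flavor can be built from log-increments observed on an irregular time grid; the increments of geometric Brownian motion satisfy log S(t_i) − log S(t_{i−1}) ~ N((μ − σ²/2)Δ_i, σ²Δ_i). The estimators below are illustrative method-of-moments constructions, not the paper's exact ones, and note the observation gaps do not shrink as n grows:

```python
import math
import random

random.seed(11)

# Geometric Brownian motion dS = mu*S dt + sigma*S dW, observed at irregular
# times t_0 < t_1 < ... < t_n with non-shrinking random gaps.
mu, sigma = 0.1, 0.3
times = [0.0]
for _ in range(5000):
    times.append(times[-1] + random.uniform(0.01, 0.1))

r, d = [], []                             # log-increments and time gaps
for t0, t1 in zip(times, times[1:]):
    gap = t1 - t0
    inc = (mu - sigma ** 2 / 2) * gap + sigma * math.sqrt(gap) * random.gauss(0, 1)
    r.append(inc)
    d.append(gap)

T = sum(d)
drift_hat = sum(r) / T                    # estimates mu - sigma^2/2
var_hat = sum((ri - drift_hat * di) ** 2 for ri, di in zip(r, d)) / T
sigma_hat = math.sqrt(var_hat)
mu_hat = drift_hat + var_hat / 2
print(round(sigma_hat, 2), round(mu_hat, 1))
```

Consistency here comes from the total observation span T growing with n, which is why no max Δ_i → 0 condition is needed.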
Parameters estimation using the first passage times method in a jump-diffusion model
Khaldi, K.; Meddahi, S.
2016-06-01
This paper makes two contributions: (1) it presents a new method, a generalization of the first passage time (FPT) method to all passage times (the GPT method), for estimating the parameters of a stochastic jump-diffusion process; (2) it compares, on a time series of gold share prices, the empirical estimation and forecast results obtained with the GPT method against those obtained with the method of moments and the FPT method applied to the Merton Jump-Diffusion (MJD) model.
Effect of assay measurement error on parameter estimation in concentration-QTc interval modeling.
Bonate, Peter L
2013-01-01
Linear mixed-effects models (LMEMs) of concentration-double-delta QTc intervals (QTc intervals corrected for placebo and baseline effects) assume that the concentration measurement error is negligible, which is an incorrect assumption. Previous studies have shown in linear models that independent-variable error can attenuate the slope estimate with a corresponding increase in the intercept. Monte Carlo simulation was used to examine the impact of assay measurement error (AME) on the parameter estimates of an LMEM and a nonlinear MEM (NMEM) concentration-ddQTc interval model from a 'typical' thorough QT study. For the LMEM, the type I error rate was unaffected by assay measurement error. Significant slope attenuation (>10%) occurred when the AME exceeded 40%, independent of the sample size. Increasing AME also decreased the between-subject variance of the slope, increased the residual variance, and had no effect on the between-subject variance of the intercept. For a typical analytical assay having an assay measurement error of less than 15%, the relative bias in the estimates of the model parameters and variance components was less than 15% in all cases. The NMEM appeared to be more robust to AME as most parameters were unaffected by measurement error. Monte Carlo simulation was then used to determine whether the simulation-extrapolation method of parameter bias correction could be applied to cases of large AME in LMEMs. For analytical assays with large AME (>30%), the simulation-extrapolation method could correct biased model parameter estimates to near-unbiased levels.
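The simulation-extrapolation (SIMEX) idea is simple to demonstrate: deliberately add extra measurement error at several levels λ, watch the slope estimate attenuate further, and extrapolate the trend back to λ = −1 (no error). The sketch below uses a toy simple linear regression rather than a mixed-effects QTc model, with all numbers invented for illustration:

```python
import random

random.seed(5)

# SIMEX bias correction on a toy linear model y = 2*x + noise, where x is
# observed with assay-like measurement error of known variance tau2.
n, beta, tau2 = 2000, 2.0, 0.25
x_true = [random.gauss(0, 1) for _ in range(n)]
x_obs = [xt + random.gauss(0, tau2 ** 0.5) for xt in x_true]
y = [beta * xt + random.gauss(0, 0.1) for xt in x_true]

def slope(xs, ys):
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

def slope_at(lam, reps=100):
    """Average slope after adding extra error with variance lam * tau2."""
    acc = 0.0
    for _ in range(reps):
        xs = [x + random.gauss(0, (lam * tau2) ** 0.5) for x in x_obs]
        acc += slope(xs, y)
    return acc / reps

s0 = slope(x_obs, y)                  # naive estimate, attenuated toward 0
s1, s2 = slope_at(1.0), slope_at(2.0)
# quadratic through (0, s0), (1, s1), (2, s2), extrapolated to lam = -1
c = (s2 - 2 * s1 + s0) / 2
b = s1 - s0 - c
simex = s0 - b + c
print(round(s0, 2), round(simex, 2))  # naive attenuated; SIMEX nearer 2.0
```

The quadratic extrapolant is the conventional choice; it recovers most, though not all, of the attenuation bias, which mirrors the "near-unbiased" wording of the abstract.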
Directory of Open Access Journals (Sweden)
Khaled MAMMAR
2013-11-01
Full Text Available In this paper, a new approach based on design of experiments (DoE) methodology is used to estimate the optimal values of the unknown model parameters of a proton exchange membrane fuel cell (PEMFC). The proposed approach combines a central composite face-centered (CCF) design and a numerical PEMFC electrochemical model. Simulation results obtained using the electrochemical model help to predict the cell voltage in terms of inlet partial pressures of hydrogen and oxygen, stack temperature, and operating current. The model and the CCF design methodology are then used for a parametric analysis of the electrochemical model, making it possible to evaluate the relative importance of each parameter to the simulation accuracy. Moreover, this methodology is able to determine the exact values of the parameters from the manufacturer's data. It was tested on the BCS 500-W stack PEM generator, a stack rated at 500 W, manufactured by the American company BCS Technologies FC.
Hydrological model performance and parameter estimation in the wavelet-domain
Directory of Open Access Journals (Sweden)
B. Schaefli
2009-10-01
Full Text Available This paper proposes a method for rainfall-runoff model calibration and performance analysis in the wavelet domain by fitting the estimated wavelet power spectrum (a representation of the time-varying frequency content of a time series) of a simulated discharge series to that of the corresponding observed time series. As discussed in this paper, calibrating hydrological models so as to reproduce the time-varying frequency content of the observed signal can lead to different results than parameter estimation in the time domain. Therefore, wavelet-domain parameter estimation has the potential to give new insights into model performance and to reveal model structural deficiencies. We apply the proposed method to synthetic case studies and a real-world discharge modeling case study and discuss how model diagnosis can benefit from an analysis in the wavelet domain. The results show that for the real-world case study of precipitation-runoff modeling for a high alpine catchment, the calibrated discharge simulation captures the dynamics of the observed time series better than the results obtained through calibration in the time domain. In addition, the wavelet-domain performance assessment of this case study highlights the frequencies that are not well reproduced by the model, which gives specific indications about how to improve the model structure.
Matzelle, A.; Montalto, V.; Sarà, G.; Zippay, M.; Helmuth, B.
2014-11-01
Dynamic Energy Budget (DEB) models serve as a powerful tool for describing the flow of energy through organisms from assimilation of food to utilization for maintenance, growth and reproduction. The DEB theory has been successfully applied to several bivalve species to compare bioenergetic and physiological strategies for the utilization of energy. In particular, mussels within the Mytilus edulis complex (M. edulis, M. galloprovincialis, and M. trossulus) have been the focus of many studies due to their economic and ecological importance, and their worldwide distribution. However, DEB parameter values have never been estimated for Mytilus californianus, a species that is an ecological dominant on rocky intertidal shores on the west coast of North America and which likely varies considerably from mussels in the M. edulis complex in its physiology. We estimated a set of DEB parameters for M. californianus using the covariation method estimation procedure and compared these to parameter values from other bivalve species. Model parameters were used to compare sensitivity to environmental variability among species, as a first examination of how strategies for physiologically contending with environmental change by M. californianus may differ from those of other bivalves. Results suggest that based on the parameter set obtained, M. californianus has favorable energetic strategies enabling it to contend with a range of environmental conditions. For instance, the allocation fraction of reserve to soma (κ) is among the highest of any bivalves, which is consistent with the observation that this species can survive over a wide range of environmental conditions, including prolonged periods of starvation.
An iterative stochastic ensemble method for parameter estimation of subsurface flow models
Elsheikh, Ahmed H.
2013-06-01
Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied on several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of utilized ensembles and in terms of error convergence rates. © 2013 Elsevier Inc.
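The ensemble-of-directional-derivatives idea can be illustrated on a toy two-parameter black box: perturb the parameters in random directions, run the simulator for each, and recover the gradient of the mismatch by least squares, with a small Tikhonov-style damping of the normal matrix. This is a heavily simplified sketch of the concept, not the full ISEM Gauss-Newton scheme; the simulator and all constants are invented.

```python
import random

random.seed(2)

def simulator(p):                     # black box: no adjoint code available
    return p[0] ** 2 + 2.0 * p[1]

obs = simulator([1.0, 0.5])           # synthetic observation

def mismatch(p):
    return simulator(p) - obs

p = [3.0, 2.0]
for _ in range(30):
    r0 = mismatch(p)
    # directional derivatives from an ensemble of small random perturbations
    dirs = [[random.gauss(0, 1) for _ in range(2)] for _ in range(8)]
    eps = 1e-4
    dd = [(mismatch([p[0] + eps * d[0], p[1] + eps * d[1]]) - r0) / eps
          for d in dirs]
    # least-squares gradient g solving dirs @ g = dd (2x2 normal equations,
    # damped in the spirit of Tikhonov regularization)
    a11 = sum(d[0] * d[0] for d in dirs)
    a12 = sum(d[0] * d[1] for d in dirs)
    a22 = sum(d[1] * d[1] for d in dirs)
    b1 = sum(d[0] * v for d, v in zip(dirs, dd))
    b2 = sum(d[1] * v for d, v in zip(dirs, dd))
    lam = 1e-6
    det = (a11 + lam) * (a22 + lam) - a12 * a12
    g = [((a22 + lam) * b1 - a12 * b2) / det,
         ((a11 + lam) * b2 - a12 * b1) / det]
    # damped gradient step on the squared mismatch 0.5 * r^2
    p = [p[0] - 0.05 * r0 * g[0], p[1] - 0.05 * r0 * g[1]]

print(round(mismatch(p), 3))          # near 0
```

Each iteration costs only a handful of forward runs (the ensemble), which is the same economy that makes ISEM attractive for expensive subsurface simulators.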
Santaren, D.; Peylin, P.; Viovy, N.; Ciais, P.
2003-04-01
Global models of carbon, water, and energy exchanges between the biosphere and the atmosphere are usually validated and calibrated with intensive measurements made over specific ecosystems, such as those of the FLUXNET network. However, the nonlinear dependence between fluxes and model parameters generally complicates the optimization of the major parameters. In this study, we estimate a few key parameters of the French model ORCHIDEE, using diurnal variation measurements of latent heat, sensible heat and net CO2 fluxes over three weeks above a pine forest (Landes, France). The model is forced with the observed climatic forcing: temperature, incoming solar radiation, wind velocity norm, air humidity, pressure and precipitation. We first present the inverse methodology and the problems linked to the nonlinearity. The result of the optimization shows correlations within the initial ensemble of parameters, which allow us to choose only five parameters determined independently from the observations. Directly related to the net CO2 flux, the maximum rate of carboxylation, Vcmax, and the stomatal conductance, gs, are significantly changed from their a priori estimates for that period. The aerodynamic resistance, the albedo and a parameter linked to maintenance respiration were also modified within their physical ranges. Overall, the model fit to the data was largely improved. Note, however, that some discrepancies remain for the sensible heat flux, which would probably require some model improvements for the storage of energy in the soil. Such work is currently being extended in time to account for parameter variations between seasons. The application to other ecosystems, with the supplementary data of the leaf area index, will also be discussed.
Estimation of friction parameters in gravity currents by data assimilation in a model hierarchy
Directory of Open Access Journals (Sweden)
A. Wirth
2011-01-01
Full Text Available This paper is the last in a series of three investigating the friction laws and their parametrisation in idealised gravity currents in a rotating frame. Results on the dynamics of a gravity current (Wirth, 2009) and on the estimation of friction laws by data assimilation (Wirth and Verron, 2008) are combined to estimate the friction parameters and discriminate between friction laws in non-hydrostatic numerical simulations of gravity current dynamics, using data assimilation and a reduced gravity shallow water model.
I demonstrate that friction parameters and laws in gravity currents can be estimated using data assimilation. The results clearly show that friction follows a linear Rayleigh law for small Reynolds numbers, and the estimated value agrees well with the analytical value obtained for non-accelerating Ekman layers. A significant and sudden departure towards a quadratic drag law at an Ekman-layer-based Reynolds number of around 800 is shown, in agreement with classical laboratory experiments. The drag coefficient obtained compares well with friction values over smooth surfaces. I show that data assimilation can be used to determine friction parameters and discriminate between friction laws, and that it is a powerful tool for systematically connecting models within a model hierarchy.
Estimation of friction parameters in gravity currents by data assimilation in a model hierarchy
Directory of Open Access Journals (Sweden)
A. Wirth
2011-04-01
Full Text Available This paper is the last in a series of three investigating the friction laws and their parametrisation in idealised gravity currents in a rotating frame. Results on the dynamics of a gravity current (Wirth, 2009) and on the estimation of friction laws by data assimilation (Wirth and Verron, 2008) are combined to estimate the friction parameters and discriminate between friction laws in non-hydrostatic numerical simulations of gravity current dynamics, using data assimilation and a reduced gravity shallow water model.
I demonstrate that friction parameters and laws in gravity currents can be estimated using data assimilation. The results clearly show that friction follows a linear Rayleigh law for small Reynolds numbers, and the estimated value agrees well with the analytical value obtained for non-accelerating Ekman layers. A significant and sudden departure towards a quadratic drag law at an Ekman-layer-based Reynolds number of around 800 is shown, in agreement with classical laboratory experiments. The drag coefficient obtained compares well with friction values over smooth surfaces. I show that data assimilation can be used to determine friction parameters and discriminate between friction laws and that it is a powerful tool in systematically connecting models within a model hierarchy.
On the least-square estimation of parameters for statistical diffusion weighted imaging model.
Yuan, Jing; Zhang, Qinwei
2013-01-01
A statistical model for diffusion-weighted imaging (DWI) has been proposed for better tissue characterization; it introduces a distribution function for apparent diffusion coefficients (ADC) to account for the restrictions and hindrances to water diffusion in biological tissues. This paper studies the precision and uncertainty in the estimation of the parameters of the statistical DWI model with a Gaussian distribution, i.e. the position of the distribution maximum (Dm) and the distribution width (σ), by using non-linear least-squares (NLLS) fitting. Numerical simulation shows that precise parameter estimation, particularly for σ, requires an extremely high signal-to-noise ratio (SNR) in the DWI signal when NLLS fitting is used. Unfortunately, such an extremely high SNR may be difficult to achieve for a normal clinical DWI scan. For Dm and σ parameter mapping of the in vivo human brain, multiple local minima are found and result in large uncertainties in the estimation of the distribution width σ. The estimation error with NLLS fitting originates primarily from the insensitivity of the DWI signal intensity to the distribution width σ, as given in the functional form of the Gaussian-type statistical DWI model.
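The insensitivity to σ is easy to see numerically. For a Gaussian ADC distribution with effectively unrestricted support, the signal has the closed form S(b)/S0 = exp(−b·Dm + b²σ²/2); the sketch below (with illustrative brain-scale values) shows that doubling σ changes the signal by only about a percent, far below typical clinical noise levels:

```python
import math

# Gaussian statistical DWI model:
#   S(b)/S0 = integral N(D; Dm, sigma) * exp(-b*D) dD
#           = exp(-b*Dm + (b*sigma)^2 / 2)   (unrestricted Gaussian support)
def signal(b, Dm, sigma):
    return math.exp(-b * Dm + (b * sigma) ** 2 / 2)

Dm = 1.0e-3                       # mm^2/s, a typical brain ADC scale
for sigma in (1e-4, 2e-4):        # doubling the distribution width
    print([round(signal(b, Dm, sigma), 4) for b in (0, 500, 1000)])
```

Because the two σ values produce nearly identical signal curves over a clinical b-value range, the NLLS cost surface in σ is almost flat, which is exactly the ill-conditioning the abstract describes.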
A Ramp Cosine Cepstrum Model for the Parameter Estimation of Autoregressive Systems at Low SNR
Directory of Open Access Journals (Sweden)
Zhu Wei-Ping
2010-01-01
Full Text Available A new cosine cepstrum model-based scheme is presented for the parameter estimation of a minimum-phase autoregressive (AR) system under low levels of signal-to-noise ratio (SNR). A ramp cosine cepstrum (RCC) model for the one-sided autocorrelation function (OSACF) of an AR signal is first proposed by considering both white noise and periodic impulse-train excitations. Using the RCC model, a residue-based least-squares optimization technique that guarantees the stability of the system is then presented in order to estimate the AR parameters from noisy output observations. For the purpose of implementation, the discrete cosine transform, which can efficiently handle the phase unwrapping problem and offer computational advantages as compared to the discrete Fourier transform, is employed. From extensive experiments on AR systems of different orders, it is shown that the proposed method is capable of estimating parameters accurately and consistently, in comparison to some of the existing methods, for SNR levels as low as −5 dB. As a practical application of the proposed technique, simulation results are also provided for the identification of a human vocal tract system using noise-corrupted natural speech signals, demonstrating a superior estimation performance in terms of the power spectral density of the synthesized speech signals.
Parameter estimation for whole-body kinetic model of FDG metabolism
Institute of Scientific and Technical Information of China (English)
CUI Yunfeng; BAI Jing; CHEN Yingmao; TIAN Jiahe
2006-01-01
Based on the radioactive tracer [18F]2-fluoro-2-deoxy-D-glucose (FDG), positron emission tomography (PET), and compartment modeling, tracer kinetic studies have become an important method for investigating glucose metabolic kinetics in the human body. In this work, the kinetic parameters of a three-compartment, four-parameter model of FDG metabolism in the tissues of myocardium, lung, liver, stomach, spleen, pancreas, and marrow were estimated through dynamic FDG-PET experiments. Together with published brain and skeletal muscle parameters, a relatively complete whole-body model is presented. In the liver model, the dual blood supply from the hepatic artery and the portal vein was considered for parameter estimation, and more accurate results were obtained using the dual input rather than a single arterial input. The established whole-body model provides functional information on FDG metabolism in the human body. It can be used to further investigate glucose metabolism, and also for the simulation and visualization of the FDG metabolic process in the human body.
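The standard three-compartment, four-parameter FDG model (rate constants K1, k2, k3, k4) can be simulated as below; the rate constants and the mono-exponential plasma input are hypothetical, chosen only to show the model structure:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three-compartment, four-parameter FDG kinetic model. C1 is free FDG in
# tissue, C2 is phosphorylated FDG; Cp(t) is the arterial plasma input.
# All values below are illustrative, not fitted to any dataset.
K1, k2, k3, k4 = 0.1, 0.15, 0.08, 0.01    # rate constants (1/min)
cp = lambda t: np.exp(-0.1 * t)           # hypothetical input function

def rhs(t, c):
    c1, c2 = c
    return [K1 * cp(t) - (k2 + k3) * c1 + k4 * c2,
            k3 * c1 - k4 * c2]

sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0], rtol=1e-8)
tissue = sol.y.sum(axis=0)                # what the PET scanner measures
```

Fitting (K1, …, k4) then amounts to least squares between `tissue` and the measured time-activity curve, with a dual-input Cp in the liver case.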
Fisicaro, E; Braibanti, A; Sambasiva Rao, R; Compari, C; Ghiozzi, A; Nageswara Rao, G
1998-04-01
An algorithm is proposed for the estimation of binding parameters for the interaction of biologically important macromolecules with smaller ones from electrometric titration data. The mathematical model is based on the representation of equilibria in terms of probability concepts of statistical molecular thermodynamics. The refinement of equilibrium concentrations of the components and the estimation of binding parameters (log site constant and cooperativity factor) are performed using singular value decomposition, a chemometric technique which overcomes the general obstacles due to near-singularity. The software is validated with a number of biochemical systems of varying numbers of sites and cooperativity factors. The effect of random errors of realistic magnitude in experimental data is studied using simulated primary data for some typical systems. The safe area within which approximate binding parameters ensure convergence is reported for the non-self-starting optimization algorithms.
The limiting behavior of the estimated parameters in a misspecified random field regression model
DEFF Research Database (Denmark)
Dahl, Christian Møller; Qin, Yu
, as a consequence the random field model specification introduces non-stationarity and non-ergodicity in the misspecified model, and it becomes non-trivial, relative to the existing literature, to establish the limiting behavior of the estimated parameters. The asymptotic results are obtained by applying some...... convenient new uniform convergence results that we propose. This theory may have applications beyond those presented here. Our results indicate that classical statistical inference techniques, in general, work very well for random field regression models in finite samples and that these models successfully...
Parameter estimation of social forces in pedestrian dynamics models via a probabilistic method.
Corbetta, Alessandro; Muntean, Adrian; Vafayi, Kiamars
2015-04-01
Focusing on a specific crowd dynamics situation, including real-life experiments and measurements, our paper targets a twofold aim: (1) we present a Bayesian probabilistic method to estimate the value and the uncertainty (in the form of a probability density function) of parameters in crowd dynamics models from experimental data; and (2) we introduce a fitness measure for the models, to classify a couple of model structures (forces) according to their fitness to the experimental data, preparing the stage for a more general model-selection and validation strategy inspired by probabilistic data analysis. Finally, we review the essential aspects of our experimental setup and measurement technique.
Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models
DEFF Research Database (Denmark)
Rasmussen, Klaus Bolding
1994-01-01
The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method is id...... is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model...
Individual based modeling and parameter estimation for a Lotka-Volterra system.
Waniewski, J; Jedruch, W
1999-03-15
The stochastic component, inevitable in biological systems, makes estimation of model parameters from a single sequence of measurements problematic, despite complete knowledge of the system. We studied the problem of parameter estimation using individual-based computer simulations of a 'Lotka-Volterra world'. Two kinds (species) of particles--X (prey) and Y (predators)--moved on a sphere according to deterministic rules, and at each collision (interaction) of X and Y the particle X was changed into a new particle Y. Birth of prey and death of predators were simulated by addition of X and removal of Y, respectively, according to exponential probability distributions. With this arrangement of the system, the numbers of particles of each kind can be described by the Lotka-Volterra equations. Simulations of the system with a low number of individuals (200-400 particles on average) showed unstable oscillations of the population size, and in some simulation runs one of the species became extinct. Nevertheless, the oscillations had generic properties (e.g., the mean oscillation period within a run, the mean ratio of the amplitudes of consecutive maxima of X and Y numbers, etc.) characteristic of the solutions of the Lotka-Volterra equations. This observation made it possible to estimate the four parameters of the Lotka-Volterra model with high accuracy and good precision. The estimation was performed using the integral form of the Lotka-Volterra equations and two-parameter linear regression for each oscillation cycle separately. We conclude that in spite of the irregular time course of the number of individuals in each population due to the stochastic intraspecies component, the generic features of the simulated system evolution can provide enough information for quantitative estimation of the system parameters.
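The integral-form trick is easy to sketch: dividing the prey equation by x and integrating gives ln x(t) − ln x(0) = a·t − b·∫y ds, which is linear in (a, b). A minimal illustration on noiseless simulated data (parameter values hypothetical):

```python
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

# Integral form of the prey equation: ln x(t) - ln x(0) = a*t - b*Int_0^t y ds.
# This is linear in (a, b), so two-parameter least squares recovers them; the
# predator equation yields (c, d) the same way. Values here are illustrative.
a, b, c, d = 1.0, 0.02, 0.01, 1.0
rhs = lambda t, z: [a * z[0] - b * z[0] * z[1],
                    c * z[0] * z[1] - d * z[1]]
t = np.linspace(0.0, 10.0, 2001)
sol = solve_ivp(rhs, (0.0, 10.0), [60.0, 30.0], t_eval=t, rtol=1e-9)
x, y = sol.y

lhs = np.log(x[1:] / x[0])
A = np.column_stack([t[1:], -cumulative_trapezoid(y, t)])
a_hat, b_hat = np.linalg.lstsq(A, lhs, rcond=None)[0]
```

With stochastic individual-based data, the same regression is applied per oscillation cycle, as in the paper.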
Estimating the properties of hard X-ray solar flares by constraining model parameters
Ireland, Jack; Schwartz, Richard A; Holman, Gordon D; Dennis, Brian R
2013-01-01
We compare four different methods of calculating uncertainty estimates in fitting parameterized models to RHESSI X-ray spectra, considering only statistical sources of error. Three of the four methods are based on estimating the scale-size of the minimum in a hypersurface formed by the weighted sum of the squares of the differences between the model fit and the data as a function of the fit parameters, and are implemented as commonly practiced. The fourth method uses Bayesian data analysis and Markov chain Monte Carlo (MCMC) techniques to calculate an uncertainty estimate. Two flare spectra are modeled: one from the GOES X1.3 class flare of 19 January 2005, and the other from the X4.8 flare of 23 July 2002. The four methods give approximately the same uncertainty estimates for the 19 January 2005 spectral fit parameters, but lead to very different uncertainty estimates for the 23 July 2002 spectral fit. This is because each method implements different analyses of the hypersurface, yielding method-dependent re...
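The "scale of the minimum" idea behind the first three methods reduces, in one dimension, to the Δχ² = 1 rule; for a parabolic χ² surface it agrees exactly with the curvature-based estimate σ = √(2/χ²″). A toy check:

```python
import numpy as np

# Toy 1-D chi-square surface: minimum 3.0 at p = 0.2 with sigma = 0.05.
# The 1-sigma interval is where chi2 rises by 1 above its minimum; for a
# parabola this matches the curvature estimate sigma = sqrt(2 / chi2'').
p = np.linspace(-1.0, 1.0, 100001)
chi2 = 3.0 + ((p - 0.2) / 0.05) ** 2

i0 = chi2.argmin()
inside = p[chi2 <= chi2[i0] + 1.0]              # delta-chi2 <= 1 region
sigma_scan = 0.5 * (inside[-1] - inside[0])

h = p[1] - p[0]
curvature = (chi2[i0 + 1] - 2 * chi2[i0] + chi2[i0 - 1]) / h**2
sigma_curv = np.sqrt(2.0 / curvature)
```

On non-parabolic surfaces the two answers diverge, which is the kind of method-dependence the paper reports for the 23 July 2002 fit.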
Novel Method for Incorporating Model Uncertainties into Gravitational Wave Parameter Estimates
Moore, Christopher J
2014-01-01
Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications these models are incomplete, which both reduces the prospects of detection and leads to systematic errors in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameter values. In this work a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalised over using a prior distribution constructed by Gaussian process regression, which interpolates the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform...
The CSS and The Two-Staged Methods for Parameter Estimation in SARFIMA Models
Directory of Open Access Journals (Sweden)
Erol Egrioglu
2011-01-01
Full Text Available Seasonal Autoregressive Fractionally Integrated Moving Average (SARFIMA) models are used in the analysis of seasonal time series with long-memory dependence. Two methods, the conditional sum of squares (CSS) method and the two-staged method introduced by Hosking (1984), can be used to estimate the parameters of SARFIMA models. However, no simulation study comparing them has been conducted in the literature, so it is not known how these methods behave under different parameter settings and sample sizes. The aim of this study is to show the behavior of these methods by a simulation study. Based on the simulation results, the advantages and disadvantages of both methods under different parameter settings and sample sizes are discussed by comparing the root mean square error (RMSE) obtained by the CSS and two-staged methods. The comparison shows that the CSS method produces better results than the two-staged method.
A Bayesian Approach for Parameter Estimation and Prediction using a Computationally Intensive Model
Higdon, Dave; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M
2014-01-01
Bayesian methods have been very successful in quantifying uncertainty in physics-based problems of parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\eta(\theta)$, where $\theta$ denotes the uncertain best input setting. Hence the statistical model is of the form $y = \eta(\theta) + \epsilon$, where $\epsilon$ accounts for measurement error and possibly other error sources. When non-linearity is present in $\eta(\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and non-standard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. While quite generally applicable, MCMC requires thousands, or even millions, of evaluations of the physics model $\eta(\cdot)$. This is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we pr...
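The usual way past this bottleneck is an emulator: train a Gaussian-process interpolant on a few expensive runs of η(θ), then let MCMC query the cheap surrogate. A self-contained numpy sketch with a stand-in η (the function, length scale, and training design are all illustrative):

```python
import numpy as np

# GP emulator in miniature: eta stands in for a simulator that takes hours
# per call. After training on 12 runs, the surrogate is a dot product.
eta = lambda th: np.sin(3.0 * th) + th          # stand-in "physics model"

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

theta_train = np.linspace(0.0, 2.0, 12)
K = rbf(theta_train, theta_train) + 1e-8 * np.eye(12)
alpha = np.linalg.solve(K, eta(theta_train))

def surrogate(th):                               # cheap to call inside MCMC
    return rbf(np.atleast_1d(th), theta_train) @ alpha

pred = surrogate(np.array([0.35, 1.27]))
```

The surrogate is then evaluated millions of times inside the sampler at negligible cost, with the emulator's own uncertainty folded into the posterior if desired.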
A cooperative strategy for parameter estimation in large scale systems biology models
Directory of Open Access Journals (Sweden)
Villaverde Alejandro F
2012-06-01
Full Text Available Abstract Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large-scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models of the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here
Genetic Algorithms for a Parameter Estimation of a Fermentation Process Model: A Comparison
Directory of Open Access Journals (Sweden)
Olympia Roeva
2005-12-01
Full Text Available In this paper the problem of parameter estimation using genetic algorithms is examined. A case study considering the estimation of 6 parameters of a nonlinear dynamic model of E. coli fermentation is presented as a test problem. The parameter estimation problem is stated as a nonlinear programming problem subject to nonlinear differential-algebraic constraints. This problem is known to be frequently ill-conditioned and multimodal, so traditional gradient-based local optimization methods fail to arrive at satisfactory solutions. To overcome their limitations, the use of different genetic algorithms as stochastic global optimization methods is explored. These algorithms have proved very suitable for the optimization of highly non-linear problems with many variables, offering robustness and good global search behavior, which makes them advantageous for parameter identification of fermentation models. A comparison between simple, modified, and multi-population genetic algorithms is presented. The best result is obtained with the modified genetic algorithm: the algorithms considered converged to very similar cost values, but the modified algorithm was several times faster than the other two.
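A minimal real-coded genetic algorithm (truncation selection, averaging crossover, Gaussian mutation) fitting two parameters of a toy exponential model gives the flavor of the approach; this is a generic sketch, not the paper's simple/modified/multi-population variants:

```python
import numpy as np

# Toy parameter estimation by a real-coded GA: recover (A, k) in A*exp(-k*t)
# from noiseless "measurements". Operators are deliberately minimal.
rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 50)
data = 2.0 * np.exp(-1.5 * t)                     # true (A, k) = (2.0, 1.5)
cost = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - data) ** 2)

pop = rng.uniform(0.0, 5.0, size=(40, 2))         # random initial population
for _ in range(60):
    fitness = np.array([cost(p) for p in pop])
    parents = pop[np.argsort(fitness)[:20]]       # truncation selection
    kids = (parents[rng.integers(0, 20, 40)] +
            parents[rng.integers(0, 20, 40)]) / 2  # averaging crossover
    pop = kids + rng.normal(0.0, 0.05, kids.shape)  # Gaussian mutation
best = min(pop, key=cost)
```

For the real fermentation problem, each cost evaluation requires integrating the differential-algebraic model, so the GA's population size and generation count dominate the runtime.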
Directory of Open Access Journals (Sweden)
THANH TUNG KHUAT
2017-05-01
Full Text Available Artificial Bee Colony, inspired by the foraging behaviour of honey bees, is a novel meta-heuristic optimization algorithm in the community of swarm intelligence algorithms. Nevertheless, it is still insufficient in speed of convergence and quality of solutions. This paper proposes an approach to tackle these downsides by combining the positive aspects of Teaching-Learning-based optimization and Artificial Bee Colony. The performance of the proposed method is assessed on the software effort estimation problem, a complex and important issue in project management. Software developers often carry out effort estimation in the early stages of the software development life cycle to derive the required cost and schedule for a project. Among the large number of methods for effort estimation, COCOMO II is one of the most widely used models. However, this model has some restrictions because its parameters have not been optimized. In this work, therefore, we present an approach to overcome this limitation of the COCOMO II model. Experiments conducted on the NASA software project dataset indicate that the optimized parameters provide better estimation capabilities than the original COCOMO II model.
Frequency-Domain Maximum-Likelihood Estimation of High-Voltage Pulse Transformer Model Parameters
Aguglia, D
2014-01-01
This paper presents an offline frequency-domain nonlinear and stochastic identification method for equivalent-model parameter estimation of high-voltage pulse transformers. Such transformers are widely used in the pulsed-power domain, and the difficulty in deriving optimal control strategies for pulsed-power converters is directly linked to the accuracy of the equivalent circuit parameters. These components require models which take into account electric field energies, represented by stray capacitances in the equivalent circuit. These capacitive elements must be accurately identified, since they greatly influence overall converter performance. A nonlinear frequency-based identification method, based on maximum-likelihood estimation, is presented, and a sensitivity analysis of the best experimental test to be considered is carried out. The procedure takes into account magnetic saturation and skin effects occurring in the windings during the frequency tests. The presented method is validated by experim...
Directory of Open Access Journals (Sweden)
Bae CY
2012-12-01
Full Text Available Chul-Young Bae,1 Young Gon Kang,2 Young-Sung Suh,3 Jee Hye Han,4 Sung-Soo Kim,5 Kyung Won Shim6 1MediAge Research Center, Seoul, Korea; 2Chaum Power Aging Center, College of Medicine, CHA University, Seoul, Korea; 3Health Promotion Center, Keimyung University Dongsam Medical Center, Daegu, Korea; 4Department of Family Medicine, College of Medicine, Eulji University, Seoul, Korea; 5Department of Family Medicine, College of Medicine, Chungnam National University, Daejeon, Korea; 6Department of Family Medicine, Ewha Womans University Mokdong Hospital, Ewha Womans University, Seoul, Korea Background: To date, no studies have attempted to estimate body shape biological age using clinical parameters associated with body composition for the purposes of examining a person's body shape based on their age. Objective: We examined the relations between clinical parameters associated with body composition and chronological age, and proposed a model for estimating body shape biological age. Methods: The study was conducted in 243,778 subjects aged between 20 and 90 years who received a general medical checkup at health promotion centers at university and community hospitals in Korea from 2004 to 2011. Results: In men, the clinical parameters with the highest correlation to age included the waist-to-hip ratio (r = 0.786, P < 0.001), hip circumference (r = −0.448, P < 0.001), and height (r = −0.377, P < 0.001). In women, the clinical parameters with the highest correlation to age included the waist-to-hip ratio (r = 0.859, P < 0.001), waist circumference (r = 0.580, P < 0.001), and hip circumference (r = 0.520, P < 0.001). To estimate the optimal body shape biological age based on clinical parameters associated with body composition, we performed a multiple regression analysis. In the model estimating body shape biological age, the coefficient of determination (R2) was 0.71 in men and 0.76 in women. Conclusion: Our model for estimating body shape biological age
Directory of Open Access Journals (Sweden)
Bin Deng
2013-01-01
Full Text Available Parabolic-reflector antennas (PRAs), usually possessing rotation, are a particular type of target of potential interest to the synthetic aperture radar (SAR) community. This paper aims to investigate PRA scattering characteristics and to extract PRA parameters from SAR returns, in support of image interpretation and target recognition. We first obtain both closed-form and numerical solutions to a PRA's backscattering by geometrical optics (GO), physical optics, and graphical electromagnetic computation, respectively. Based on the GO solution, a migratory scattering center model is first presented for representing the movement of the specular point with aspect angle, and then a hybrid model, named the migratory/micromotion scattering center (MMSC) model, is proposed for characterizing a rotating PRA in the SAR geometry, which incorporates the PRA's rotation into its migratory scattering center model. Additionally, we analyze in detail PRA radar characteristics in terms of radar cross-section, high-resolution range profiles, time-frequency distributions, and 2D images, which also confirm the proposed models. A maximum likelihood estimator is developed for jointly solving the MMSC model for the PRA's multiple parameters by optimization. By exploiting the aforementioned characteristics, coarse parameter estimation guarantees convergence to the global minimum. The recovered signatures can be favorably utilized for SAR image interpretation and target recognition.
A weighted least-squares method for parameter estimation in structured models
Galrinho, Miguel; Rojas, Cristian R.; Hjalmarsson, Håkan
2014-01-01
Parameter estimation in structured models is generally considered a difficult problem. For example, the prediction error method (PEM) typically gives a non-convex optimization problem, while it is difficult to incorporate structural information in subspace identification. In this contribution, we revisit the idea of iteratively using the weighted least-squares method to cope with the problem of non-convex optimization. The method is, essentially, a three-step method. First, a high order least...
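The core building block is a weighted least-squares step, one line of linear algebra; the sketch below shows the generic estimator θ̂ = (XᵀWX)⁻¹XᵀWy, not the authors' full three-step procedure for structured models:

```python
import numpy as np

# Generic weighted least squares: rows with higher known precision w_i get
# more influence. This is the repeated ingredient in iterative WLS schemes.
rng = np.random.default_rng(3)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
theta_true = np.array([1.0, 2.0])
w = rng.uniform(0.5, 2.0, n)                     # known noise precisions
y = X @ theta_true + rng.normal(0.0, 1.0 / np.sqrt(w))

W = np.diag(w)
theta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

In iterative schemes the weights are re-estimated from the residuals of the previous pass, so each iteration is again just this solve.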
Trap configuration and spacing influences parameter estimates in spatial capture-recapture models.
Directory of Open Access Journals (Sweden)
Catherine C Sun
Full Text Available An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
Trap configuration and spacing influences parameter estimates in spatial capture-recapture models.
Sun, Catherine C; Fuller, Angela K; Royle, J Andrew
2014-01-01
An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
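The "no more than twice sigma" spacing guideline follows directly from the half-normal detection function these models typically use; a quick numeric look (p0 and σ are hypothetical values):

```python
import numpy as np

# Half-normal detection function from spatial capture-recapture: probability
# of detecting an individual at a trap a distance d from its activity center.
# p0 (baseline detection) and sigma (spatial scale) are illustrative.
p0, sigma = 0.2, 1.0
detect = lambda d: p0 * np.exp(-d**2 / (2.0 * sigma**2))

p_at_2sigma = detect(2.0 * sigma)    # recommended maximum trap spacing
p_at_5sigma = detect(5.0 * sigma)    # a sparse grid: detections vanish
```

At twice sigma the detection probability is still a usable fraction of p0; at five sigma it is effectively zero, so individuals between traps go undetected and sigma becomes poorly identified.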
Bertipaglia, T S; Carreño, L O D; Aspilcueta-Borquis, R R; Boligon, A A; Farah, M M; Gomes, F J; Machado, C H C; Rey, F S B; da Fonseca, R
2015-08-01
Random regression models (RRM) and multitrait models (MTM) were used to estimate genetic parameters for growth traits in Brazilian Brahman cattle and to compare the estimated breeding values obtained by these 2 methodologies. For RRM, 78,641 weight records taken between 60 and 550 d of age from 16,204 cattle were analyzed, and for MTM, the analysis consisted of 17,385 weight records taken at the same ages from 12,925 cattle. All models included the fixed effect of contemporary group; the additive genetic, maternal genetic, and animal permanent environmental effects; and the quadratic effect of age at calving (AAC) as a covariate. For RRM, the AAC was nested in the animal's age class. The best RRM considered cubic polynomials and residual variance heterogeneity (5 levels). For MTM, the weights were adjusted to standard ages. Additive heritability estimates at 60, 120, 205, 365, and 550 d of age ranged from 0.42 to 0.75 for RRM and from 0.44 to 0.72 for MTM. The maximum maternal heritability estimate (0.08) was at 140 d for RRM, but for MTM it was highest at weaning (0.09). The magnitude of the genetic correlations was generally moderate to high. The RRM adequately modeled changes in variance or covariance with age, and provided there is a sufficient number of samples, increased accuracy in the estimation of the genetic parameters can be expected. Bull classifications differed between the two methods at all the ages evaluated, especially at high selection intensities, which could affect the response to selection.
Directory of Open Access Journals (Sweden)
Nelson Peter
2006-11-01
Full Text Available Abstract Aim To estimate the key transmission parameters associated with an outbreak of pandemic influenza in an institutional setting (New Zealand, 1918). Methods Historical morbidity and mortality data were obtained from the report of the medical officer for a large military camp. A susceptible-exposed-infectious-recovered (SEIR) epidemiological model was solved numerically to find a range of best-fit estimates for key epidemic parameters and an incidence curve. Mortality data were subsequently modelled by performing a convolution of the incidence distribution with a best-fit incidence-mortality lag distribution. Results Basic reproduction number (R0) values for three possible scenarios ranged between 1.3 and 3.1, and the corresponding average latent period and infectious period estimates ranged between 0.7 and 1.3 days, and 0.2 and 0.3 days, respectively. The mean and median best-estimate incidence-mortality lag periods were 6.9 and 6.6 days, respectively. This delay is consistent with secondary bacterial pneumonia being a relatively important cause of death in this predominantly young male population. Conclusion These R0 estimates are broadly consistent with others made for the 1918 influenza pandemic and are not particularly large relative to some other infectious diseases. This finding suggests that if a novel influenza strain of similar virulence emerged, it could potentially be controlled through the prompt use of major public health measures.
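An SEIR model of this type takes only a few lines; the parameter values below are illustrative picks from the reported ranges (R0 ≈ 2, latent period ≈ 1.0 d, infectious period ≈ 0.3 d), not the paper's best-fit values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# SEIR model for a closed camp population. sig = 1/latent period,
# gam = 1/infectious period, beta = R0*gam. Values are illustrative.
N = 3000.0
R0, latent_d, infectious_d = 2.0, 1.0, 0.3
sig, gam = 1.0 / latent_d, 1.0 / infectious_d
beta = R0 * gam

def rhs(t, z):
    s, e, i, r = z
    return [-beta * s * i / N,
            beta * s * i / N - sig * e,
            sig * e - gam * i,
            gam * i]

sol = solve_ivp(rhs, (0.0, 60.0), [N - 1.0, 0.0, 1.0, 0.0], rtol=1e-8)
attack_rate = sol.y[3, -1] / N        # final fraction ever infected
```

For R0 = 2 the final-size relation z = 1 − e^(−R0·z) gives z ≈ 0.80, which the integration reproduces; fitting to the camp's morbidity curve then amounts to tuning (beta, sig, gam).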
Gao, Z.; Zhang, K.; Xue, X.; Huang, J.; Hong, Y.
2016-12-01
Floods are among the most common natural disasters, with worldwide impacts that cause significant humanitarian and economic damage. The increasing availability of satellite-based precipitation estimates and geospatial datasets with global coverage and improved temporal resolution has enhanced our capability to forecast floods and monitor water resources across the world. This study presents an approach combining physically based and empirical methods for a-priori parameter estimation, along with a parameter dataset, for the Coupled Routing and Excess Storage (CREST) hydrological model at the global scale. The approach takes advantage of geographic information such as topography, land cover, and soil properties to derive distributed parameter values across the world. The main objective of this study is to evaluate the utility of a-priori parameter estimates to improve the performance of the CREST distributed hydrologic model and enable its prediction at poorly gauged or ungauged catchments. Several typical river basins on different continents were selected as test areas. The results show that daily stream flows simulated using the parameters derived from geographically based information outperform those obtained using lumped parameters. Overall, this early study highlights that a-priori parameter estimation for hydrologic models improves predictive capability in ungauged basins at regional to global scales.
Estimation of anisotropy parameters in organic-rich shale: Rock physics forward modeling approach
Energy Technology Data Exchange (ETDEWEB)
Herawati, Ida, E-mail: ida.herawati@students.itb.ac.id; Winardhi, Sonny; Priyono, Awali [Mining and Petroleum Engineering Faculty, Institut Teknologi Bandung, Bandung, 40132 (Indonesia)
2015-09-30
Anisotropy analysis has become an important step in the processing and interpretation of seismic data. One of the most important steps in anisotropy analysis is anisotropy parameter estimation, which can be performed using well data, core data, or seismic data. In seismic data, anisotropy parameter calculation is generally based on velocity moveout analysis; however, the accuracy depends on data quality, available offset, and velocity moveout picking. Anisotropy estimation using seismic data is needed to obtain wide coverage of the anisotropy of a particular layer. In an anisotropic reservoir, analysis of anisotropy parameters also helps us to better understand reservoir characteristics. Anisotropy parameters, especially ε, are related to rock properties and lithology determination. The current research aims to estimate anisotropy parameters from seismic data and integrate well data, with a case study in a potential shale gas reservoir. Due to the complexity of organic-rich shale reservoirs, an extensive study across disciplines is needed to understand them. Shale itself has intrinsic anisotropy caused by the lamination of its constituent minerals. In order to link rock physics with seismic response, it is necessary to build a forward model of organic-rich shale. This paper focuses on the relationships between reservoir properties such as clay content, porosity, and total organic content and anisotropy. Organic content, which defines the prospectivity of shale gas, can be considered as solid background, solid inclusion, or both. The forward modeling results show that the presence of organic matter increases anisotropy in shale. The relationships between total organic content and other seismic properties, such as acoustic impedance and Vp/Vs, are also presented.
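For a VTI (laminated) medium such as shale, the anisotropy parameters in question are the Thomsen parameters, computable directly from the elastic stiffnesses; the c_ij values below are hypothetical, for illustration only:

```python
# Thomsen parameters for a VTI medium from elastic stiffnesses (in GPa).
# The c_ij values are hypothetical, for illustration only.
c11, c33, c44, c66, c13 = 40.0, 30.0, 10.0, 14.0, 12.0

epsilon = (c11 - c33) / (2.0 * c33)                       # P-wave anisotropy
gamma = (c66 - c44) / (2.0 * c44)                         # S-wave anisotropy
delta = (((c13 + c44) ** 2 - (c33 - c44) ** 2)
         / (2.0 * c33 * (c33 - c44)))                     # near-vertical term
```

In a rock-physics forward model, adding organic matter modifies the effective c_ij of the layered composite, and the resulting change in ε is what links TOC to the seismically observable anisotropy.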
Inverse and direct modeling applied in the estimation of kinetic parameters of BSA adsorption
Directory of Open Access Journals (Sweden)
David Curbelo Rodríguez
2010-11-01
Full Text Available The kinetic modeling applied to the parameter estimation of chromatographic adsorption processes is an important tool for understanding and improving these separation systems. In this work, two kinetic models were used in the parameter estimation of BSA adsorption. The irreversible and the reversible kinetic models were fitted to the experimental data using the Linear Driving Force and the Random Restricted Window (R2W) methods, respectively. Both models achieved a good fit to the experimental data, yielding parameters with high accuracy, as indicated by the low residuals of the cost function.
Blackman, Jonathan; Field, Scott; Galley, Chad; Hemberger, Daniel; Scheel, Mark; Schmidt, Patricia; Smith, Rory; SXS Collaboration Collaboration
2016-03-01
We are now in the advanced detector era of gravitational wave astronomy, and the merger of two black holes (BHs) is one of the most promising sources of gravitational waves that could be detected on Earth. To infer the BH masses and spins, the observed signal must be compared to waveforms predicted by general relativity for millions of binary configurations. Numerical relativity (NR) simulations can produce accurate waveforms, but are prohibitively expensive to use for parameter estimation. Other waveform models are fast enough but may lack accuracy in portions of the parameter space. Numerical relativity surrogate models attempt to rapidly predict the results of an NR code with a small or negligible modeling error, after being trained on a set of input waveforms. Such surrogate models are ideal for parameter estimation, as they are both fast and accurate, and have already been built for the case of non-spinning BHs. Using 250 input waveforms, we build a surrogate model for waveforms from the Spectral Einstein Code (SpEC) for a subspace of precessing systems.
Employing satellite retrieved soil moisture for parameter estimation of the hydrologic model mHM
Zink, Matthias; Mai, Juliane; Rakovec, Oldrich; Schrön, Martin; Kumar, Rohini; Schäfer, David; Samaniego, Luis
2016-04-01
Hydrological models are usually calibrated against observed streamflow at the catchment outlet and are thus conditioned by an integral catchment signal. Rakovec et al. 2016 (JHM) recently demonstrated that constraining model parameters against river discharge is a necessary, but not a sufficient, condition. Such a procedure ensures the fulfillment of the catchment's water balance but can lead to high predictive uncertainties of model-internal states, like soil moisture, or to a lack of spatial representativeness of the model. However, some hydrologic applications, such as soil drought monitoring and prediction, rely on this information. Within this study we propose a framework in which the mesoscale Hydrologic Model (mHM) is calibrated with soil moisture retrievals from various sources. The aim is to condition the model on soil moisture (SM) while preserving good performance in streamflow estimation. We identify the most appropriate objective functions by conducting synthetic experiments. The best objective function is determined based on: 1) the deviation between synthetic and simulated soil moisture, 2) a nonparametric comparison of SM fields (e.g. copulas), and 3) the Euclidean distance of model parameters, which is zero if the parameters of the synthetic data are recovered. The objective functions performing best are used to calibrate mHM against different satellite soil moisture products, e.g. ESA-CCI, H-SAF, and in situ observations. This procedure is tested in three distinct European basins (upper Sava, Neckar, and upper Guadalquivir) ranging from snow-dominated to semi-arid climatic conditions. Results obtained with the synthetic experiment indicate that objective functions focusing on the temporal dynamics of SM are preferable to objective functions aiming at spatial patterns or catchment averages. Since the deviation of soil moisture fields (1) and their copulas (2) don't lead to conclusive results, the decision of the best performing objective
A general method for parameter estimation in light-response models.
Chen, Lei; Li, Zhong-Bin; Hui, Cang; Cheng, Xiaofei; Li, Bai-Lian; Shi, Pei-Jian
2016-06-13
Selecting appropriate initial values is critical for parameter estimation in nonlinear photosynthetic light-response models. Failed convergence often occurs due to wrongly selected initial values when using currently available methods, especially local optimization methods, and there are no reliable methods for resolving the conundrum of selecting appropriate initial values. After comparing the performance of the Levenberg-Marquardt algorithm and three algorithms for global optimization, we develop a general method for parameter estimation in four photosynthetic light-response models, based on the use of Differential Evolution (DE). The new method was shown to provide good fits (R² > 0.98) and robust parameter estimates for 42 datasets collected for 21 plant species under the same initial values. This suggests that the DE algorithm can efficiently resolve the high sensitivity to initial values that arises when using local optimization methods. The DE method can therefore be applied to fit the light-response curves of various species without careful selection of initial values.
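The abstract does not give the model equations or the DE settings, so the following is only an illustrative sketch: a minimal DE/rand/1/bin implementation fitting one common light-response form (the exponential, or Mitscherlich, curve) to synthetic data. The model choice, bounds, and DE control parameters are assumptions, not the paper's.

```python
import math
import random

def light_response(I, Pmax, alpha):
    """Exponential light-response curve (one of several common forms)."""
    return Pmax * (1.0 - math.exp(-alpha * I / Pmax))

def sse(params, data):
    """Sum-of-squares cost between model and data."""
    Pmax, alpha = params
    return sum((P - light_response(I, Pmax, alpha)) ** 2 for I, P in data)

def differential_evolution(cost, bounds, data, pop_size=30, F=0.7, CR=0.9,
                           generations=300, seed=1):
    """Minimal DE/rand/1/bin: the initial population is drawn randomly
    from the bounds, so no user-supplied starting point is needed."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(x, data) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)   # clamp mutant to the bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            c_trial = cost(trial, data)
            if c_trial <= costs[i]:          # greedy selection
                pop[i], costs[i] = trial, c_trial
    best = min(range(pop_size), key=lambda i: costs[i])
    return pop[best], costs[best]

# Synthetic, noise-free data generated from known parameters
true_Pmax, true_alpha = 20.0, 0.05
data = [(I, light_response(I, true_Pmax, true_alpha)) for I in range(0, 2001, 100)]
best_params, best_cost = differential_evolution(sse, [(1.0, 50.0), (0.001, 0.2)], data)
print(best_params)  # should be close to (20.0, 0.05)
```

Because the initial population is sampled from the bounds rather than supplied by the user, the same call works for any dataset, which is the practical advantage the abstract describes.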
A general method for parameter estimation in light-response models
Chen, Lei; Li, Zhong-Bin; Hui, Cang; Cheng, Xiaofei; Li, Bai-Lian; Shi, Pei-Jian
2016-06-01
Selecting appropriate initial values is critical for parameter estimation in nonlinear photosynthetic light-response models. Failed convergence often occurs due to wrongly selected initial values when using currently available methods, especially local optimization methods, and there are no reliable methods for resolving the conundrum of selecting appropriate initial values. After comparing the performance of the Levenberg–Marquardt algorithm and three algorithms for global optimization, we develop a general method for parameter estimation in four photosynthetic light-response models, based on the use of Differential Evolution (DE). The new method was shown to provide good fits (R² > 0.98) and robust parameter estimates for 42 datasets collected for 21 plant species under the same initial values. This suggests that the DE algorithm can efficiently resolve the high sensitivity to initial values that arises when using local optimization methods. The DE method can therefore be applied to fit the light-response curves of various species without careful selection of initial values.
Directory of Open Access Journals (Sweden)
Michala Jakubcová
2015-01-01
Full Text Available This paper provides an analysis of selected versions of the particle swarm optimization (PSO) algorithm. The tested versions of PSO were combined with a shuffling mechanism, which splits the model population into complexes and performs distributed PSO optimization. One of them is a newly proposed PSO modification, APartW, which enhances global exploration and local exploitation in the parametric space during the optimization process through a new updating mechanism applied to the PSO inertia weight. The performance of four selected PSO methods was tested on 11 benchmark optimization problems, which were prepared for the special session on single-objective real-parameter optimization at CEC 2005. The results confirm that the new APartW PSO variant is comparable with the existing distributed PSO versions AdaptW and LinTimeVarW. The distributed PSO versions were developed for solving inverse problems related to the estimation of parameters of the hydrological model Bilan. The results of the case study, conducted on a selected set of 30 catchments from the MOPEX database, show that the tested distributed PSO versions provide suitable estimates of the Bilan model parameters and can thus be used for solving the related inverse problems during the calibration of the studied water balance hydrological model.
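The APartW update rule is not specified in the abstract, so the sketch below implements plain PSO with a linearly decreasing inertia weight (the idea behind the LinTimeVarW variant mentioned above), minimizing a toy sphere objective instead of calibrating the Bilan model; all settings are assumptions.

```python
import random

def sphere(x):
    """Toy objective standing in for a hydrological-model calibration cost."""
    return sum(v * v for v in x)

def pso(cost, dim, bounds, n_particles=20, iters=200,
        w_start=0.9, w_end=0.4, c1=2.0, c2=2.0, seed=7):
    """Minimal PSO with a linearly time-varying inertia weight."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for t in range(iters):
        # inertia weight decreases linearly from w_start to w_end
        w = w_start - (w_start - w_end) * t / (iters - 1)
        for i in range(n_particles):
            for j in range(dim):
                vel[i][j] = (w * vel[i][j]
                             + c1 * rng.random() * (pbest[i][j] - pos[i][j])
                             + c2 * rng.random() * (gbest[j] - pos[i][j]))
                pos[i][j] = min(max(pos[i][j] + vel[i][j], lo), hi)
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

best, best_cost = pso(sphere, dim=2, bounds=(-10.0, 10.0))
print(best_cost)  # near zero for this smooth 2-D problem
```

A large early inertia weight favors global exploration; the small late weight favors local exploitation, which is exactly the trade-off the APartW mechanism adapts automatically.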
Li, Shanzhi; Wang, Haoping; Aitouche, Abdel; Tian, Yang; Christov, Nicolai
2017-01-01
This paper proposes a robust unknown input observer (UIO) for state estimation and fault detection using a linear parameter varying model. Since disturbances and actuator faults are mixed together in the physical system, it is difficult to isolate the fault from the disturbance. Using a state transformation, the estimation of the original state is associated with the transformed state. By solving linear matrix inequalities (LMIs) and linear matrix equalities (LMEs), the parameters of the UIO can be obtained. The convergence of the UIO is also analysed using Lyapunov theory. Finally, the proposed method is tested on a wind turbine system with disturbance and actuator fault. The simulations demonstrate the effectiveness and performance of the proposed method.
Directory of Open Access Journals (Sweden)
Y. Miyazawa
2013-04-01
Full Text Available With combined use of ocean–atmosphere simulation models and field observation data, we evaluate the parameters associated with the total caesium-137 amounts of the direct release into the ocean and the atmospheric deposition over the western North Pacific caused by the accident at the Fukushima Daiichi nuclear power plant (FNPP) that occurred in March 2011. The Green's function approach is adopted for the estimation of two parameters determining the total emission amounts for the period from 12 March to 6 May 2011. It is confirmed that the validity of the estimation depends on the simulation skill near FNPP. The total amount of the direct release is estimated as 5.5–5.9 × 10^15 Bq, while that of the atmospheric deposition is estimated as 5.5–9.7 × 10^15 Bq, a broader range than that of the direct release owing to the uncertainty of the dispersion widely spread over the western North Pacific.
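The Green's function approach described here reduces to a linear least-squares problem once unit-source responses have been simulated: the observed field is modeled as a weighted sum of the responses, and the weights (the total release amounts) are the parameters to estimate. A toy sketch with made-up numbers, not the study's data:

```python
# Toy Green's function estimation: observed concentrations are modeled as
# y(t) = a1*G1(t) + a2*G2(t), where G1, G2 are simulated responses to
# unit-amplitude sources (e.g. direct release and atmospheric deposition).
# The amplitudes a1, a2 are recovered by linear least squares via the
# 2x2 normal equations. All numbers are illustrative only.

G1 = [0.0, 1.0, 2.0, 1.5, 1.0, 0.5]   # unit response to source 1
G2 = [0.5, 0.5, 1.0, 2.0, 1.5, 1.0]   # unit response to source 2
a1_true, a2_true = 5.7, 7.6            # "true" total amounts (made up)
y = [a1_true * g1 + a2_true * g2 for g1, g2 in zip(G1, G2)]

# Normal equations: [[s11, s12], [s12, s22]] [a1, a2]^T = [b1, b2]^T
s11 = sum(g * g for g in G1)
s22 = sum(g * g for g in G2)
s12 = sum(g1 * g2 for g1, g2 in zip(G1, G2))
b1 = sum(g * yi for g, yi in zip(G1, y))
b2 = sum(g * yi for g, yi in zip(G2, y))
det = s11 * s22 - s12 * s12
a1 = (b1 * s22 - b2 * s12) / det
a2 = (b2 * s11 - b1 * s12) / det
print(a1, a2)  # recovers 5.7 and 7.6 on this noise-free data
```

With noisy observations the same normal equations yield the least-squares estimate, and the residual spread gives a first indication of the estimation uncertainty the abstract discusses.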
Directory of Open Access Journals (Sweden)
Y. Miyazawa
2012-10-01
Full Text Available With combined use of ocean-atmosphere simulation models and field observation data, we evaluate the parameters associated with the total caesium-137 amounts of the direct release into the ocean and the atmospheric deposition over the Western North Pacific caused by the accident at the Fukushima Daiichi nuclear power plant (FNPP) that occurred in March 2011. The Green's function approach is adopted for the estimation of two parameters determining the total emission amounts for the period from 12 March to 6 May 2011. It is confirmed that the validity of the estimation depends on the simulation skill near FNPP. The total amount of the direct release is estimated as 5.5–5.9 × 10^15 Bq, while that of the atmospheric deposition is estimated as 5.5–9.7 × 10^15 Bq, a broader range than that of the direct release owing to the uncertainty of the dispersion widely spread over the Western North Pacific.
Parameter estimation of the copernicus decompression model with venous gas emboli in human divers.
Gutvik, Christian R; Dunford, Richard G; Dujic, Zeljko; Brubakk, Alf O
2010-07-01
Decompression Sickness (DCS) may occur when divers decompress from a hyperbaric environment. To prevent this, decompression procedures are used to get safely back to the surface. The models from which these procedures are calculated are traditionally validated using clinical symptoms as an endpoint. However, DCS is an uncommon phenomenon, the wide variation in individual response to decompression stress is poorly understood, and using clinical examination alone for validation is disadvantageous from a modeling perspective. Currently, the only objective and quantitative measure of decompression stress is Venous Gas Emboli (VGE), measured by either ultrasonic imaging or Doppler. VGE has been shown to be statistically correlated with DCS, and is now widely used in science to evaluate decompression stress from a dive. Until recently no mathematical model existed to predict VGE from a dive, which motivated the development of the Copernicus model. The present article compiles a selection of experimental dives and field data containing computer-recorded depth profiles associated with ultrasound measurements of VGE. It describes a parameter estimation problem to fit the model to these data. A total of 185 square bounce dives from DCIEM, Canada, 188 recreational dives with a mix of single, repetitive and multi-day exposures from DAN USA, and 84 experimentally designed decompression dives from Split, Croatia were used, giving a total of 457 dives. Five selected parameters in the Copernicus bubble model were assigned for estimation, and a non-linear optimization problem was formalized with a weighted least-squares cost function. A bias factor for the DCIEM chamber dives was also included. A quasi-Newton algorithm (BFGS) from the TOMLAB numerical package solved the problem, which was proved to be convex. With the parameter set presented in this article, Copernicus can be implemented in any programming language to estimate VGE from an air dive.
Directory of Open Access Journals (Sweden)
Aijia Ouyang
2015-01-01
Full Text Available Nonlinear Muskingum models are important tools in hydrological forecasting. In this paper, we propose a class of new discretization schemes, including a parameter θ, to approximate the nonlinear Muskingum model based on general trapezoid formulas. The accuracy of these schemes is second order if θ ≠ 1/3, but interestingly, when θ = 1/3, the accuracy improves to third order. The parameter estimation is then transformed into an unconstrained optimization problem, which can be solved by a hybrid invasive weed optimization (HIWO) algorithm. Finally, a numerical example is provided to illustrate the effectiveness of the present methods. The numerical results substantiate that the presented methods have better precision in estimating the parameters of nonlinear Muskingum models.
DEFF Research Database (Denmark)
Ditlevsen, Susanne; Samson, Adeline
2016-01-01
Dynamics of the membrane potential in a single neuron can be studied by estimating biophysical parameters from intracellular recordings. Diffusion processes, given as continuous solutions to stochastic differential equations, are widely applied as models for the neuronal membrane potential...... evolution. One-dimensional models are the stochastic integrate-and-fire neuronal diffusion models. Biophysical neuronal models take into account the dynamics of ion channels or synaptic activity, leading to multidimensional diffusion models. Since only the membrane potential can be measured......, this complicates the statistical inference and parameter estimation from these partially observed detailed models. This paper reviews parameter estimation techniques from intracellular recordings in these diffusion models....
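As a much-simplified illustration of parameter estimation in a one-dimensional neuronal diffusion model, the sketch below simulates an Ornstein-Uhlenbeck process (a standard leaky integrate-and-fire membrane model between spikes) and recovers its drift parameters by AR(1) regression on the exact discretization. This estimator is one textbook approach under the assumption of fully observed, equally spaced data, not necessarily a method reviewed in the paper.

```python
import math
import random

def simulate_ou(theta, mu, sigma, dt, n, seed=3):
    """Exact-discretization simulation of an Ornstein-Uhlenbeck process:
    X_{t+dt} = mu + (X_t - mu)*exp(-theta*dt) + Gaussian noise."""
    rng = random.Random(seed)
    phi = math.exp(-theta * dt)
    sd = sigma * math.sqrt((1.0 - phi * phi) / (2.0 * theta))
    x, path = mu, [mu]
    for _ in range(n):
        x = mu + (x - mu) * phi + sd * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def estimate_ou(path, dt):
    """AR(1) regression of X_{t+dt} on X_t: the slope equals exp(-theta*dt),
    so theta = -log(slope)/dt; mu follows from the intercept."""
    x, y = path[:-1], path[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    phi = sxy / sxx
    intercept = my - phi * mx
    theta = -math.log(phi) / dt
    mu = intercept / (1.0 - phi)
    return theta, mu

path = simulate_ou(theta=1.0, mu=0.0, sigma=0.5, dt=0.01, n=20000)
theta_hat, mu_hat = estimate_ou(path, dt=0.01)
print(theta_hat, mu_hat)  # roughly 1.0 and 0.0
```

The partially observed multidimensional models reviewed in the paper are much harder precisely because this direct regression trick is unavailable when only the membrane potential, and not the channel or synaptic states, is measured.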
López-Cuevas, Armando; Castillo-Toledo, Bernardino; Medina-Ceja, Laura; Ventura-Mejía, Consuelo
2015-06-01
Status epilepticus is an emergency condition in patients with a prolonged seizure or recurrent seizures without full recovery between them. The pathophysiological mechanisms of status epilepticus are not well established. Motivated by this, we use a computational modeling approach combined with in vivo electrophysiological data obtained from an experimental model of status epilepticus to infer the changes that may lead to a seizure. Special emphasis is placed on analyzing parameter changes during and after pilocarpine administration. A cubature Kalman filter is used to estimate parameters and states of the model in real time from the observed electrophysiological signals. During basal activity (before pilocarpine administration) the parameters presented a standard deviation below 30% of the mean value, while during SE activity the parameters presented variations larger than 200% of the mean value with respect to the basal state. The excitation-inhibition ratio increased during SE activity by 80% with respect to the transition state and reached its lowest value during cessation. In addition, a progression between slow and fast inhibition before and during this condition was found. This method can be implemented in real time, which is particularly important for the design of stimulation devices that attempt to stop seizures. The changes in the parameters analyzed during seizure activity can lead to a better understanding of the mechanisms of epilepsy and to improved treatments.
Aswath Damodaran
1999-01-01
Over the last three decades, the capital asset pricing model has occupied a central and often controversial place in most corporate finance analysts’ tool chests. The model requires three inputs to compute expected returns: a risk-free rate, a beta for an asset, and an expected risk premium for the market portfolio (over and above the risk-free rate). Betas are estimated, by most practitioners, by regressing returns on an asset against a stock index, with the slope of the regression being the b...
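The regression beta described above can be sketched in a few lines: the OLS slope of asset returns on index returns is the beta estimate, and the intercept is the per-period alpha. The return series below are made-up numbers, constructed so the true beta is 1.5.

```python
# CAPM beta via ordinary least squares: regress asset returns on index
# returns; slope = beta, intercept = per-period alpha. Synthetic data:
# the asset is built with beta 1.5 and alpha 0.001 by construction.
index = [0.012, -0.008, 0.020, 0.004, -0.015, 0.010, 0.006, -0.003]
asset = [0.001 + 1.5 * x for x in index]

n = len(index)
mx = sum(index) / n
my = sum(asset) / n
cov = sum((x - mx) * (y - my) for x, y in zip(index, asset)) / n
var = sum((x - mx) ** 2 for x in index) / n
beta = cov / var           # OLS slope = beta estimate
alpha = my - beta * mx     # intercept = per-period alpha
print(beta, alpha)         # 1.5 and 0.001, up to float rounding
```

With real (noisy) returns the slope is only an estimate, which is why practitioners also report the standard error of the regression beta.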
Model parameter estimation bias induced by earthquake magnitude cut-off
Harte, D. S.
2016-02-01
We evaluate the bias in parameter estimates of the ETAS model. We show that when a simulated catalogue is magnitude-truncated there is considerable bias, whereas when it is not truncated there is no discernible bias. We also discuss two further implied assumptions in the ETAS and other self-exciting models: first, that the triggering boundary magnitude is equivalent to the catalogue completeness magnitude; and second, the assumption in the Gutenberg-Richter relationship that numbers of events increase exponentially as magnitude decreases. These two assumptions are confounded with the magnitude truncation effect. We discuss the effect of these problems on analyses of real earthquake catalogues.
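A full ETAS fit is beyond a short sketch, but the magnitude cut-off effect can be illustrated with the closely related Gutenberg-Richter b-value: Aki's maximum-likelihood estimator is essentially unbiased when the assumed threshold matches the actual completeness magnitude, and strongly biased when the threshold is set below it. The catalogue below is synthetic, not from the paper.

```python
import math
import random

def aki_b_value(mags, m_min):
    """Aki's maximum-likelihood b-value estimator:
    b = log10(e) / (mean magnitude excess above the threshold m_min)."""
    mean_excess = sum(m - m_min for m in mags) / len(mags)
    return math.log10(math.e) / mean_excess

rng = random.Random(42)
b_true, m_complete = 1.0, 2.0
beta = b_true * math.log(10.0)
# Gutenberg-Richter magnitudes: exponential excess above the completeness cut-off
mags = [m_complete + rng.expovariate(beta) for _ in range(50000)]

b_ok = aki_b_value(mags, m_min=2.0)    # threshold matches the true cut-off
b_bad = aki_b_value(mags, m_min=1.5)   # threshold set below completeness
print(b_ok, b_bad)  # b_ok is near 1.0; b_bad is strongly biased low
```

The same mechanism operates in ETAS: when the triggering boundary magnitude and the completeness magnitude are conflated, the truncation leaks into every magnitude-dependent parameter.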
Gupta, Ankur; Rawlings, James B
2014-04-01
Stochastic chemical kinetics has become a staple for mechanistically modeling various phenomena in systems biology. These models, even more so than their deterministic counterparts, pose a challenging problem in the estimation of kinetic parameters from experimental data. As a result of the inherent randomness involved in stochastic chemical kinetic models, the estimation methods tend to be statistical in nature. Three classes of estimation methods are implemented and compared in this paper. The first is the exact method, which uses the continuous-time Markov chain representation of stochastic chemical kinetics and is tractable only for a very restricted class of problems. The next class of methods is based on Markov chain Monte Carlo (MCMC) techniques. The third method, termed conditional density importance sampling (CDIS), is a new method introduced in this paper. The use of these methods is demonstrated on two examples taken from systems biology, one of which is a new model of single-cell viral infection. The applicability, strengths and weaknesses of the three classes of estimation methods are discussed. Using simulated data for the two examples, some guidelines are provided on experimental design to obtain more information from a limited number of measurements.
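As a minimal illustration of the MCMC class of methods (not the exact-likelihood or CDIS methods, and not the paper's viral-infection model), the sketch below estimates a single reaction rate constant from Poisson count data with a random-walk Metropolis-Hastings sampler; the data and settings are assumptions.

```python
import math
import random

rng = random.Random(0)

def poisson(lam):
    """Knuth's method for Poisson sampling (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# Synthetic data: event counts of a zeroth-order reaction over unit time
k_true = 2.0
counts = [poisson(k_true) for _ in range(50)]

def log_post(k):
    """Log-posterior up to a constant: Poisson likelihood, flat prior on k > 0."""
    if k <= 0.0:
        return float("-inf")
    return sum(c * math.log(k) - k for c in counts)

# Random-walk Metropolis-Hastings
k, samples = 1.0, []
lp = log_post(k)
for it in range(20000):
    k_new = k + rng.gauss(0.0, 0.3)          # symmetric proposal
    lp_new = log_post(k_new)
    if rng.random() < math.exp(min(0.0, lp_new - lp)):
        k, lp = k_new, lp_new                # accept
    if it >= 2000:                           # discard burn-in
        samples.append(k)

post_mean = sum(samples) / len(samples)
print(post_mean)  # close to the sample mean of the counts (~k_true)
```

For genuinely intractable stochastic kinetic models the likelihood itself must be approximated (e.g. by particle filtering), which is where methods such as CDIS come in.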
Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.
2014-12-01
This paper evaluates the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model through the use of Bayesian inference techniques by Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin in southeastern Brazil, was selected for the study. We used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov Chain Monte Carlo simulation method named DREAM (VRUGT, 2008a). Two probabilistic models for the residuals were analyzed: (i) the classic [Normal likelihood - r ~ N(0, σ²)]; and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which it is assumed that the differences between observed and simulated flows are correlated, non-stationary, and distributed as a Skew Exponential Power density. The assumptions made for both models were checked to ensure that the estimation of uncertainties in the parameters was not biased. The results showed that the Bayesian approach proved adequate for the proposed objectives, reinforcing the importance of assessing the uncertainties associated with hydrological modeling.
Caruso, Geoffrey; Cavailhès, Jean; Peeters, Dominique; Thomas, Isabelle; Frankhauser, Pierre; Vuidel, Gilles
2015-12-01
This paper describes a dataset of 6284 land transactions prices and plot surfaces in 3 medium-sized cities in France (Besançon, Dijon and Brest). The dataset includes road accessibility as obtained from a minimization algorithm, and the amount of green space available to households in the neighborhood of the transactions, as evaluated from a land cover dataset. Further to the data presentation, the paper describes how these variables can be used to estimate the non-observable parameters of a residential choice function explicitly derived from a microeconomic model. The estimates are used by Caruso et al. (2015) to run a calibrated microeconomic urban growth simulation model where households are assumed to trade-off accessibility and local green space amenities.
Gong, Wei; Duan, Qingyun; Li, Jianduo; Wang, Chen; Di, Zhenhua; Ye, Aizhong; Miao, Chiyuan; Dai, Yongjiu
2016-03-01
Parameter specification is an important source of uncertainty in large, complex geophysical models. These models generally have multiple model outputs that require multiobjective optimization algorithms. Although such algorithms have long been available, they usually require a large number of model runs and are therefore computationally expensive for large, complex dynamic models. In this paper, a multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) algorithm is introduced that aims to reduce computational cost while maintaining optimization effectiveness. Geophysical dynamic models usually have a prior parameterization scheme derived from the physical processes involved, and our goal is to improve all of the objectives by parameter calibration. In this study, we developed a method for directing the search processes toward the region that can improve all of the objectives simultaneously. We tested the MO-ASMO algorithm against NSGA-II and SUMO with 13 test functions and a land surface model - the Common Land Model (CoLM). The results demonstrated the effectiveness and efficiency of MO-ASMO.
Directory of Open Access Journals (Sweden)
Alaa F. Sheta
2016-04-01
Full Text Available In this age of technology, building quality software is essential to competing in the business market. One of the major principles required for any quality and business software product for value fulfillment is reliability. Estimating software reliability early during the software development life cycle saves time and money as it prevents spending larger sums fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of failures that may be encountered during the software testing process. In this paper we explore the advantages of the Grey Wolf Optimization (GWO) algorithm in estimating the SRGM’s parameters with the objective of minimizing the difference between the estimated and the actual number of failures of the software system. We evaluated three different software reliability growth models: the Exponential Model (EXPM), the Power Model (POWM) and the Delayed S-Shaped Model (DSSM). In addition, we used three different datasets to conduct an experimental study in order to show the effectiveness of our approach.
Directory of Open Access Journals (Sweden)
Nazzareno Pierdicca
2008-12-01
Full Text Available The potential of polarimetric SAR data for the estimation of bare soil geophysical parameters (i.e., roughness and soil moisture) is investigated in this work. For this purpose, two forward models available in the literature, able to simulate the measurements of a multifrequency radar polarimeter, have been implemented for use within an inversion scheme. A multiplicative noise has been considered in the multidimensional space of the elements of the polarimetric Covariance Matrix, by adopting a complex Wishart distribution to account for speckle effects. An additive error has also been introduced on the simulated measurements to account for calibration and model errors. Maximum a Posteriori Probability and Minimum Variance criteria have been considered to perform the inversion. As for the algorithms implementing these criteria, simple optimization/integration procedures have been used, and a Neural Network approach has been adopted as well. A correlation between the roughness parameters has also been assumed in the simulation as a priori information, to evaluate its effect on the estimation accuracy. The methods have been tested on simulated data to compare their performance as a function of the number of looks, incidence angles and frequency bands, thus identifying the best radar configuration in terms of estimation accuracy. Polarimetric measurements acquired during the MAC Europe and SIR-C campaigns, over selected bare soil fields, have also been used as validation data.
Estimating the Properties of Hard X-Ray Solar Flares by Constraining Model Parameters
Ireland, J.; Tolbert, A. K.; Schwartz, R. A.; Holman, G. D.; Dennis, B. R.
2013-01-01
We wish to better constrain the properties of solar flares by exploring how parameterized models of solar flares interact with uncertainty estimation methods. We compare four different methods of calculating uncertainty estimates in fitting parameterized models to Ramaty High Energy Solar Spectroscopic Imager X-ray spectra, considering only statistical sources of error. Three of the four methods are based on estimating the scale-size of the minimum in a hypersurface formed by the weighted sum of the squares of the differences between the model fit and the data as a function of the fit parameters, and are implemented as commonly practiced. The fourth method is also based on the difference between the data and the model, but instead uses Bayesian data analysis and Markov chain Monte Carlo (MCMC) techniques to calculate an uncertainty estimate. Two flare spectra are modeled: one from the Geostationary Operational Environmental Satellite X1.3 class flare of 2005 January 19, and the other from the X4.8 flare of 2002 July 23. We find that the four methods give approximately the same uncertainty estimates for the 2005 January 19 spectral fit parameters, but lead to very different uncertainty estimates for the 2002 July 23 spectral fit. This is because each method implements different analyses of the hypersurface, yielding method-dependent results that can differ greatly depending on the shape of the hypersurface. The hypersurface arising from the 2005 January 19 analysis is consistent with a normal distribution; therefore, the assumptions behind the three non-Bayesian uncertainty estimation methods are satisfied and similar estimates are found. The 2002 July 23 analysis shows that the hypersurface is not consistent with a normal distribution, indicating that the assumptions behind the three non-Bayesian uncertainty estimation methods are not satisfied, leading to differing estimates of the uncertainty. We find that the shape of the hypersurface is crucial in understanding
Gerberich, Matthew W.; Oleson, Steven R.
2013-01-01
The Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team at Glenn Research Center has performed integrated system analysis of conceptual spacecraft mission designs since 2006 using a multidisciplinary concurrent engineering process. The set of completed designs was archived in a database to allow for the study of relationships between design parameters. Although COMPASS uses a parametric spacecraft costing model, this research investigated the possibility of using a top-down approach to rapidly estimate overall vehicle costs. This paper presents the relationships between significant design variables, including breakdowns of dry mass, wet mass, and cost. It also develops a model for a broad estimate of these parameters from basic mission characteristics, including the target location distance, the payload mass, the duration, the delta-v requirement, and the type of mission, propulsion, and electrical power. Finally, this paper examines the accuracy of this model with regard to past COMPASS designs, with an assessment of outlying spacecraft, and compares the results to historical data from completed NASA missions.
A Gaussian mixture model based cost function for parameter estimation of chaotic biological systems
Shekofteh, Yasser; Jafari, Sajad; Sprott, Julien Clinton; Hashemi Golpayegani, S. Mohammad Reza; Almasganj, Farshad
2015-02-01
As we know, many biological systems such as neurons or the heart can exhibit chaotic behavior. Conventional methods for parameter estimation in models of these systems have some limitations caused by sensitivity to initial conditions. In this paper, a novel cost function is proposed to overcome those limitations by building a statistical model on the distribution of the real system attractor in state space. This cost function is defined by the use of a likelihood score in a Gaussian mixture model (GMM) which is fitted to the observed attractor generated by the real system. Using that learned GMM, a similarity score can be defined by the computed likelihood score of the model time series. We have applied the proposed method to the parameter estimation of two important biological systems, a neuron and a cardiac pacemaker, which show chaotic behavior. Some simulated experiments are given to verify the usefulness of the proposed approach in clean and noisy conditions. The results show the adequacy of the proposed cost function.
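The idea of the proposed cost function can be sketched in one dimension: fit a GMM to samples from the "real" system's attractor, then score candidate time series by their mean log-likelihood under that GMM; series with matching dynamics score higher. The bimodal toy data below merely stands in for an attractor projection and is an assumption, not the paper's neuron or pacemaker model.

```python
import math
import random

rng = random.Random(5)

def gauss_pdf(x, m, v):
    return math.exp(-(x - m) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

def fit_gmm2(data, iters=50):
    """Basic EM for a two-component 1-D Gaussian mixture."""
    w = [0.5, 0.5]
    m = [min(data), max(data)]       # crude but adequate initialization here
    v = [1.0, 1.0]
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        resp = []
        for x in data:
            p = [w[j] * gauss_pdf(x, m[j], v[j]) for j in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate weights, means and variances
        for j in (0, 1):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(data)
            m[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            v[j] = max(sum(r[j] * (x - m[j]) ** 2
                           for r, x in zip(resp, data)) / nj, 1e-6)
    return w, m, v

def mean_loglik(series, w, m, v):
    """Likelihood score of a candidate series under the learned GMM."""
    return sum(math.log(w[0] * gauss_pdf(x, m[0], v[0])
                        + w[1] * gauss_pdf(x, m[1], v[1]))
               for x in series) / len(series)

def bimodal(n):  # stand-in for samples of the 'real' system attractor
    return [rng.gauss(-2.0, 0.7) if rng.random() < 0.5 else rng.gauss(2.0, 0.7)
            for _ in range(n)]

w, m, v = fit_gmm2(bimodal(1000))
score_same = mean_loglik(bimodal(500), w, m, v)   # matching dynamics
score_diff = mean_loglik([rng.gauss(0.0, 1.0) for _ in range(500)], w, m, v)
print(score_same > score_diff)  # True: the matching series scores higher
```

Because the score compares attractor geometry rather than point-by-point trajectories, it is insensitive to the initial-condition divergence that defeats conventional error-based cost functions in chaotic systems.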
Directory of Open Access Journals (Sweden)
W. Castaings
2009-04-01
Full Text Available Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs.
In this contribution, it is shown that the potential of variational methods for distributed catchment scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.
It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run) and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.
For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently from the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted.
Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.
Castaings, W.; Dartus, D.; Le Dimet, F.-X.; Saulnier, G.-M.
2009-04-01
Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs. In this contribution, it is shown that the potential of variational methods for distributed catchment scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case. It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run) and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation. For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently from the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted. Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.
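The SVD-based dimension analysis mentioned above can be illustrated as follows. This is a toy sketch under stated assumptions: the Jacobian here is a hypothetical low-rank-plus-noise matrix standing in for sensitivities of the simulated discharge with respect to the distributed parameters, not one produced by the adjoint flash flood model.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical Jacobian (50 discharge values x 200 distributed parameters),
# built as rank-3 plus small noise to mimic a few dominant directions.
U = rng.standard_normal((50, 3))
V = rng.standard_normal((3, 200))
J = U @ V + 0.01 * rng.standard_normal((50, 200))

s = np.linalg.svd(J, compute_uv=False)            # singular values, descending
explained = np.cumsum(s**2) / np.sum(s**2)        # cumulative variability captured
k = int(np.searchsorted(explained, 0.99) + 1)     # directions for 99% of variability
```

When most of the variability lives in a few leading singular vectors, `k` is small, which is the observation motivating the SVD-based parametrization (and the need for extra regularization to avoid overfitting).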
HESS Opinions: Advocating process modeling and de-emphasizing parameter estimation
Bahremand, Abdolreza
2016-04-01
Since its origins as an engineering discipline, with its widespread use of "black box" (empirical) modeling approaches, hydrology has evolved into a scientific discipline that seeks a more "white box" (physics-based) modeling approach to solving problems such as the description and simulation of the rainfall-runoff responses of a watershed. There has been much recent debate regarding the future of the hydrological sciences, and several publications have voiced opinions on this subject. This opinion paper seeks to comment and expand upon some recent publications that have advocated an increased focus on process-based modeling while de-emphasizing the focus on detailed attention to parameter estimation. In particular, it offers a perspective that emphasizes a more hydraulic (more physics-based and less empirical) approach to development and implementation of hydrological models.
Inverse problem theory methods for data fitting and model parameter estimation
Tarantola, A
2002-01-01
Inverse Problem Theory is written for physicists, geophysicists and all scientists facing the problem of quantitative interpretation of experimental data. Although it contains a lot of mathematics, it is not intended as a mathematical book, but rather tries to explain how a method of acquisition of information can be applied to the actual world. The book provides a comprehensive, up-to-date description of the methods to be used for fitting experimental data, or to estimate model parameters, and to unify these methods into the Inverse Problem Theory. The first part of the book deals wi
Novel method for incorporating model uncertainties into gravitational wave parameter estimates.
Moore, Christopher J; Gair, Jonathan R
2014-12-19
Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications, these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this Letter, a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalized over using a prior distribution constructed by Gaussian process regression to interpolate the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform extremely well when applied to a toy problem. While we use the application to gravitational wave data analysis to motivate and illustrate the technique, it can be applied in any context where model uncertainties exist.
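The Gaussian-process-regression interpolation step can be sketched generically. This is a hand-rolled GP posterior mean with an RBF kernel; the smooth sine curve below is only an illustrative stand-in for the "waveform difference" interpolated from a small training set, and the length scale and noise level are assumptions.

```python
import numpy as np

def gp_posterior_mean(X, y, Xs, length=0.2, noise=1e-8):
    """GP regression posterior mean (zero prior mean, RBF kernel)."""
    def k(A, B):
        return np.exp(-0.5 * np.subtract.outer(A, B)**2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))      # jitter for numerical stability
    return k(Xs, X) @ np.linalg.solve(K, y)

# Small "training set" of accurate evaluations of a smooth function
X = np.linspace(0.0, 1.0, 8)
y = np.sin(2.0 * np.pi * X)
mean = gp_posterior_mean(X, y, np.array([0.25]))   # interpolate at a new point
```

The interpolated value can then serve, as in the Letter, to build a prior over the model error that is marginalized over analytically.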
Linear and nonlinear ARMA model parameter estimation using an artificial neural network
Chon, K. H.; Cohen, R. J.
1997-01-01
This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
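The conventional least-squares side of the comparison above can be sketched on the AR part alone. This is a minimal NumPy illustration with an arbitrary AR(2) system (orders and coefficients are illustrative, not those of the paper): simulate the process, then recover its coefficients by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
a1, a2 = 0.6, -0.2            # true AR(2) coefficients
n = 20000
y = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + e[t]

# Least-squares estimate: regress y[t] on (y[t-1], y[t-2])
X = np.column_stack([y[1:-1], y[:-2]])
coef, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
```

A feedforward network with a polynomial activation fitted to the same input-output data yields an equivalent parameterization, which is the equivalence the paper exploits.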
Zhang, Feng; Li, Xiang-yang; Qian, Keran
2017-02-01
Shale is observed to have strong transverse isotropy due to its complex intrinsic properties on a small scale. An improved rock physics model has been developed to effectively model this intrinsic anisotropy. Several effective medium theories (Backus averaging, differential effective medium theory and self-consistent approximation) are validated and used in different steps of the workflow to simulate the effects of clay minerals, crack-like pores, kerogen and their preferred orientation on the elastic anisotropy. Anisotropic solid clay is constructed by using different clay mineral constituents instead of assuming it to be an equivalent isotropic or transversely isotropic medium. We differentiate between the voids associated with clay and the voids associated with other minerals based on their varied geometries and their different contributions to the anisotropy. The degree of alignment of clay particles, interconnected pore fluid and kerogen has a great influence on the elastic properties of shale. Therefore, in addition to the pore aspect ratio (asp), a new parameter called the lamination index (LI) related to the distribution of clay particle orientation is proposed and needs to be estimated during the modeling. We then present a practical inversion scheme to enable the prediction of anisotropy parameters for both vertical and horizontal well logs by estimating the lamination index and the pore aspect ratio simultaneously. The predicted elastic constants are demonstrated by using the published laboratory measurements of some Greenhorn shale, and they show better accuracy than the estimations in the existing literature. This model takes different rock properties into consideration and is thus generalized for shale formations from different areas. The application of this model to the well logs of some Upper Triassic shale in the Sichuan basin, and the analyzed results, are presented in part 2 of this paper.
Parameter estimation in a simple stochastic differential equation for phytoplankton modelling
DEFF Research Database (Denmark)
Møller, Jan Kloppenborg; Madsen, Henrik; Carstensen, Jacob
2011-01-01
The use of stochastic differential equations (SDEs) for simulation of aquatic ecosystems has attracted increasing attention in recent years. The SDE setting also provides the opportunity for statistical estimation of ecosystem parameters. We present an estimation procedure, based on Kalman...... filtering and likelihood estimation, which has proven useful in other fields of application. The estimation procedure is presented and the development from ordinary differential equations (ODEs) to SDEs is discussed with emphasis on autocorrelated residuals, commonly encountered with ODEs. The estimation...
Data Handling and Parameter Estimation
DEFF Research Database (Denmark)
Sin, Gürkan; Gernaey, Krist
2016-01-01
literature that are mostly based on the Activated Sludge Model (ASM) framework and their appropriate extensions (Henze et al., 2000). The chapter presents an overview of the most commonly used methods in the estimation of parameters from experimental batch data, namely: (i) data handling and validation, (ii
Selection of the Linear Regression Model According to the Parameter Estimation
Institute of Scientific and Technical Information of China (English)
(author not listed)
2000-01-01
In this paper, based on the theory of parameter estimation, we give a selection method which, in the sense of a desirable property of the parameter estimation, we consider to be very reasonable. Moreover, we offer a calculation method for the selection statistic and an applied example.
A comparison of the parameter estimating procedures for the Michaelis-Menten model.
Tseng, S J; Hsu, J P
1990-08-23
The performance of four parameter-estimating procedures for the adjustable parameters in the Michaelis-Menten model, the maximum initial rate Vmax and the Michaelis-Menten constant Km, is compared: the Lineweaver & Burk transformation (L-B), the Eadie & Hofstee transformation (E-H), the Eisenthal & Cornish-Bowden transformation (ECB), and the Hsu & Tseng random search (H-T). The analysis of the simulated data reveals the following: (i) Vmax can be estimated more precisely than Km. (ii) The sum of square errors, from smallest to largest, follows the sequence H-T, E-H, ECB, L-B. (iii) Considering the sum of square errors, relative error, and computing time, the overall performance follows the sequence H-T, L-B, E-H, ECB, from best to worst. (iv) The performances of E-H and ECB are on the same level. (v) L-B and E-H are appropriate for precisely measured data; H-T should be adopted for data whose error levels are high. (vi) Increasing the number of data points has a positive effect on the performance of H-T, and a negative effect on the performance of L-B, E-H, and ECB.
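The contrast between a linearizing transformation and direct fitting can be sketched as follows. This is a toy comparison, not the paper's procedure: the Lineweaver & Burk reciprocal regression is set against nonlinear least squares via SciPy's `curve_fit` (standing in for a direct-search method such as H-T), on data simulated with illustrative Vmax and Km.

```python
import numpy as np
from scipy.optimize import curve_fit

Vmax, Km = 10.0, 2.0
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
rng = np.random.default_rng(2)
v = Vmax * S / (Km + S) * (1.0 + 0.02 * rng.standard_normal(S.size))  # noisy rates

# Lineweaver & Burk: 1/v = (Km/Vmax)(1/S) + 1/Vmax, fitted by linear regression
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax_lb, Km_lb = 1.0 / intercept, slope / intercept

# Direct nonlinear least squares on the Michaelis-Menten equation itself
popt, _ = curve_fit(lambda s, vm, km: vm * s / (km + s), S, v, p0=(5.0, 1.0))
```

The reciprocal transform inflates the weight of low-rate (high-error) points, which is why linearizations degrade at high noise levels while direct fitting does not.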
Estimation of temporal gait parameters using Bayesian models on acceleration signals.
López-Nava, I H; Muñoz-Meléndez, A; Pérez Sanpablo, A I; Alessi Montero, A; Quiñones Urióstegui, I; Núñez Carrera, L
2016-01-01
The purpose of this study is to develop a system capable of performing calculation of temporal gait parameters using two low-cost wireless accelerometers and artificial intelligence-based techniques, as part of a larger research project for conducting human gait analysis. Ten healthy subjects of different ages participated in this study and performed controlled walking tests. Two wireless accelerometers were placed on their ankles. Raw acceleration signals were processed in order to obtain gait patterns from characteristic peaks related to steps. A Bayesian model was implemented to classify the characteristic peaks into steps or non-steps. The acceleration signals were segmented based on gait events, such as heel strike and toe-off, of actual steps. Temporal gait parameters, such as cadence, ambulation time, step time, gait cycle time, stance and swing phase time, and single and double support time, were estimated from segmented acceleration signals. Gait datasets were divided into two age groups to test the Bayesian models in order to classify the characteristic peaks. The mean error obtained from calculating the temporal gait parameters was 4.6%. Bayesian models are useful techniques that can be applied to classification of gait data of subjects at different ages with promising results.
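Once characteristic peaks have been classified as steps, several of the temporal parameters follow directly from the peak instants. A minimal sketch under the simplifying assumption of perfectly regular, already-classified step peaks (the Bayesian classification stage is not reproduced here):

```python
import numpy as np

def temporal_params(peak_times):
    """Step time, cadence (steps/min) and gait cycle time from step-peak instants."""
    steps = np.diff(peak_times)        # intervals between consecutive step peaks
    step_time = float(np.mean(steps))
    cadence = 60.0 / step_time
    cycle_time = 2.0 * step_time       # one gait cycle = two steps (left + right)
    return step_time, cadence, cycle_time

# Synthetic walk at exactly 2 steps per second over 10 s
peaks = np.arange(0.0, 10.0, 0.5)
step_time, cadence, cycle_time = temporal_params(peaks)
```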
Bayesian Parameter Estimation and Segmentation in the Multi-Atlas Random Orbit Model.
Directory of Open Access Journals (Sweden)
Xiaoying Tang
This paper examines the multiple atlas random diffeomorphic orbit model in Computational Anatomy (CA) for parameter estimation and segmentation of subcortical and ventricular neuroanatomy in magnetic resonance imagery. We assume that there exist multiple magnetic resonance image (MRI) atlases, each atlas containing a collection of locally-defined charts in the brain generated via manual delineation of the structures of interest. We focus on maximum a posteriori estimation of high dimensional segmentations of MR within the class of generative models representing the observed MRI as a conditionally Gaussian random field, conditioned on the atlas charts and the diffeomorphic change of coordinates of each chart that generates it. The charts and their diffeomorphic correspondences are unknown and viewed as latent or hidden variables. We demonstrate that the expectation-maximization (EM) algorithm arises naturally, yielding the likelihood-fusion equation which the a posteriori estimator of the segmentation labels maximizes. The likelihoods being fused are modeled as conditionally Gaussian random fields with mean fields a function of each atlas chart under its diffeomorphic change of coordinates onto the target. The conditional-mean in the EM algorithm specifies the convex weights with which the chart-specific likelihoods are fused. The multiple atlases with the associated convex weights imply that the posterior distribution is a multi-modal representation of the measured MRI. Segmentation results for subcortical and ventricular structures of subjects, within populations of demented subjects, are demonstrated, including the use of multiple atlases across multiple diseased groups.
Bayesian Parameter Estimation and Segmentation in the Multi-Atlas Random Orbit Model.
Tang, Xiaoying; Oishi, Kenichi; Faria, Andreia V; Hillis, Argye E; Albert, Marilyn S; Mori, Susumu; Miller, Michael I
2013-01-01
This paper examines the multiple atlas random diffeomorphic orbit model in Computational Anatomy (CA) for parameter estimation and segmentation of subcortical and ventricular neuroanatomy in magnetic resonance imagery. We assume that there exist multiple magnetic resonance image (MRI) atlases, each atlas containing a collection of locally-defined charts in the brain generated via manual delineation of the structures of interest. We focus on maximum a posteriori estimation of high dimensional segmentations of MR within the class of generative models representing the observed MRI as a conditionally Gaussian random field, conditioned on the atlas charts and the diffeomorphic change of coordinates of each chart that generates it. The charts and their diffeomorphic correspondences are unknown and viewed as latent or hidden variables. We demonstrate that the expectation-maximization (EM) algorithm arises naturally, yielding the likelihood-fusion equation which the a posteriori estimator of the segmentation labels maximizes. The likelihoods being fused are modeled as conditionally Gaussian random fields with mean fields a function of each atlas chart under its diffeomorphic change of coordinates onto the target. The conditional-mean in the EM algorithm specifies the convex weights with which the chart-specific likelihoods are fused. The multiple atlases with the associated convex weights imply that the posterior distribution is a multi-modal representation of the measured MRI. Segmentation results for subcortical and ventricular structures of subjects, within populations of demented subjects, are demonstrated, including the use of multiple atlases across multiple diseased groups.
Testing variational estimation of process parameters and initial conditions of an earth system model
Directory of Open Access Journals (Sweden)
Simon Blessing
2014-03-01
We present a variational assimilation system around a coarse resolution Earth System Model (ESM) and apply it for estimating initial conditions and parameters of the model. The system is based on derivative information that is efficiently provided by the ESM's adjoint, which has been generated through automatic differentiation of the model's source code. In our variational approach, the length of the feasible assimilation window is limited by the size of the domain in control space over which the approximation by the derivative is valid. This validity domain is reduced by non-smooth process representations. We show that in this respect the ocean component is less critical than the atmospheric component. We demonstrate how the feasible assimilation window can be extended to several weeks by modifying the implementation of specific process representations and by switching off processes such as precipitation.
Directory of Open Access Journals (Sweden)
Rajesh Singh
2016-06-01
In this paper, the failure intensity has been characterized by a one-parameter length-biased exponential class Software Reliability Growth Model (SRGM), considering the Poisson process of occurrence of software failures. This proposed length-biased exponential class model is a function of two parameters, namely the total number of failures θ0 and the scale parameter θ1. It is assumed that very little or no information is available about both these parameters. The Bayes estimators for the parameters θ0 and θ1 have been obtained using non-informative priors for each parameter under the squared error loss function. The Monte Carlo simulation technique is used to study the performance of the proposed Bayes estimators against their corresponding maximum likelihood estimators on the basis of risk efficiencies. It is concluded that both proposed Bayes estimators of the total number of failures and the scale parameter perform well for a proper choice of execution time.
Parameter estimation of the vibrational model for the SCOLE experimental facility
Crotts, B. D.; Kakad, Y. P.
1994-01-01
The objective of this study is to experimentally determine an empirical model of the vibrational dynamics of the Spacecraft COntrol Laboratory Experiment (SCOLE) facility. The first two flexible modes of this test article are identified using a linear least-square identification procedure and the data utilized for this procedure are obtained by exciting the structure from a quiescent state with torque wheels. The time history data of rate gyro sensors and accelerometers due to excitation and after excitation in terms of free-decay are used in the parameter estimation of the vibrational model. The free-decay portion of the data is analyzed using the Discrete Fourier transform to determine the optimal model order to use in modelling the response. Linear least-square analysis is then used to select the parameters that best fit the output of an Autoregressive (AR) model to the data. The control effectiveness of the torque wheels is then determined using the excitation portion of the test data, again using linear least squares.
Waller, Niels G; Feuerstahler, Leah
2017-03-17
In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler Item Response Theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in the mirt package (Chalmers, 2012).
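The 4PM item response function itself is simple to state: it is a logistic curve rescaled between a lower asymptote c (guessing) and an upper asymptote d < 1 (slipping). A sketch with illustrative item parameters, not values from the study:

```python
import numpy as np

def irf_4pm(theta, a, b, c, d):
    """Four-parameter IRT model: P(correct | theta) with asymptotes c and d."""
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

# At theta = b the probability sits midway between the two asymptotes
p = irf_4pm(theta=0.0, a=1.5, b=0.0, c=0.2, d=0.9)
```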
PARAMETER ESTIMATION IN NON-HOMOGENEOUS BOOLEAN MODELS: AN APPLICATION TO PLANT DEFENSE RESPONSE
Directory of Open Access Journals (Sweden)
Maria Angeles Gallego
2014-11-01
Many medical and biological problems require extracting information from microscopical images. Boolean models have been extensively used to analyze binary images of random clumps in many scientific fields. In this paper, a particular type of Boolean model with an underlying non-stationary point process is considered. The intensity of the underlying point process is formulated as a fixed function of the distance to a region of interest. A method to estimate the parameters of this Boolean model is introduced, and its performance is checked in two different settings. Firstly, a comparative study with other existing methods is done using simulated data. Secondly, the method is applied to analyze the longleaf data set, which is a very popular data set in the context of point processes, included in the R package spatstat. The obtained results show that the new method provides estimates as accurate as those obtained with more complex methods developed for the general case. Finally, to illustrate the application of this model and this method, a particular type of phytopathological images is analyzed. These images show callose depositions in leaves of Arabidopsis plants. The analysis of callose depositions is very popular in the phytopathological literature to quantify the activity of plant immunity.
On Drift Parameter Estimation in Models with Fractional Brownian Motion by Discrete Observations
Directory of Open Access Journals (Sweden)
Yuliya Mishura
2014-06-01
We study a problem of unknown drift parameter estimation in a stochastic differential equation driven by fractional Brownian motion. We represent the likelihood ratio as a function of the observable process. The form of this representation is in general rather complicated. However, in the simplest case it can be simplified, and we can discretize it to establish the a.s. convergence of the discretized version of the maximum likelihood estimator to the true value of the parameter. We also investigate a non-standard estimator of the drift parameter and further show its strong consistency.
Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.
2015-12-01
Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or a time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition to large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters reproduced the NEE measurement data better in the verification periods, including the annual NEE sum (23% improvement), annual NEE cycle and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) in addition, those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so that their potential for upscaling is demonstrated. However, simulation results also indicate
Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model
DEFF Research Database (Denmark)
Kirkegaard, Poul Henning
1993-01-01
Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...... in a Fisherian sense, is given. The solution is investigated by a simulation study. It is shown that if the experimental length T1 is fixed it may be useful to sample the record at a high sampling rate, since more measurements from the system are then collected. No optimal sampling interval exists....... But if the total number of sample points N is fixed an optimal sampling interval exists. Then it is far worse to use a too large sampling interval than a too small one, since the information losses increase rapidly when the sampling interval increases from the optimal value....
Energy Technology Data Exchange (ETDEWEB)
Passalia, Claudio; Alfano, Orlando M. [INTEC - Instituto de Desarrollo Tecnológico para la Industria Química, CONICET - UNL, Güemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingeniería y Ciencias Hídricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina)]; Brandi, Rodolfo J., E-mail: rbrandi@santafe-conicet.gov.ar [INTEC - Instituto de Desarrollo Tecnológico para la Industria Química, CONICET - UNL, Güemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingeniería y Ciencias Hídricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina)]
2012-04-15
Highlights: • Indoor pollution control via photocatalytic reactors. • Scaling-up methodology based on previously determined mechanistic kinetics. • Radiation interchange model between catalytic walls using configuration factors. • Modeling and experimental validation of a complex geometry photocatalytic reactor. - Abstract: A methodology for modeling photocatalytic reactors for their application in indoor air pollution control is carried out. The methodology implies, firstly, the determination of intrinsic reaction kinetics for the removal of formaldehyde. This is achieved by means of a simple geometry, continuous reactor operating under kinetic control regime and steady state. The kinetic parameters were estimated from experimental data by means of a nonlinear optimization algorithm. The second step was the application of the obtained kinetic parameters to a very different photoreactor configuration. In this case, the reactor is a corrugated wall type using nanosize TiO₂ as catalyst irradiated by UV lamps that provided a spatially uniform radiation field. The radiative transfer within the reactor was modeled through a superficial emission model for the lamps, the ray tracing method and the computation of view factors. The velocity and concentration fields were evaluated by means of a commercial CFD tool (Fluent 12) where the radiation model was introduced externally. The results of the model were compared experimentally in a corrugated wall, bench scale reactor constructed in the laboratory. The overall pollutant conversion showed good agreement between model predictions and experiments, with a root mean square error less than 4%.
Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun
2002-01-01
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)
Wu, Xinrong; Zhang, Shaoqing; Liu, Zhengyu; Rosati, Anthony; Delworth, Thomas L.
2013-04-01
Observational information has a strong geographic dependence that may directly influence the quality of parameter estimation in a coupled climate system. Using an intermediate atmosphere-ocean-land coupled model, the impact of a geographically dependent observing system on parameter estimation is explored within a "twin" experiment framework. The "observations" produced by a "truth" model are assimilated into an assimilation model in which the most sensitive model parameter has a different geographic structure from the "truth", in order to retrieve the "truth" geographic structure of the parameter. To examine the influence of data-sparse areas on parameter estimation, the twin experiment is also performed with an observing system in which the observations in some areas are removed. Results show that traditional single-valued parameter estimation (SPE) attains a global mean of the "truth", while geographically dependent parameter optimization (GPO) can retrieve the "truth" structure of the parameter and therefore significantly improves estimated states and model predictability. This is especially true when an observing system with data-void areas is applied, where the error of the state estimate is reduced by 31% and the corresponding forecast skill is doubled by GPO compared with SPE.
Directory of Open Access Journals (Sweden)
Shifei Yuan
2015-07-01
Accurate estimation of model parameters and state of charge (SoC) is crucial for the lithium-ion battery management system (BMS). In this paper, the stability of the model parameters and SoC estimation under measurement uncertainty is evaluated by three different factors: (i) sampling periods of 1/0.5/0.1 s; (ii) current sensor precisions of ±5/±50/±500 mA; and (iii) voltage sensor precisions of ±1/±2.5/±5 mV. Firstly, the numerical model stability analysis and parametric sensitivity analysis for battery model parameters are conducted under sampling frequencies of 1-50 Hz. The perturbation analysis of current/voltage measurement uncertainty on model parameter variation is performed theoretically. Secondly, the impact of the three different factors on the model parameters and SoC estimation was evaluated with the federal urban driving sequence (FUDS) profile. The bias correction recursive least squares (CRLS) and adaptive extended Kalman filter (AEKF) algorithms were adopted to estimate the model parameters and SoC jointly. Finally, the simulation results were compared and some insightful findings were concluded. For the given battery model and parameter estimation algorithm, the sampling period and current/voltage sampling accuracy presented a non-negligible effect on the estimation results of the model parameters. This research revealed the influence of measurement uncertainty on model parameter estimation, which will provide guidelines to select a reasonable sampling period and current/voltage sensor sampling precisions in engineering applications.
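The recursive least squares core of such online parameter estimation can be sketched generically. This is plain RLS for a linear-in-parameters model, without the bias correction or the AEKF coupling described above; the regression model, forgetting factor and noise level are all illustrative assumptions.

```python
import numpy as np

def rls(phi, y, lam=0.999, delta=1000.0):
    """Recursive least squares for y_k = phi_k . theta + noise."""
    n = phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)                          # large initial covariance
    for phi_k, y_k in zip(phi, y):
        K = P @ phi_k / (lam + phi_k @ P @ phi_k)  # gain vector
        theta = theta + K * (y_k - phi_k @ theta)  # correct estimate with residual
        P = (P - np.outer(K, phi_k @ P)) / lam     # update covariance with forgetting
    return theta

rng = np.random.default_rng(3)
true_theta = np.array([1.5, -0.7])                 # hypothetical model parameters
phi = rng.standard_normal((2000, 2))               # regressors (e.g. current terms)
y = phi @ true_theta + 0.01 * rng.standard_normal(2000)  # noisy "voltage" samples
est = rls(phi, y)
```

In a BMS setting the measurement noise enters through `phi` and `y`, which is exactly where the sensor precision and sampling period studied in the paper make themselves felt.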
Donato, David I.
2013-01-01
A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time using less computer memory than general methods. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p² + 16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
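The single-pass accumulation of X'X and X'y described above can be sketched as follows; `np.linalg.solve` stands in for the publication's Gaussian-elimination solver, and the weighting is omitted for brevity (a weighted fit would scale each outer product and each x·y term by the observation weight):

```python
import numpy as np

def ols_single_pass(rows, p):
    """Accumulate X'X and X'y in one pass over the data, then solve
    the normal equations (X'X) b = X'y.  Only O(p^2) memory is used;
    the full N-by-p design matrix X is never stored."""
    XtX = np.zeros((p, p))
    Xty = np.zeros(p)
    for x, y in rows:                  # rows yields (regressor, response) pairs
        XtX += np.outer(x, x)
        Xty += x * y
    return np.linalg.solve(XtX, Xty)

# toy data: y = 1 + 3*x plus noise, streamed one observation at a time
rng = np.random.default_rng(1)
data = []
for _ in range(1000):
    xi = rng.uniform()
    data.append((np.array([1.0, xi]), 1.0 + 3.0 * xi + rng.normal(scale=0.1)))
b = ols_single_pass(iter(data), 2)
print(b)  # ≈ [1.0, 3.0]
```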
Hydrologic Modeling and Parameter Estimation under Data Scarcity for Java Island, Indonesia
Yanto, M.; Livneh, B.; Rajagopalan, B.; Kasprzyk, J. R.
2015-12-01
The Indonesian island of Java is routinely subjected to intense flooding, drought and related natural hazards, resulting in severe social and economic impacts. Although an improved understanding of the island's hydrology would help mitigate these risks, data scarcity makes modeling challenging. To this end, we developed a hydrological representation of Java using the Variable Infiltration Capacity (VIC) model to simulate the hydrologic processes of several watersheds across the island. We measured model performance using the Nash-Sutcliffe Efficiency (NSE) at a monthly time step. Data scarcity and quality issues for precipitation and streamflow warranted the application of a quality-control procedure to ensure consistency among watersheds, leaving 7 watersheds for analysis. To optimize model performance, the calibration parameters were estimated using the Borg Multi-Objective Evolutionary Algorithm (Borg MOEA), which offers efficient searching of the parameter space, adaptive population sizing and a facility for escaping local optima. The results show that calibration performance is best (NSE ~ 0.6 - 0.9) in the eastern part of the domain and moderate (NSE ~ 0.3 - 0.5) in the western part of the island. The validation results are lower (NSE ~ 0.1 - 0.5 in the east and NSE ~ 0.1 - 0.4 in the west). We surmise that the presence of outliers and stark differences in climate between the calibration and validation periods in the western watersheds are responsible for the low NSE in this region. In addition, we found that approximately 70% of the total error was contributed by less than 20% of the data. The spatial variability of model performance suggests the influence of both topographical and hydroclimatic controls on the hydrological processes. Most watersheds in the eastern part perform better in the wet season, and vice versa for the western part. This modeling framework is one of the first attempts at comprehensively simulating the hydrology of this maritime, tropical…
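For reference, the Nash-Sutcliffe Efficiency used above has a simple closed form: 1 minus the ratio of the squared model error to the squared deviation of the observations from their mean.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 means the model
    is no better than predicting the mean of the observations."""
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = [3.0, 5.0, 4.0, 6.0]
print(nse([2.9, 5.2, 3.8, 6.1], obs))  # 0.98: close to a perfect fit
print(nse([4.5] * 4, obs))             # 0.0: no better than the mean
```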
Inflation and cosmological parameter estimation
Energy Technology Data Exchange (ETDEWEB)
Hamann, J.
2007-05-15
In this work, we focus on two aspects of cosmological data analysis: inference of parameter values and the search for new effects in the inflationary sector. Constraints on cosmological parameters are commonly derived under the assumption of a minimal model. We point out that this procedure systematically underestimates errors and possibly biases estimates, due to overly restrictive assumptions. In a more conservative approach, we analyse cosmological data using a more general eleven-parameter model. We find that regions of the parameter space that were previously thought ruled out are still compatible with the data; the bounds on individual parameters are relaxed by up to a factor of two, compared to the results for the minimal six-parameter model. Moreover, we analyse a class of inflation models, in which the slow roll conditions are briefly violated, due to a step in the potential. We show that the presence of a step generically leads to an oscillating spectrum and perform a fit to CMB and galaxy clustering data. We do not find conclusive evidence for a step in the potential and derive strong bounds on quantities that parameterise the step. (orig.)
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Juárez Tomás, María Silvina; Bru de Labanda, Elena; Pesce de Ruiz Holgado, Aída; Nader-Macías, María Elena
2002-01-01
Lactobacilli are widely described as probiotic microorganisms used to restore the ecological balance of different animal or human tracts. For their use as probiotics, bacteria must show certain characteristics or properties related to the ability to adhere to mucosae or epithelia, or show inhibition against pathogenic microorganisms. It is of primary interest to obtain the highest biomass and viability of the selected microorganisms. In this report, the growth of seven vaginal lactobacilli strains in four different growth media and at several inoculum percentages was compared, and the values of the growth parameters (lag phase time, maximum growth rate, maximum optical density) were obtained by applying the Gompertz model to the experimental data. The application and estimation of this model are discussed, and the evaluation of the growth parameters is analyzed to compare the growth conditions of the lactobacilli. Thus, these laboratory results provide a basis for testing different culture conditions to determine the best conditions in which to grow probiotic lactobacilli for technological applications.
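The Gompertz fit described above can be sketched as follows. The paper's exact parameterization is not given; this assumes the common Zwietering reparameterization, whose parameters are directly the asymptotic density, maximum growth rate and lag time, and fits synthetic optical-density readings with `scipy.optimize.curve_fit`:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    """Zwietering form of the Gompertz curve: A is the maximum (asymptotic)
    density, mu the maximum growth rate, lam the lag phase time."""
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

# synthetic optical-density readings from a "true" curve plus noise
t = np.linspace(0, 12, 25)
rng = np.random.default_rng(2)
y = gompertz(t, 1.2, 0.4, 2.0) + rng.normal(scale=0.01, size=t.size)
popt, _ = curve_fit(gompertz, t, y, p0=(1.0, 0.3, 1.0))
print(popt)  # ≈ [1.2, 0.4, 2.0]: recovered (A, mu, lam)
```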
Briseño, Jessica; Herrera, Graciela S.
2010-05-01
Herrera (1998) proposed a method for the optimal design of groundwater quality monitoring networks that involves space and time in a combined form. The method was applied later by Herrera et al. (2001) and by Herrera and Pinder (2005). To obtain the estimates of the contaminant concentration being analyzed, this method uses a space-time ensemble Kalman filter, based on a stochastic flow and transport model. When the method is applied, it is important that the characteristics of the stochastic model be congruent with field data, but, in general, it is laborious to achieve a good match between them manually. For this reason, the main objective of this work is to extend the space-time ensemble Kalman filter proposed by Herrera to estimate the hydraulic conductivity together with hydraulic head and contaminant concentration, and to apply it in a synthetic example. The method has three steps: 1) Given the mean and the semivariogram of the natural logarithm of hydraulic conductivity (ln K), random realizations of this parameter are obtained through two alternatives: sequential Gaussian simulation (SGSim) and the Latin hypercube sampling method (LHC). 2) The stochastic model is used to produce hydraulic head (h) and contaminant concentration (C) realizations for each one of the conductivity realizations. With these realizations, the means of ln K, h and C are obtained; for h and C, the means are calculated in space and time, as is the cross-covariance matrix of h, ln K and C in space and time. The covariance matrix is obtained by averaging products of the ln K, h and C realizations at the estimation points and times, and at the positions and times with data for the analyzed variables. The estimation points are the positions at which estimates of ln K, h or C are gathered; analogously, the estimation times are those at which estimates of any of the three variables are gathered. 3) Finally, the ln K, h and C estimates are obtained using the space-time ensemble Kalman filter. The realization mean for each one…
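Step 3's ensemble Kalman update can be sketched generically as below. This is a standard stochastic EnKF update on an augmented state (here a stand-in 3-vector rather than stacked ln K, h and C fields), not the authors' exact space-time formulation:

```python
import numpy as np

def enkf_update(ensemble, H, d, obs_err):
    """Stochastic ensemble Kalman update.  ensemble: (n_state, n_ens)
    matrix of augmented-state realizations (e.g. ln K, h and C stacked);
    H: linear observation operator; d: observed values; obs_err: std of
    the observation noise."""
    n_state, n_ens = ensemble.shape
    rng = np.random.default_rng(3)
    A = ensemble - ensemble.mean(axis=1, keepdims=True)   # anomalies
    HA = H @ A
    P_dd = HA @ HA.T / (n_ens - 1) + obs_err**2 * np.eye(len(d))
    P_xd = A @ HA.T / (n_ens - 1)                         # cross-covariance
    K = P_xd @ np.linalg.inv(P_dd)                        # Kalman gain
    # perturb observations for each member (stochastic EnKF)
    D = d[:, None] + rng.normal(scale=obs_err, size=(len(d), n_ens))
    return ensemble + K @ (D - H @ ensemble)

# tiny example: 3-dim augmented state, one observed component
ens = np.random.default_rng(4).normal(size=(3, 200)) + np.array([[1.0], [2.0], [3.0]])
H = np.array([[0.0, 1.0, 0.0]])       # we observe only the second component
updated = enkf_update(ens, H, np.array([2.5]), obs_err=0.1)
print(updated[1].mean())              # pulled from ~2.0 toward the observation 2.5
```

Because the gain uses the cross-covariance of the whole augmented state with the observed quantities, an observation of h or C also updates the unobserved ln K components.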
Parameter Estimation Using VLA Data
Venter, Willem C.
The main objective of this dissertation is to extract parameters from multiple wavelength images, on a pixel-to-pixel basis, when the images are corrupted with noise and a point spread function. The data used are from the field of radio astronomy. The very large array (VLA) at Socorro in New Mexico was used to observe planetary nebula NGC 7027 at three different wavelengths, 2 cm, 6 cm and 20 cm. A temperature model, describing the temperature variation in the nebula as a function of optical depth, is postulated. Mathematical expressions for the brightness distribution (flux density) of the nebula, at the three observed wavelengths, are obtained. Using these three equations and the three data values available, one from the observed flux density map at each wavelength, it is possible to solve for two temperature parameters and one optical depth parameter at each pixel location. Due to the fact that the number of unknowns equal the number of equations available, estimation theory cannot be used to smooth any noise present in the data values. It was found that a direct solution of the three highly nonlinear flux density equations is very sensitive to noise in the data. Results obtained from solving for the three unknown parameters directly, as discussed above, were not physical realizable. This was partly due to the effect of incomplete sampling at the time when the data were gathered and to noise in the system. The application of rigorous digital parameter estimation techniques result in estimated parameters that are also not physically realizable. The estimated values for the temperature parameters are for example either too high or negative, which is not physically possible. Simulation studies have shown that a "double smoothing" technique improves the results by a large margin. This technique consists of two parts: in the first part the original observed data are smoothed using a running window and in the second part a similar smoothing of the estimated parameters
DEFF Research Database (Denmark)
Chon, K H; Hoyer, D; Armoundas, A A;
1999-01-01
part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction...
Modeling the vertical soil organic matter profile using Bayesian parameter estimation
Directory of Open Access Journals (Sweden)
M. C. Braakhekke
2013-01-01
Full Text Available The vertical distribution of soil organic matter (SOM) in the profile may constitute an important factor for soil carbon cycling. However, the formation of the SOM profile is currently poorly understood due to equifinality, caused by the entanglement of several processes: input from roots, mixing due to bioturbation, and organic matter leaching. In this study we quantified the contribution of these three processes using Bayesian parameter estimation for the mechanistic SOM profile model SOMPROF. Based on organic carbon measurements, 13 parameters related to decomposition and transport of organic matter were estimated for two temperate forest soils: an Arenosol with a mor humus form (Loobos, the Netherlands) and a Cambisol with mull-type humus (Hainich, Germany). Furthermore, the use of the radioisotope ²¹⁰Pb_ex as a tracer for vertical SOM transport was studied. For Loobos, the calibration results demonstrate the importance of organic matter transport with the liquid phase for shaping the vertical SOM profile, while the effects of bioturbation are generally negligible. These results are in good agreement with expectations given in situ conditions. For Hainich, the calibration offered three distinct explanations for the observations (three modes in the posterior distribution). With the addition of ²¹⁰Pb_ex data and prior knowledge, as well as additional information about in situ conditions, we were able to identify the most likely explanation, which indicated that root litter input is a dominant process for the SOM profile. For both sites the organic matter appears to comprise mainly adsorbed but potentially leachable material, pointing to the importance of organo-mineral interactions. Furthermore, organic matter in the mineral soil appears to be mainly derived from root litter, supporting previous studies that highlighted the importance of root input for soil carbon sequestration. The ²¹⁰Pb…
Modeling the vertical soil organic matter profile using Bayesian parameter estimation
Directory of Open Access Journals (Sweden)
M. C. Braakhekke
2012-08-01
Full Text Available The vertical distribution of soil organic matter (SOM) in the profile may constitute a significant factor for soil carbon cycling. However, the formation of the SOM profile is currently poorly understood due to equifinality, caused by the entanglement of several processes: input from roots, mixing due to bioturbation, and organic matter leaching. In this study we quantified the contribution of these three processes using Bayesian parameter estimation for the mechanistic SOM profile model SOMPROF. Based on organic carbon measurements, 13 parameters related to decomposition and transport of organic matter were estimated for two temperate forest soils: an Arenosol with a mor humus form (Loobos, the Netherlands) and a Cambisol with mull-type humus (Hainich, Germany). Furthermore, the use of the radioisotope ²¹⁰Pb_ex as a tracer for vertical SOM transport was studied.
For Loobos, the calibration results demonstrate the importance of liquid phase transport for shaping the vertical SOM profile, while the effects of bioturbation are generally negligible. These results are in good agreement with expectations given in situ conditions. For Hainich, the calibration offered three distinct explanations for the observations (three modes in the posterior distribution). With the addition of ²¹⁰Pb_ex data and prior knowledge, as well as additional information about in situ conditions, we were able to identify the most likely explanation, which identified root litter input as the dominant process for the SOM profile. For both sites the organic matter appears to comprise mainly adsorbed but potentially leachable material, pointing to the importance of organo-mineral interactions. Furthermore, organic matter in the mineral soil appears to be mainly derived from root litter, supporting previous studies that highlighted the importance of root input for soil carbon sequestration. The ²¹⁰Pb…
Mente, Carsten; Prade, Ina; Brusch, Lutz; Breier, Georg; Deutsch, Andreas
2011-07-01
Lattice-gas cellular automata (LGCAs) can serve as stochastic mathematical models for collective behavior (e.g. pattern formation) emerging in populations of interacting cells. In this paper, a two-phase optimization algorithm for global parameter estimation in LGCA models is presented. In the first phase, local minima are identified through gradient-based optimization. Algorithmic differentiation is adopted to calculate the necessary gradient information. In the second phase, for global optimization of the parameter set, a multi-level single-linkage method is used. As an example, the parameter estimation algorithm is applied to a LGCA model for early in vitro angiogenic pattern formation.
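The multi-level single-linkage method itself involves clustering of sample points; as a simplified stand-in for the two-phase idea described above (global sampling plus gradient-based local refinement), here is a multistart sketch on a hypothetical multimodal objective:

```python
import numpy as np
from scipy.optimize import minimize

def two_phase_minimize(f, bounds, n_starts=100, seed=5):
    """Illustrative two-phase global search: sample many start points
    (global phase), run a gradient-based local optimizer from each
    (local phase), and keep the best local minimum found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(f, x0, method="L-BFGS-B", bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best

# multimodal test objective: many local minima, global minimum at the origin
f = lambda p: np.sum(p**2 + 1.0 - np.cos(3.0 * p))
res = two_phase_minimize(f, [(-3.0, 3.0), (-3.0, 3.0)])
print(res.x)  # ≈ [0, 0]
```

Multi-level single-linkage improves on plain multistart by avoiding repeated local searches that would converge to an already-found minimum.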
Directory of Open Access Journals (Sweden)
B. Bisselink
2016-12-01
New hydrological insights: Results indicate large discrepancies in terms of the linear correlation (r), bias (β) and variability (γ) between the observed and simulated streamflows when using different precipitation estimates as model input. The best model performance was obtained with products which ingest gauge data for bias correction. However, catchment behavior was difficult to capture using a single parameter set, as was obtaining a single robust parameter set for each catchment, which indicates that transposing model parameters should be carried out with caution. Model parameters depend on the precipitation characteristics of the calibration period and should therefore only be used in target periods with similar precipitation characteristics (wet/dry).
Ebrahimian, Hossein; Jalayer, Fatemeh
2017-08-29
In the immediate aftermath of a strong earthquake, and in the presence of an ongoing aftershock sequence, scientific advisories in the form of seismicity forecasts play a crucial role in emergency decision-making and risk mitigation. Epidemic Type Aftershock Sequence (ETAS) models are frequently used for forecasting the spatio-temporal evolution of seismicity in the short term. We propose robust forecasting of seismicity based on the ETAS model, exploiting the link between Bayesian inference and Markov chain Monte Carlo simulation. The methodology considers not only the uncertainty in the model parameters, conditioned on the available catalogue of events that occurred before the forecasting interval, but also the uncertainty in the sequence of events that are going to happen during the forecasting interval. We demonstrate the methodology by retrospective early forecasting of seismicity associated with the 2016 Amatrice seismic sequence in central Italy. We provide robust spatio-temporal short-term seismicity forecasts for various time intervals in the first few days elapsed after each of the three main events within the sequence, which predict the seismicity within plus or minus two standard deviations of the mean estimate within the few hours elapsed after the main event.
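The paper's spatio-temporal ETAS formulation is not reproduced here; as a reference point, the standard purely temporal ETAS conditional intensity can be computed as follows (all parameter values below are illustrative, not estimates from the Amatrice sequence):

```python
import numpy as np

def etas_intensity(t, times, mags, mu, K, alpha, c, p, m0):
    """Temporal ETAS conditional intensity at time t given past events:
    lambda(t) = mu + sum_i K * exp(alpha*(m_i - m0)) / (t - t_i + c)^p,
    i.e. a background rate plus modified-Omori aftershock triggering,
    with productivity growing exponentially in magnitude."""
    times = np.asarray(times, float)
    mags = np.asarray(mags, float)
    past = times < t
    return mu + np.sum(K * np.exp(alpha * (mags[past] - m0))
                       / (t - times[past] + c) ** p)

# intensity (events/day) half a day after a M6.0 mainshock at t=0
# followed by a M4.5 aftershock at t=0.2 days
lam = etas_intensity(0.5, [0.0, 0.2], [6.0, 4.5],
                     mu=0.1, K=0.05, alpha=1.2, c=0.01, p=1.1, m0=3.0)
print(lam)
```

Bayesian treatment would place a posterior over (mu, K, alpha, c, p) and propagate it through forecasts of the expected event count over the forecasting interval.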
Energy Technology Data Exchange (ETDEWEB)
Kim, Joo Yeon; Lee, Seung Hyun; Park, Tai Jin [Korean Association for Radiation Application, Seoul (Korea, Republic of)
2016-06-15
Any real application of Bayesian inference must acknowledge that both the prior distribution and the likelihood function have only been specified as more or less convenient approximations to whatever the analyzer's true belief might be. If the inferences from the Bayesian analysis are to be trusted, it is important to determine that they are robust to such variations of prior and likelihood as might also be consistent with the analyzer's stated beliefs. Robust Bayesian inference was applied to atmospheric dispersion assessment using a Gaussian plume model. The scope of contamination was specified through the uncertainty of distribution type and parametric variability. The probabilistic distribution of the model parameters was assumed to be contaminated by the symmetric unimodal and unimodal distributions. The distribution of the sector-averaged relative concentrations was then calculated by applying the contaminated priors to the model parameters. The sector-averaged concentrations for each stability class were compared under the symmetric unimodal and unimodal priors, respectively, as the contaminated priors based on the class of ε-contamination. Although ε was set to 10%, the medians under the symmetric unimodal priors agreed within 10% with those under the plausible priors. However, the medians under the unimodal priors agreed only within 20% for a few downwind distances. Robustness was assessed by estimating how far the results of the Bayesian inferences vary under reasonable variations of the plausible priors. From these robust inferences, it is reasonable to apply the symmetric unimodal priors when analyzing the robustness of the Bayesian inferences.
Peters-Lidard, Christa D.
2011-01-01
Center (EMC) for their land data assimilation systems to support weather and climate modeling. LIS not only consolidates the capabilities of these two systems, but also enables a much larger variety of configurations with respect to horizontal spatial resolution, input datasets and choice of land surface model through "plugins". LIS has been coupled to the Weather Research and Forecasting (WRF) model to support studies of land-atmosphere coupling by enabling ensembles of land surface states to be tested against multiple representations of the atmospheric boundary layer. LIS has also been demonstrated for parameter estimation; studies have shown that sequential remotely sensed soil moisture products can be used to derive soil hydraulic and texture properties, given a sufficient dynamic range in the soil moisture retrievals and accurate precipitation inputs. LIS has also recently been demonstrated for multi-model data assimilation using an ensemble Kalman filter for sequential assimilation of soil moisture, snow, and temperature. Ongoing work has demonstrated the value of bias correction as part of the filter, and also that of joint calibration and assimilation. Examples and case studies demonstrating the capabilities and impacts of LIS for hydrometeorological modeling, assimilation and parameter estimation will be presented as advancements towards the next generation of integrated observation and modeling systems.
Sundar, Sriram; Dreyer, Jason T.; Singh, Rajendra
2016-12-01
A new cam-follower system experiment capable of generating periodic impacts is utilized to estimate the impact damping model parameters. The experiment is designed to precisely measure the forces and acceleration during impulsive events. The impact damping force is described as a product of a damping coefficient, the indentation displacement raised to the power of a damping index, and the time derivative of the indentation displacement. A novel time-domain based technique and a signal processing procedure are developed to accurately estimate the damping coefficient and index. The measurements are compared to the predictions from a corresponding contact mechanics model with trial values of damping parameters on the basis of a particular residue; both parameters are quantified based on the minimization of this residue. The estimated damping parameters are justified using the literature and an equivalent coefficient of restitution model is developed. Also, some unresolved issues regarding the impact damping model are addressed.
Parameter estimation of cutting tool temperature nonlinear model using PSO algorithm
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
In a cutting tool temperature experiment, a large amount of related data becomes available. In order to define the relationship among the experimental data, a nonlinear regression curve of cutting tool temperature must be constructed from the data. This paper proposes the Particle Swarm Optimization (PSO) algorithm for estimating the parameters of such a curve. The PSO algorithm is an evolutionary method based on a very simple concept. Comparison of PSO results with those of GA and LS methods showed that the PSO algorithm is more effective for estimating the parameters of the above curve.
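A minimal PSO of the kind described, applied to a stand-in nonlinear curve (the paper's temperature-curve form is not given, so a power law with hypothetical coefficients is used here), might look like:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=6):
    """Minimal particle swarm optimizer: each particle remembers its
    personal best, the swarm shares a global best, and velocities mix
    inertia with attraction toward both bests."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# fit a nonlinear curve y = a * x**b to synthetic "experiment" data
xd = np.linspace(1, 10, 20)
yd = 3.0 * xd ** 0.5
sse = lambda p: np.sum((p[0] * xd ** p[1] - yd) ** 2)
best, err = pso(sse, [(0.1, 10.0), (0.1, 2.0)])
print(best)  # ≈ [3.0, 0.5]
```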
Peckham, Scott D.; Kelbert, Anna; Hill, Mary C.; Hutton, Eric W. H.
2016-05-01
Component-based modeling frameworks make it easier for users to access, configure, couple, run and test numerical models. However, they do not typically provide tools for uncertainty quantification or data-based model verification and calibration. To better address these important issues, modeling frameworks should be integrated with existing, general-purpose toolkits for optimization, parameter estimation and uncertainty quantification. This paper identifies and then examines the key issues that must be addressed in order to make a component-based modeling framework interoperable with general-purpose packages for model analysis. As a motivating example, one of these packages, DAKOTA, is applied to a representative but nontrivial surface process problem of comparing two models for the longitudinal elevation profile of a river to observational data. Results from a new mathematical analysis of the resulting nonlinear least squares problem are given and then compared to results from several different optimization algorithms in DAKOTA.
Boesten, J.J.T.I.
2000-01-01
User-dependent subjectivity in the process of testing pesticide leaching models is relevant because it may result in wrong interpretation of model tests. About 20 modellers used the same data set to test pesticide leaching models (one or two models per modeller). The data set included laboratory stu
Directory of Open Access Journals (Sweden)
Issa Ahmed Abed
2016-12-01
Full Text Available This paper presents a method to enhance the firefly algorithm by coupling it with a local search. The constructed technique is applied to identify the parameters of a solar photovoltaic model, where the method proved able to obtain the photovoltaic model parameters. The standard firefly algorithm (FA), the electromagnetism-like (EM) algorithm, and the electromagnetism-like algorithm without local search (EMW) are all compared with the suggested method to test its capability to solve this model.
Distributed parameter estimation for NASA Mini-Mast truss using Timoshenko beam model
Shen, Ji-Yao; Huang, Jen-Kuang; Taylor, Lawrence W., Jr.
1991-01-01
A more accurate Timoshenko beam model is used to characterize the bending behavior of the truss. A maximum likelihood estimator for the Timoshenko beam model has been formulated. A closed-form solution of the Timoshenko beam equation, for a uniform cantilevered beam with two concentrated masses, is derived so that the procedure for the estimation of modal characteristics is much improved. Fitting the updated model to the NASA Mini-Mast test data is demonstrated.
Izsak, F.
2006-01-01
A numerical maximum likelihood (ML) estimation procedure is developed for the constrained parameters of multinomial distributions. The main difficulty involved in computing the likelihood function is the precise and fast determination of the multinomial coefficients. For this the coefficients are…
A systematic study of Lyman-Alpha transfer through outflowing shells: Model parameter estimation
Gronke, Max; Dijkstra, Mark
2015-01-01
Outflows promote the escape of Lyman-$\\alpha$ (Ly$\\alpha$) photons from dusty interstellar media. The process of radiative transfer through interstellar outflows is often modelled by a spherically symmetric, geometrically thin shell of gas that scatters photons emitted by a central Ly$\\alpha$ source. Despite its simplified geometry, this `shell model' has been surprisingly successful at reproducing observed Ly$\\alpha$ line shapes. In this paper we perform automated line fitting on a set of noisy simulated shell model spectra, in order to determine whether degeneracies exist between the different shell model parameters. While there are some significant degeneracies, we find that most parameters are accurately recovered, especially the HI column density ($N_{\\rm HI}$) and outflow velocity ($v_{\\rm exp}$). This work represents an important first step in determining how the shell model parameters relate to the actual physical properties of Ly$\\alpha$ sources. To aid further exploration of the parameter space, we ...
Yuan, Chunhua; Wang, Jiang; Yi, Guosheng
2017-03-01
Estimation of ion channel parameters is crucial to the spike initiation of neurons. Biophysical neuron models have numerous ion channel parameters, but only a few of them play key roles in the firing patterns of the models, so we choose three parameters featuring the adaptation in the Ermentrout neuron model to be estimated. However, the traditional particle swarm optimization (PSO) algorithm easily falls into local optima and exhibits premature convergence on some problems. In this paper, we propose an improved method that mixes a concave function with a dynamic logistic chaotic map to adjust the inertia weights according to the fitness values, effectively improving the global convergence ability of the algorithm. The accurate firing trajectories predicted by the model rebuilt with the estimated parameters show that estimating only a few important ion channel parameters can establish the model well, and that the proposed algorithm is effective. Estimates from two classic PSO algorithms are also compared with the improved PSO to verify that the proposed algorithm avoids local optima and quickly converges to the optimal value. The results provide important theoretical foundations for building biologically realistic neuron models.
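The paper's exact inertia-weight formula is not given; the sketch below is one hypothetical way to mix a concave decay schedule with a logistic chaotic map, in the spirit the abstract describes:

```python
import numpy as np

def chaotic_inertia(iters, w_max=0.9, w_min=0.4, z0=0.37):
    """Hypothetical sketch: a concave decreasing inertia schedule
    perturbed by a logistic chaotic map z <- 4 z (1 - z), one way to
    keep a swarm from settling into a local optimum too early."""
    z = z0
    ws = []
    for k in range(iters):
        z = 4.0 * z * (1.0 - z)                                  # logistic map, r = 4
        base = w_min + (w_max - w_min) * (1.0 - (k / iters) ** 2)  # concave decay
        chaotic = w_min + (w_max - w_min) * z                    # chaotic term
        ws.append(0.9 * base + 0.1 * chaotic)                    # blend the two
    return ws

ws = chaotic_inertia(100)
print(round(ws[0], 3), round(ws[-1], 3))  # starts near w_max, ends near w_min
```

Early iterations keep a large, slightly jittered inertia for exploration; later iterations shrink it for exploitation, with the chaotic term preventing a perfectly monotone schedule.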
Jhorar, R.K.
2002-01-01
Key words: evapotranspiration, effective soil hydraulic parameters, remote sensing, regional water management, groundwater use, Bhakra Irrigation System, India.The meaningful application of water management simulation models at regional scale for the analysis of alternate water manage
Digital Repository Service at National Institute of Oceanography (India)
Chakraborty, B.; Kodagali, V.N.
In this paper, the Helmholtz-Kirchhoff (H-K) roughness model is employed to characterize seafloor sediment and roughness parameters from the eastern sector of the Southern Ocean. The multibeam Hydrosweep system's angular-backscatter data, which…
Estimation of physical parameters in induction motors
DEFF Research Database (Denmark)
Børsting, H.; Knudsen, Morten; Rasmussen, Henrik
1994-01-01
Parameter estimation in induction motors is a field of great interest, because accurate models are needed for robust dynamic control of induction motors.
Leavesley, G.H.; Markstrom, S.L.; Restrepo, Pedro J.; Viger, R.J.
2002-01-01
A modular approach to model design and construction provides a flexible framework in which to focus the multidisciplinary research and operational efforts needed to facilitate the development, selection, and application of the most robust distributed modelling methods. A variety of modular approaches have been developed, but with little consideration for compatibility among systems and concepts. Several systems are proprietary, limiting any user interaction. The US Geological Survey modular modelling system (MMS) is a modular modelling framework that uses an open source software approach to enable all members of the scientific community to address collaboratively the many complex issues associated with the design, development, and application of distributed hydrological and environmental models. Implementation of a common modular concept is not a trivial task. However, it brings the resources of a larger community to bear on the problems of distributed modelling, provides a framework in which to compare alternative modelling approaches objectively, and provides a means of sharing the latest modelling advances. The concepts and components of the MMS are described and an example application of the MMS, in a decision-support system context, is presented to demonstrate current system capabilities. Copyright ?? 2002 John Wiley and Sons, Ltd.
DEFF Research Database (Denmark)
Ditlevsen, Susanne; Samson, Adeline
2016-01-01
Dynamics of the membrane potential in a single neuron can be studied by estimating biophysical parameters from intracellular recordings. Diffusion processes, given as continuous solutions to stochastic differential equations, are widely applied as models for the neuronal membrane potential evolution. One-dimensional examples are the stochastic integrate-and-fire neuronal diffusion models. Biophysical neuronal models take into account the dynamics of ion channels or synaptic activity, leading to multidimensional diffusion models. Since only the membrane potential can be measured…
PARAMETER ESTIMATION OF EXPONENTIAL DISTRIBUTION
Institute of Scientific and Technical Information of China (English)
XU Haiyan; FEI Heliang
2005-01-01
Because of the importance of grouped data, many scholars have devoted themselves to the study of this kind of data, but few publications have been concerned with the threshold parameter. In this paper, we assume that the threshold parameter is smaller than the first observing point. Then, on the basis of the two-parameter exponential distribution, the maximum likelihood estimates of both parameters are given, necessary and sufficient conditions for their existence and uniqueness are established, and the asymptotic properties of the estimates are also presented, from which approximate confidence intervals of the parameters are derived. At the same time, the estimation of the parameters is generalized, and some methods are introduced to obtain explicit expressions of these generalized estimates. Also, a special case where the first failure time of the units is observed is considered.
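For the complete-sample (ungrouped) case, the maximum likelihood estimates of the two-parameter exponential distribution have a simple closed form: the threshold is estimated by the sample minimum and the scale by the mean excess over it. A minimal sketch with illustrative values — this is the textbook complete-sample case, not the grouped-data estimators derived in the paper:

```python
import numpy as np

def two_param_exponential_mle(x):
    """MLE for the two-parameter exponential density
    f(x) = (1/theta) * exp(-(x - mu) / theta), x >= mu,
    from a complete sample: mu_hat is the sample minimum,
    theta_hat the mean excess over it."""
    x = np.asarray(x, dtype=float)
    mu_hat = x.min()
    theta_hat = x.mean() - mu_hat
    return mu_hat, theta_hat

rng = np.random.default_rng(42)
mu, theta = 2.0, 3.0                       # illustrative true values
sample = mu + rng.exponential(theta, size=20000)
mu_hat, theta_hat = two_param_exponential_mle(sample)
print(mu_hat, theta_hat)
```

With 20000 draws the estimates land very close to the true (2.0, 3.0); note that mu_hat is biased upward by theta/n, which vanishes quickly with sample size.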
Estimation of Nutation Time Constant Model Parameters for On-Axis Spinning Spacecraft
Schlee, Keith; Sudermann, James
2008-01-01
Calculating an accurate nutation time constant for a spinning spacecraft is an important step for ensuring mission success. Spacecraft nutation is caused by energy dissipation about the spin axis. Propellant slosh in the spacecraft fuel tanks is the primary source for this dissipation and can be simulated using a forced motion spin table. Mechanical analogs, such as pendulums and rotors, are typically used to simulate propellant slosh. A strong desire exists for an automated method to determine these analog parameters. The method presented accomplishes this task by using a MATLAB Simulink/SimMechanics based simulation that utilizes the Parameter Estimation Tool.
Sheiner, L B; Beal, S L
1980-12-01
Individual pharmacokinetic parameters quantify the pharmacokinetics of an individual, while population pharmacokinetic parameters quantify population mean kinetics, interindividual variability, and residual intraindividual variability plus measurement error. Individual pharmacokinetics are estimated by fitting individual data to a pharmacokinetic model. Population pharmacokinetic parameters are estimated either by fitting all individuals' data together as though there were no individual kinetic differences (the naive pooled data approach), or by fitting each individual's data separately and then combining the individual parameter estimates (the two-stage approach). A third approach, NONMEM, takes a middle course between these and avoids the shortcomings of each. A data set consisting of 124 steady-state phenytoin concentration-dosage pairs from 49 patients, obtained in the routine course of their therapy, was analyzed by each method. The resulting population parameter estimates differ considerably (population mean Km, for example, is estimated as 1.57, 5.36, and 4.44 micrograms/ml by the naive pooled data, two-stage, and NONMEM approaches, respectively). Simulations of the data were analyzed to investigate these differences. The simulations indicate that the pooled data approach fails to estimate variabilities and produces imprecise estimates of mean kinetics. The two-stage approach produces good estimates of mean kinetics, but biased and imprecise estimates of interindividual variability. NONMEM produces accurate and precise estimates of all parameters, and also reasonable confidence intervals for them. This performance is exactly what is expected from theoretical considerations and provides empirical support for the use of NONMEM when estimating population pharmacokinetics from routine type patient data.
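The naive pooled and two-stage approaches contrasted above can be sketched on simulated data with log-linear elimination, log C(t) = log C0 - k_i * t, where the elimination rate k_i varies between individuals. All values here are illustrative and this is ordinary regression, not NONMEM:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(1, 10, 8)                  # sampling times (h), illustrative
n_ind = 40
k_pop, k_sd = 0.30, 0.05                   # population mean/SD of elimination rate

k_true = rng.normal(k_pop, k_sd, n_ind)
# log-concentrations: log C = log C0 - k_i * t, plus residual noise
logC = np.log(100.0) - np.outer(k_true, t) + rng.normal(0, 0.05, (n_ind, len(t)))

# naive pooled: one regression through everybody's data at once
k_pooled = -np.polyfit(np.tile(t, n_ind), logC.ravel(), 1)[0]

# two-stage: fit each individual separately, then average the estimates
k_each = np.array([-np.polyfit(t, y, 1)[0] for y in logC])
k_two_stage = k_each.mean()
inter_ind_sd = k_each.std(ddof=1)

print(k_pooled, k_two_stage, inter_ind_sd)
```

In this balanced toy design both approaches recover the mean rate, but the pooled regression yields no interindividual variability estimate at all, while the two-stage SD conflates true between-subject variability with per-individual estimation noise — the shortcomings the abstract describes.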
White, J Wilson; Nickols, Kerry J; Malone, Daniel; Carr, Mark H; Starr, Richard M; Cordoleani, Flora; Baskett, Marissa L; Hastings, Alan; Botsford, Louis W
2016-12-01
Integral projection models (IPMs) have a number of advantages over matrix-model approaches for analyzing size-structured population dynamics, because the latter require parameter estimates for each age or stage transition. However, IPMs still require appropriate data. Typically they are parameterized using individual-scale relationships between body size and demographic rates, but these are not always available. We present an alternative approach for estimating demographic parameters from time series of size-structured survey data using a Bayesian state-space IPM (SSIPM). By fitting an IPM in a state-space framework, we estimate unknown parameters and explicitly account for process and measurement error in a dataset to estimate the underlying process model dynamics. We tested our method by fitting SSIPMs to simulated data; the model fit the simulated size distributions well and estimated unknown demographic parameters accurately. We then illustrated our method using nine years of annual surveys of the density and size distribution of two fish species (blue rockfish, Sebastes mystinus, and gopher rockfish, S. carnatus) at seven kelp forest sites in California. The SSIPM produced reasonable fits to the data, and estimated fishing rates for both species that were higher than our Bayesian prior estimates based on coast-wide stock assessment estimates of harvest. That improvement reinforces the value of being able to estimate demographic parameters from local-scale monitoring data. We highlight a number of key decision points in SSIPM development (e.g., open vs. closed demography, number of particles in the state-space filter) so that users can apply the method to their own datasets. © 2016 by the Ecological Society of America.
DEFF Research Database (Denmark)
Ditlevsen, Susanne; Yip, Kay-Pong; Holstein-Rathlou, N.-H.
2005-01-01
A key parameter in the understanding of renal hemodynamics is the gain of the feedback function in the tubuloglomerular feedback mechanism. A dynamic model of autoregulation of renal blood flow and glomerular filtration rate has been extended to include a stochastic differential equations model o...
Spatial scale effects on model parameter estimation and predictive uncertainty in ungauged basins
CSIR Research Space (South Africa)
Hughes, DA
2013-06-01
Full Text Available The most appropriate scale to use for hydrological modelling depends on the structure of the chosen model, the purpose of the results and the resolution of the available data used to quantify parameter values and provide the climatic forcing data...
BAYESIAN PARAMETER ESTIMATION IN A MIXED-ORDER MODEL OF BOD DECAY. (U915590)
We describe a generalized version of the BOD decay model in which the reaction is allowed to assume an order other than one. This is accomplished by making the exponent on BOD concentration a free parameter to be determined by the data. This "mixed-order" model may be ...
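The mixed-order decay dL/dt = -k L^n has a closed-form solution for n != 1, obtained by integrating L^(1-n); a quick numerical cross-check with illustrative values (not those of the cited study):

```python
import numpy as np

def bod_remaining(t, L0, k, n):
    """Closed-form BOD remaining for mixed-order decay dL/dt = -k*L^n (n != 1):
    L(t) = (L0^(1-n) - (1-n)*k*t)^(1/(1-n))."""
    return (L0**(1 - n) - (1 - n) * k * t)**(1.0 / (1 - n))

# numerical check by explicit Euler integration of the same ODE
L0, k, n = 20.0, 0.1, 1.5          # illustrative values (n = 1.5: "mixed order")
dt, T = 0.001, 5.0
L = L0
for _ in range(int(T / dt)):
    L += -k * L**n * dt

L_exact = bod_remaining(T, L0, k, n)
print(L, L_exact)
```

Setting n = 1 (handled as a separate exponential case) recovers the classical first-order BOD model, which is what makes the exponent a useful free parameter for the data to determine.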
Markov chain Monte Carlo approach to parameter estimation in the FitzHugh-Nagumo model
DEFF Research Database (Denmark)
Jensen, Anders Christian; Ditlevsen, Susanne; Kessler, Mathieu
2012-01-01
Excitability is observed in a variety of natural systems, such as neuronal dynamics, cardiovascular tissues, or climate dynamics. The stochastic FitzHugh-Nagumo model is a prominent example representing an excitable system. To validate the practical use of a model, the first step is to estimate m...
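Before any estimation, the stochastic FitzHugh-Nagumo model must be simulated; an Euler-Maruyama sketch of one common parameterization (dv = (v - v^3/3 - w + I) dt + sigma dB, dw = eps (v + a - b w) dt), with illustrative values that are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, eps, I, sigma = 0.7, 0.8, 0.08, 0.5, 0.3   # illustrative parameters
dt, n = 0.01, 5000

v = np.empty(n)
w = np.empty(n)
v[0], w[0] = -1.0, 1.0
for k in range(n - 1):
    # Euler-Maruyama: deterministic drift step plus sqrt(dt)-scaled noise
    v[k+1] = (v[k] + (v[k] - v[k]**3 / 3 - w[k] + I) * dt
              + sigma * np.sqrt(dt) * rng.normal())
    w[k+1] = w[k] + eps * (v[k] + a - b * w[k]) * dt

print(v.min(), v.max())
```

A path simulated this way is exactly the kind of data to which an MCMC parameter-estimation scheme such as the one in the paper would be fitted.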
Stegeman, Alwin
2016-01-01
In the common factor model the observed data is conceptually split into a common covariance producing part and an uncorrelated unique part. The common factor model is fitted to the data itself and a new method is introduced for the simultaneous estimation of loadings, unique variances, factor scores
Institute of Scientific and Technical Information of China (English)
吴启光; 杨国庆
2002-01-01
In this paper, we study the existence of the uniformly minimum risk equivariant (UMRE) estimators of parameters in a class of normal linear models, which includes the normal variance components model, the growth curve model, the extended growth curve model, and the seemingly unrelated regression equations model, among others. Necessary and sufficient conditions are given for the existence of UMRE estimators of the estimable linear functions of regression coefficients, the covariance matrix V and (trV)^a, where a > 0 is known, in the models under an affine group of transformations for quadratic losses and matrix losses, respectively. Under the (extended) growth curve model and the seemingly unrelated regression equations model, the conclusions given in the literature for estimating regression coefficients can be derived by applying the general results in this paper, and the sufficient conditions for non-existence of UMRE estimators of V and tr(V) are expanded into necessary and sufficient conditions. In addition, the necessary and sufficient conditions for the existence of UMRE estimators of parameters in the variance components model are obtained for the first time.
A Novel Non-Iterative Method for Real-Time Parameter Estimation of the Fricke-Morse Model
Directory of Open Access Journals (Sweden)
SIMIC, M.
2016-11-01
Full Text Available Parameter estimation of the Fricke-Morse model of biological tissue is widely used in bioimpedance data processing and analysis. Complex nonlinear least squares (CNLS) data fitting is often used for parameter estimation of the model, but limitations such as high processing time, convergence to local minima, the need for a good initial guess of model parameters and non-convergence have been reported. Thus, there is strong motivation to develop methods which overcome these flaws. In this paper a novel real-time method for parameter estimation of the Fricke-Morse model of biological cells is presented. The proposed method uses the value of the characteristic frequency estimated from the measured imaginary part of the bioimpedance, whereupon the Fricke-Morse model parameters are calculated using the provided analytical expressions. The proposed method is compared with CNLS in the frequency ranges of 1 kHz to 10 MHz (beta-dispersion) and 10 kHz to 100 kHz, the latter being more suitable for low-cost microcontroller-based bioimpedance measurement systems. The obtained results are promising, and in both frequency ranges, CNLS and the proposed method have accuracies suitable for most electrical bioimpedance (EBI) applications. However, the proposed algorithm has significantly lower computational complexity, making it 20-80 times faster than CNLS.
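The non-iterative idea can be sketched as follows, assuming the standard Fricke-Morse circuit (extracellular resistance Re in parallel with intracellular resistance Ri in series with membrane capacitance Cm, a single Debye-type dispersion with -Im(Z) maximal at wc*tau = 1). The expressions below are ordinary circuit algebra with illustrative values, not necessarily the paper's exact formulas:

```python
import numpy as np

# Fricke-Morse model: Re in parallel with (Ri in series with Cm).
# True values below are illustrative, not taken from the paper.
Re, Ri, Cm = 1000.0, 500.0, 1.0e-9

f = np.logspace(3, 7, 2000)                      # 1 kHz .. 10 MHz
w = 2 * np.pi * f
Z = Re * (Ri + 1 / (1j * w * Cm)) / (Re + Ri + 1 / (1j * w * Cm))

# Non-iterative estimation from the spectrum:
R0_hat = Z.real[0]                               # low-frequency plateau  ~ Re
Rinf_hat = Z.real[-1]                            # high-frequency plateau ~ Re*Ri/(Re+Ri)
wc = w[np.argmax(-Z.imag)]                       # characteristic frequency: wc*tau = 1
Ri_hat = R0_hat * Rinf_hat / (R0_hat - Rinf_hat)
Cm_hat = 1 / (wc * (R0_hat + Ri_hat))            # since tau = Cm*(Re + Ri)

print(R0_hat, Ri_hat, Cm_hat)
```

Because each parameter follows from a plateau value or the peak of -Im(Z), the whole estimate costs one pass over the spectrum — the source of the speedup over iterative CNLS fitting.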
Vergara, Humberto; Kirstetter, Pierre-Emmanuel; Gourley, Jonathan J.; Flamig, Zachary L.; Hong, Yang; Arthur, Ami; Kolar, Randall
2016-10-01
This study presents a methodology for the estimation of a-priori parameters of the widely used kinematic wave approximation to the unsteady, 1-D Saint-Venant equations for hydrologic flow routing. The approach is based on a multi-dimensional statistical modeling of the macro scale spatial variability of rating curve parameters using a set of geophysical factors including geomorphology, hydro-climatology and land cover/land use over the Conterminous United States. The main goal of this study was to enable prediction at ungauged locations through regionalization of model parameters. The results highlight the importance of regional and local geophysical factors in uniquely defining characteristics of each stream reach conforming to physical theory of fluvial hydraulics. The application of the estimates is demonstrated through a hydrologic modeling evaluation of a deterministic forecasting system performed on 1672 gauged basins and 47,563 events extracted from a 10-year simulation. Considering the mean concentration time of the basins of the study and the target application on flash flood forecasting, the skill of the flow routing simulations is significantly high for peakflow and timing of peakflow estimation, and shows consistency as indicated by the large sample verification. The resulting a-priori estimates can be used in any hydrologic model that employs the kinematic wave model for flow routing. Furthermore, probabilistic estimates of kinematic wave parameters are enabled based on uncertainty information that is generated during the multi-dimensional statistical modeling. More importantly, the methodology presented in this study enables the estimation of the kinematic wave model parameters anywhere over the globe, thus allowing flood modeling in ungauged basins at regional to global scales.
Estimation of parameters in a distributed precipitation-runoff model for Norway
Directory of Open Access Journals (Sweden)
S. Beldring
2003-01-01
Full Text Available A distributed version of the HBV-model using 1 km2 grid cells and daily time step was used to simulate runoff from the entire land surface of Norway for the period 1961-1990. The model was sensitive to changes in small scale properties of the land surface and the climatic input data, through explicit representation of differences between model elements, and by implicit consideration of sub-grid variations in moisture status. A geographically transferable set of model parameters was determined by a multi-criteria calibration strategy, which simultaneously minimised the residuals between model simulated and observed runoff from 141 Norwegian catchments located in areas with different runoff regimes and landscape characteristics. Model discretisation units with identical landscape classification were assigned similar parameter values. Model performance was evaluated by simulating discharge from 43 independent catchments. Finally, a river routing procedure using a kinematic wave approximation to open channel flow was introduced in the model, and discharges from three additional catchments were calculated and compared with observations. The model was used to produce a map of average annual runoff for Norway for the period 1961-1990. Keywords: distributed model, multi-criteria calibration, global parameters, ungauged catchments.
Directory of Open Access Journals (Sweden)
T. Krauße
2012-02-01
Full Text Available The development of methods for estimating the parameters of hydrologic models under uncertainty has been of high interest in hydrologic research over the last years. In particular, methods that treat the estimation of hydrologic model parameters as a geometric search for a set of robustly performing parameter vectors, by application of the concept of data depth, have attracted growing research interest. Bárdossy and Singh (2008) presented a first Robust Parameter Estimation Method (ROPE) and applied it for the calibration of a conceptual rainfall-runoff model with daily time step. The basic idea of this algorithm is to identify a set of model parameter vectors with high model performance, called good parameters, and subsequently generate a set of parameter vectors with high data depth with respect to the first set. Both steps are repeated iteratively until a stopping criterion is met. The results estimated in this case study show the high potential of the principle of data depth for the estimation of hydrologic model parameters. In this paper we present some further developments that address the most important shortcomings of the original ROPE approach. We developed a stratified depth-based sampling approach that improves the sampling from non-elliptic and multi-modal distributions. It provides a higher efficiency for the sampling of deep points in parameter spaces of higher dimensionality. Another modification addresses the problem of too strong a shrinking of the estimated set of robust parameter vectors, which might lead to overfitting for model calibration with a small amount of calibration data. This contradicts the principle of robustness. Therefore, we suggest splitting the available calibration data into two sets and using one set to control the overfitting. All modifications were implemented into a further developed ROPE approach that is called Advanced Robust Parameter Estimation (AROPE). However, in this approach the estimation of
Parameters estimation in quantum optics
D'Ariano, G M; Sacchi, M F; Paris, Matteo G. A.; Sacchi, Massimiliano F.
2000-01-01
We address several estimation problems in quantum optics by means of the maximum-likelihood principle. We consider Gaussian state estimation and the determination of the coupling parameters of quadratic Hamiltonians. Moreover, we analyze different schemes of phase-shift estimation. Finally, the absolute estimation of the quantum efficiency of both linear and avalanche photodetectors is studied. In all the considered applications, the Gaussian bound on statistical errors is attained with a few thousand data.
Directory of Open Access Journals (Sweden)
T. Krauße
2011-03-01
Full Text Available The development of methods for estimating the parameters of hydrological models under uncertainty has been of high interest in hydrological research over the last years. In particular, methods that treat the estimation of hydrological model parameters as a geometric search for a set of robustly performing parameter vectors, by application of the concept of data depth, have attracted growing research interest. Bárdossy and Singh (2008) presented a first proposal and applied it for the calibration of a conceptual rainfall-runoff model with daily time step. Krauße and Cullmann (2011) further developed this method and applied it in a case study to calibrate a process-oriented hydrological model with hourly time step, focussing on flood events in a fast responding catchment. The results of both studies showed the potential of applying the principle of data depth. However, the weak point of the presented approach also became obvious. The algorithm identifies a set of model parameter vectors with high model performance and subsequently generates a set of parameter vectors with high data depth with respect to the first set. Both steps are repeated iteratively until a stopping criterion is met. In the first step the estimation of the good parameter vectors is based on the Monte Carlo method. The major shortcoming of this method is that it strongly depends on a number of samples that grows exponentially with the dimensionality of the problem. In this paper we present another robust parameter estimation strategy which applies a proven search strategy for high-dimensional parameter spaces, particle swarm optimisation, in order to identify a set of good parameter vectors with given uncertainty bounds. The generation of deep parameters follows Krauße and Cullmann (2011). The method was compared to the Monte Carlo based robust parameter estimation algorithm on the example of a case study in Krauße and Cullmann (2011) to
Revisiting a model of ontogenetic growth: estimating model parameters from theory and data.
Moses, Melanie E; Hou, Chen; Woodruff, William H; West, Geoffrey B; Nekola, Jeffery C; Zuo, Wenyun; Brown, James H
2008-05-01
The ontogenetic growth model (OGM) of West et al. provides a general description of how metabolic energy is allocated between production of new biomass and maintenance of existing biomass during ontogeny. Here, we reexamine the OGM, make some minor modifications and corrections, and further evaluate its ability to account for empirical variation in rates of metabolism and biomass in vertebrates both during ontogeny and across species of varying adult body size. We show that the updated version of the model is internally consistent and is consistent with other predictions of metabolic scaling theory and empirical data. The OGM predicts not only the near universal sigmoidal form of growth curves but also the M^(1/4) scaling of the characteristic times of ontogenetic stages, in addition to the curvilinear decline in growth efficiency described by Brody. Additionally, the OGM relates the M^(3/4) scaling across adults of different species to the scaling of metabolic rate across ontogeny within species. In providing a simple, quantitative description of how energy is allocated to growth, the OGM calls attention to unexplained variation, unanswered questions, and opportunities for future research.
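The core OGM equation, dm/dt = a m^(3/4) - b m with b = a / M^(1/4) (M the asymptotic adult mass), has a closed-form sigmoidal solution via the substitution u = (m/M)^(1/4). A numerical cross-check with illustrative parameter values (not fitted to any species):

```python
import numpy as np

# West et al. ontogenetic growth model: dm/dt = a*m^(3/4) - b*m,
# with b = a / M^(1/4). Values below are illustrative only.
a, M, m0 = 1.0, 10000.0, 50.0
b = a / M**0.25

def m_exact(t):
    """Closed form via u = (m/M)^(1/4):
    u(t) = 1 - (1 - u0) * exp(-a*t / (4*M^(1/4)))."""
    u0 = (m0 / M)**0.25
    u = 1 - (1 - u0) * np.exp(-a * t / (4 * M**0.25))
    return M * u**4

# numerical check with explicit Euler integration
dt, T = 0.001, 100.0
m = m0
for _ in range(int(T / dt)):
    m += (a * m**0.75 - b * m) * dt

print(m, m_exact(T))
```

The linear equation for u is what produces both the universal sigmoidal growth curve and the M^(1/4) scaling of characteristic ontogenetic times mentioned in the abstract.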
Directory of Open Access Journals (Sweden)
Schook Lawrence B
2000-07-01
Full Text Available Abstract A strategy of multi-step minimal conditional regression analysis has been developed to establish statistical testing and parameter estimation for a quantitative trait locus (QTL) that are unaffected by linked QTLs. The estimation of marker-QTL recombination frequency needs to consider only three cases: (1) the chromosome has only one QTL, (2) one side of the target QTL has one or more QTLs, and (3) either side of the target QTL has one or more QTLs. An analytical formula was derived to estimate marker-QTL recombination frequency for each of the three cases. The formula involves two flanking markers for case (1), two flanking markers plus a conditional marker for case (2), and two flanking markers plus two conditional markers for case (3). Each QTL variance and effect, and the total QTL variance, were also estimated using analytical formulae. Simulation data show that the formulae for estimating marker-QTL recombination frequency could be a useful statistical tool for fine QTL mapping. With 1 000 observations, a QTL could be mapped to a narrow chromosome region of 1.5 cM if no linked QTL is present, and to a 2.8 cM chromosome region if either side of the target QTL has at least one linked QTL.
Directory of Open Access Journals (Sweden)
Eaglen Sophie A E
2012-07-01
Full Text Available Abstract Background The focus in dairy cattle breeding is gradually shifting from production to functional traits, and genetic parameters of calving traits are estimated more frequently. However, across countries, various statistical models are used to estimate these parameters. This study evaluates different models for calving ease and stillbirth in United Kingdom Holstein-Friesian cattle. Methods Data from first and later parity records were used. Genetic parameters for calving ease, stillbirth and gestation length were estimated using the restricted maximum likelihood method, considering different models, i.e. sire (-maternal grandsire), animal, univariate and bivariate models. Gestation length was fitted as a correlated indicator trait and, for all three traits, genetic correlations between first and later parities were estimated. Potential bias in estimates was avoided by acknowledging a possible environmental direct-maternal covariance. The total heritable variance was estimated for each trait to discuss its theoretical importance and practical value. Prediction error variances and accuracies were calculated to compare the models. Results and discussion On average, direct and maternal heritabilities for calving traits were low, except for direct gestation length. Calving ease in first parity had a significant and negative direct-maternal genetic correlation. Gestation length was maternally correlated to stillbirth in first parity and directly correlated to calving ease in later parities. Multi-trait models had a slightly greater predictive ability than univariate models, especially for the lowly heritable traits. The computation time needed for sire (-maternal grandsire) models was much smaller than for animal models, with only small differences in accuracy. The sire (-maternal grandsire) model was robust when additional genetic components were estimated, while the equivalent animal model had difficulties reaching convergence. Conclusions
Gelleszun, Marlene; Kreye, Phillip; Meon, Günter
2017-10-01
We introduce a lexicographic calibration strategy developed to address the imbalance between sophisticated hydrological models and the complex optimisation algorithms used to calibrate them. The criteria for the evaluation of the approach were (i) robustness and transferability of the resulting parameters, (ii) goodness-of-fit criteria in calibration and validation and (iii) time-efficiency. An order of preference was determined prior to the calibration, and the parameters were separated into groups for a stepwise calibration to reduce the search space. A comparison with the global optimisation method SCE-UA showed that only 6% of the calculation time was needed; the conditions of total volume, seasonality and shape of the hydrograph were successfully achieved for the calibration and cross-validation periods. Furthermore, the parameter sets obtained by the lexicographic calibration strategy for different time periods were much more similar to each other than the parameters obtained by SCE-UA. Besides the similarities of the parameter sets, the goodness-of-fit criteria for the cross-validation were better for the lexicographic approach, and the water balance components were also more similar. Thus, we concluded that the resulting parameters were more representative of the corresponding catchments and therefore more suitable for transfer. Time-efficient approximate methods were used to account for parameter uncertainty, confidence intervals and the stability of the solution in the optimum.
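The lexicographic principle — rank objectives by preference, then let each later objective choose only among parameter sets that remain near-optimal for all earlier ones — can be sketched with a toy grid search. The objectives and tolerances below are illustrative stand-ins, not those of the cited study:

```python
import numpy as np

def lexicographic_search(candidates, objectives, tolerances):
    """Filter candidates by each objective in preference order, keeping
    only those within `tol` of the best value so far; finally pick the
    survivor that minimises the last objective."""
    pool = list(candidates)
    for obj, tol in zip(objectives, tolerances):
        vals = [obj(p) for p in pool]
        best = min(vals)
        pool = [p for p, v in zip(pool, vals) if v <= best + tol]
    return min(pool, key=objectives[-1])

grid = [(x, y) for x in np.linspace(-2, 2, 81) for y in np.linspace(-2, 2, 81)]
f1 = lambda p: abs(p[0] - 1.0)     # first preference (e.g. total volume error)
f2 = lambda p: abs(p[1] + 0.5)     # second preference (e.g. hydrograph shape error)
best = lexicographic_search(grid, [f1, f2], [0.05, 0.05])
print(best)
```

Each filtering step shrinks the search space before the next objective is considered, which is what makes the stepwise calibration so much cheaper than a simultaneous multi-objective search.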
Bosquet, Laurent; Duchene, Antoine; Lecot, François; Dupont, Grégory; Leger, Luc
2006-05-01
The purpose of this study was to evaluate the validity of maximal velocity (Vmax) estimated from three-parameter systems models, and to compare the predictive value of two- and three-parameter models for the 800 m. Seventeen trained male subjects (VO2max = 66.54 +/- 7.29 ml min^(-1) kg^(-1)) performed five randomly ordered constant velocity tests (CVT), a maximal velocity test (mean velocity over the last 10 m portion of a 40 m sprint) and an 800 m time trial (V800m). Five systems models (two three-parameter and three two-parameter) were used to compute Vmax (three-parameter models), critical velocity (CV), anaerobic running capacity (ARC) and V800m from times to exhaustion during CVT. Vmax estimates were significantly lower than … (0.19…). Critical velocity (CV) alone explained 40-62% of the variance in V800m. Combining CV with other parameters of each model to produce a calculated V800m resulted in a clear improvement of this relationship (0.83…).
Cosmological parameter estimation using Particle Swarm Optimization
Prasad, J.; Souradeep, T.
2014-03-01
Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models, which demand greater number of cosmological parameters than the standard model of cosmology uses, and make the problem of parameter estimation challenging. It is a common practice to employ Bayesian formalism for parameter estimation for which, in general, likelihood surface is probed. For the standard cosmological model with six parameters, likelihood surface is quite smooth and does not have local maxima, and sampling based methods like Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we have demonstrated application of another method inspired from artificial intelligence, called Particle Swarm Optimization (PSO) for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
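The PSO algorithm referred to above is straightforward to implement; a minimal global-best PSO on a 2-D test function, with common textbook hyperparameters rather than those of the cited cosmological application:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimiser.
    w: inertia weight; c1/c2: cognitive/social acceleration coefficients."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

sphere = lambda p: float(np.sum(p**2))
best, best_val = pso(sphere, [(-5, 5), (-5, 5)])
print(best, best_val)
```

In a parameter-estimation setting, f would be the negative log-likelihood of the data given the model parameters; note that PSO returns a point estimate, not samples of the posterior as MCMC does.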
El Gharamti, Mohamad
2015-11-26
The ensemble Kalman filter (EnKF) recursively integrates field data into simulation models to obtain a better characterization of the model’s state and parameters. These are generally estimated following a state-parameters joint augmentation strategy. In this study, we introduce a new smoothing-based joint EnKF scheme, in which we introduce a one-step-ahead smoothing of the state before updating the parameters. Numerical experiments are performed with a two-dimensional synthetic subsurface contaminant transport model. The improved performance of the proposed joint EnKF scheme compared to the standard joint EnKF compensates for the modest increase in the computational cost.
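The standard joint-augmentation baseline that the paper improves on can be sketched on a scalar toy problem: the state is augmented with the unknown parameter, and both are updated from observations of the state alone. All numbers here are illustrative:

```python
import numpy as np

# Toy joint (state-parameter) EnKF: dynamics x_k = a*x_{k-1} + q with
# unknown parameter a, estimated via the augmented state z = [x, a].
rng = np.random.default_rng(3)
a_true, q_sd, r_sd = 0.9, 0.05, 0.05
n_ens, n_steps = 100, 60

x_true = 5.0
xe = np.full(n_ens, 5.0) + rng.normal(0, 0.1, n_ens)   # state ensemble
ae = rng.normal(0.5, 0.3, n_ens)                       # parameter ensemble (poor prior)

for _ in range(n_steps):
    # truth and observation
    x_true = a_true * x_true + rng.normal(0, q_sd)
    y = x_true + rng.normal(0, r_sd)
    # forecast step (parameter is persistent)
    xe = ae * xe + rng.normal(0, q_sd, n_ens)
    # EnKF update of the augmented state [x, a] from the observation of x
    z = np.vstack([xe, ae])
    dz = z - z.mean(axis=1, keepdims=True)
    cov_zx = dz @ dz[0] / (n_ens - 1)                  # cov of [x, a] with x
    gain = cov_zx / (cov_zx[0] + r_sd**2)
    innov = y + rng.normal(0, r_sd, n_ens) - xe        # perturbed observations
    z = z + gain[:, None] * innov
    xe, ae = z[0], z[1]

print(ae.mean())
```

The parameter is corrected only through its ensemble correlation with the forecast state — exactly the coupling the proposed one-step-ahead smoothing scheme strengthens before the parameter update.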
DEFF Research Database (Denmark)
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes...
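The regression principle behind such identification — expressing the present output in terms of past values and solving by least squares — can be shown in its simplest linear form. This sketch fits a plain AR(2) model; the cited algorithm goes much further (Laguerre expansions, nonlinear ARMA terms), so this illustrates only the core idea:

```python
import numpy as np

rng = np.random.default_rng(7)
a1, a2 = 0.6, -0.3                       # true AR coefficients (illustrative)
n = 5000
e = rng.normal(0, 1.0, n)                # driving white noise
y = np.zeros(n)
for k in range(2, n):
    y[k] = a1 * y[k-1] + a2 * y[k-2] + e[k]

# regression matrix of past outputs -> ordinary least squares
X = np.column_stack([y[1:-1], y[:-2]])   # [y(k-1), y(k-2)] for k = 2..n-1
theta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print(theta)
```

Adding columns of past inputs (and, for the nonlinear case, products of Laguerre-filtered signals) to X turns the same least-squares step into ARMA and nonlinear ARMA identification.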
Estimation parameters and black box model of a brushless DC motor
Directory of Open Access Journals (Sweden)
José A. Becerra-Vargas
2014-08-01
Full Text Available The modeling of a process or a plant is vital for the design of its control system, since it allows predicting its dynamics and behavior under different circumstances, inputs, disturbances and noise. The main objective of this work is to identify which model is best for a specific permanent-magnet brushless DC motor. For this, the mathematical model of a brushless DC motor, PW16D, manufactured by Golden Motor, is presented and compared with its black box model; both are derived from experimental data. These data, the average applied voltage and the angular velocity, are acquired by a data acquisition card and imported to the computer. The constants of the mathematical model are estimated by curve fitting algorithms based on non-linear least squares and pattern search. Estimating the mathematical model constants by non-linear least squares and pattern search yielded goodness-of-fit values of 84.88% and 80.48%, respectively. The goodness of fit obtained by the black box model was 87.72%. The mathematical model presented slightly lower goodness of fit, but allowed analysis of the behavior of variables of interest such as the power consumption and the torque applied to the motor. Because of this, it is concluded that the mathematical model obtained from experimental data of the brushless motor PW16D is better than its black box model.
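The pattern-search fitting idea can be sketched on a first-order approximation of the motor's speed response to a voltage step, w(t) = K * (1 - exp(-t/tau)). The model form, parameter values and compass-search details below are illustrative assumptions, not the authors' toolchain:

```python
import numpy as np

K_true, tau_true = 150.0, 0.4            # illustrative gain (rad/s) and time constant (s)
t = np.linspace(0, 2, 100)
w_meas = K_true * (1 - np.exp(-t / tau_true))   # noise-free "measured" speed

def sse(p):
    """Sum of squared errors between model and measured speed."""
    K, tau = p
    return float(np.sum((K * (1 - np.exp(-t / tau)) - w_meas)**2))

# tiny compass/pattern search: poll +/- steps on each coordinate,
# halve the step sizes when no move improves the fit
p = np.array([100.0, 1.0])               # initial guess
step = np.array([10.0, 0.1])
while step.max() > 1e-6:
    improved = False
    for i in range(2):
        for s in (+1, -1):
            trial = p.copy()
            trial[i] += s * step[i]
            if trial[i] > 0 and sse(trial) < sse(p):
                p, improved = trial, True
    if not improved:
        step /= 2
print(p)
```

Pattern search needs no gradients, which makes it robust for fitting models where the residual surface is awkward, at the cost of more function evaluations than Newton-type least-squares solvers.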
Tillaart, van den S.P.M.; Booij, M.J.; Krol, M.S
2013-01-01
Uncertainties in discharge determination may have serious consequences for hydrological modelling and resulting discharge predictions used for flood forecasting, climate change impact assessment and reservoir operation. The aim of this study is to quantify the effect of discharge errors on parameter
DEFF Research Database (Denmark)
Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan
2015-01-01
Flammability data is needed to assess the risk of fire and explosions. This study presents a new group contribution (GC) model to predict the upper flammability limit (UFL) of organic chemicals. Furthermore, it provides a systematic method for outlier treatment in order to improve the parameter...
Parameter Estimation of Inverter and Motor Model at Standstill using Measured Currents Only
DEFF Research Database (Denmark)
Rasmussen, Henrik; Knudsen, Morten; Tønnes, M.
1996-01-01
Methods for estimation of the parameters in the electrical equivalent diagram for the induction motor, based on specially designed experiments, are given. In all experiments two of the three phases are given the same potential, i.e., no net torque is generated and the motor is at standstill. Input to the system is the reference values for the stator voltages given as duty cycles for the Pulse Width Modulated power device. The system output is the measured stator currents. Three experiments are described, giving respectively 1) the stator resistance and inverter parameters, 2) the stator transient inductance and 3) the referred rotor resistance and magnetizing inductance. The method developed in the two last experiments is independent of the inverter nonlinearity. New methods for system identification concerning saturation of the magnetic flux are given and a reference value for the flux level...
Estimation of parameters in a distributed precipitation-runoff model for Norway
Beldring, Stein; Engeland, Kolbjørn; Roald, Lars A.; Roar Sælthun, Nils; Voksø, Astrid
A distributed version of the HBV-model using 1 km2 grid cells and daily time step was used to simulate runoff from the entire land surface of Norway for the period 1961-1990. The model was sensitive to changes in small scale properties of the land surface and the climatic input data, through explicit representation of differences between model elements, and by implicit consideration of sub-grid variations in moisture status. A geographically transferable set of model parameters was determined by a multi-criteria calibration strategy, which simultaneously minimised the residuals between model simulated and observed runoff from 141 Norwegian catchments located in areas with different runoff regimes and landscape characteristics. Model discretisation units with identical landscape classification were assigned similar parameter values. Model performance was evaluated by simulating discharge from 43 independent catchments. Finally, a river routing procedure using a kinematic wave approximation to open channel flow was introduced in the model, and discharges from three additional catchments were calculated and compared with observations. The model was used to produce a map of average annual runoff for Norway for the period 1961-1990.
Institute of Scientific and Technical Information of China (English)
WANG Li-feng(王丽凤); MA Li(马丽); David Vere-Jones; CHEN Shi-jun(陈时军)
2004-01-01
Based on the stochastic AMR model, this paper constructs man-made earthquake catalogues to investigate the property of parameter estimation of the model. Then the stochastic AMR model is applied to the study of several strong earthquakes in China and New Zealand. Akaike's AIC criterion is used to discriminate whether an accelerating mode of earthquake activity precedes those events or not. Finally, regional accelerating seismic activity and possible prediction approaches for future strong earthquakes are discussed.
Cho, C I; Alam, M; Choi, T J; Choy, Y H; Choi, J G; Lee, S S; Cho, K H
2016-05-01
The objectives of the study were to estimate genetic parameters for milk production traits of Holstein cattle using random regression models (RRMs), and to compare the goodness of fit of various RRMs with homogeneous and heterogeneous residual variances. A total of 126,980 test-day milk production records of first-parity Holstein cows between 2007 and 2014 from the Dairy Cattle Improvement Center of the National Agricultural Cooperative Federation in South Korea were used. These records included milk yield (MILK), fat yield (FAT), protein yield (PROT), and solids-not-fat yield (SNF). The statistical models included random effects of genetic and permanent environments using Legendre polynomials (LP) of the third to fifth order (L3-L5), fixed effects of herd-test day, year-season at calving, and a fixed regression for the test-day record (third to fifth order). The residual variances in the models were either homogeneous (HOM) or heterogeneous (15 classes, HET15; 60 classes, HET60). A total of nine models (3 orders of polynomials × 3 types of residual variance), namely L3-HOM, L3-HET15, L3-HET60, L4-HOM, L4-HET15, L4-HET60, L5-HOM, L5-HET15, and L5-HET60, were compared using Akaike information criteria (AIC) and/or Schwarz Bayesian information criteria (BIC) statistics to identify the model(s) of best fit for their respective traits. The lowest BIC value was observed for the models L5-HET15 (MILK; PROT; SNF) and L4-HET15 (FAT), which fit the best. In general, the BIC values of HET15 models for a particular polynomial order were lower than those of the HET60 models in most cases. This implies that the orders of LP and types of residual variances affect the goodness of fit of the models. Also, the heterogeneity of residual variances should be considered for the test-day analysis. The heritability estimates from the best fitted models ranged from 0.08 to 0.15 for MILK, 0.06 to 0.14 for FAT, 0.08 to 0.12 for PROT, and 0.07 to 0.13 for SNF according to days in milk of first
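The Legendre polynomial covariates used in such random regression test-day models can be generated directly from days in milk; a minimal sketch (the 5-305 day lactation range is an assumption, not taken from the study):

```python
import numpy as np

def legendre_covariates(dim, order, dim_min=5, dim_max=305):
    # Map days in milk (DIM) onto [-1, 1], the natural domain of Legendre
    # polynomials, then evaluate polynomials of degree 0..order at each DIM.
    x = 2.0 * (np.asarray(dim, dtype=float) - dim_min) / (dim_max - dim_min) - 1.0
    return np.polynomial.legendre.legvander(x, order)

# Third-order (L3) covariates for three test days across the lactation.
Z = legendre_covariates([5, 155, 305], order=3)   # shape (3, 4)
```

Each row of `Z` is the vector of regression covariates for one test-day record; higher orders (L4, L5) simply add columns.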
Estimating gravity wave parameters from oblique high-frequency backscatter: Modeling and analysis
Bristow, W. A.; Greenwald, R. A.
1995-01-01
A new technique for estimating electron density perturbation amplitudes of traveling ionospheric disturbances (TIDs), using HF radar data, is presented. TIDs are observed in HF radar data as enhancements of the ground-scattered power which propagate through the radar's field of view. These TIDs are the ionospheric manifestation of atmospheric acoustic-gravity waves. TID electron density perturbation amplitudes were estimated by simulating the radar returns, using HF ray tracing through a model ionosphere perturbed by a model gravity wave. The simulation determined the return power in the ground-scattered portion of the signal as a function of range, and this was compared to HF radar data from the Goose Bay HF radar at a time when evidence of gravity waves was present in the data. By varying the amplitude of the electron density perturbation in the model it was possible to estimate the perturbation of the actual wave. It was found that the perturbations that are observed by the Goose Bay HF radar are of the order of 20% to 35%. It was also found that the number of observable power enhancements, and the relative amplitudes of these enhancements, depended on the vertical thickness of the gravity wave's source region. From the simulations and observations it was estimated that the source region for the case presented here was approximately 20 km thick. In addition, the energy in the wave packet was calculated and compared to an estimate of the available energy in the source region. It was found that the wave energy was about 0.2% of the estimated available source region energy.
METHODOLOGY FOR THE ESTIMATION OF PARAMETERS, OF THE MODIFIED BOUC-WEN MODEL
Directory of Open Access Journals (Sweden)
Tomasz HANISZEWSKI
2015-03-01
Full Text Available The Bouc-Wen model is a theoretical formulation that can reproduce the real hysteresis loop of a modeled object. One such object is a wire rope, as found in the lifting mechanism of a crane. The adopted modified version of the model has nine parameters, and determining such a number of parameters is a complex and problematic issue. This article presents the identification methodology and sample results of numerical simulations. The results were compared with data obtained from laboratory tests of ropes [3]; on this basis it was found that the results agree and that the model can be applied to dynamic systems containing wire ropes in their structures [4].
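For reference, the hysteretic variable of a basic (unmodified) Bouc-Wen element can be integrated numerically as below. The parameter values are illustrative defaults, not the nine identified parameters of the article's modified model.

```python
import numpy as np

def bouc_wen_force(x, dt, k=1.0, alpha=0.5, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Restoring force of a basic Bouc-Wen element for a displacement history x."""
    z = np.zeros_like(x)
    xdot = np.gradient(x, dt)
    for i in range(len(x) - 1):
        # Evolution of the hysteretic variable z (explicit Euler step).
        dz = (A * xdot[i]
              - beta * abs(xdot[i]) * abs(z[i]) ** (n - 1) * z[i]
              - gamma * xdot[i] * abs(z[i]) ** n)
        z[i + 1] = z[i] + dz * dt
    # Total force: elastic part plus hysteretic part.
    return alpha * k * x + (1.0 - alpha) * k * z

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
x = np.sin(2.0 * np.pi * t)          # harmonic displacement input
F = bouc_wen_force(x, dt)

# Energy dissipated over the last full cycle (area of the hysteresis loop).
last = t >= 9.0
E = float(np.sum(F[last][:-1] * np.diff(x[last])))
```

Plotting `F` against `x` over the final cycle traces the hysteresis loop whose enclosed area `E` is the energy dissipated per cycle.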
Torki-Harchegani, Mehdi; Ghanbarian, Davoud; Sadeghi, Morteza
2015-08-01
To design new dryers or improve existing drying equipment, accurate values of mass transfer parameters are of great importance. In this study, an experimental and theoretical investigation of drying whole lemons was carried out. The whole lemons were dried in a convective hot air dryer at different air temperatures (50, 60 and 75 °C) and a constant air velocity (1 m s-1). In the theoretical consideration, three moisture transfer models, including the Dincer and Dost model, the Bi-G correlation approach and the conventional solution of Fick's second law of diffusion, were used to determine moisture transfer parameters and predict dimensionless moisture content curves. The predicted results were then compared with the experimental data, and the highest degree of prediction accuracy was achieved by the Dincer and Dost model.
Lithium-ion Battery Electrothermal Model, Parameter Estimation, and Simulation Environment
Directory of Open Access Journals (Sweden)
Simone Orcioni
2017-03-01
Full Text Available The market for lithium-ion batteries is growing exponentially. The performance of battery cells is growing due to improving production technology, but market request is growing even more rapidly. Modeling and characterization of single cells and an efficient simulation environment is fundamental for the development of an efficient battery management system. The present work is devoted to defining a novel lumped electrothermal circuit of a single battery cell, the extraction procedure of the parameters of the single cell from experiments, and a simulation environment in SystemC-WMS for the simulation of a battery pack. The electrothermal model of the cell was validated against experimental measurements obtained in a climatic chamber. The model is then used to simulate a 48-cell battery, allowing statistical variations among parameters. The different behaviors of the cells in terms of state of charge, current, voltage, or heat flow rate can be observed in the results of the simulation environment.
Shan, Bonan; Wang, Jiang; Deng, Bin; Wei, Xile; Yu, Haitao; Zhang, Zhen; Li, Huiyan
2016-07-01
This paper proposes an epilepsy detection and closed-loop control strategy based on the Particle Swarm Optimization (PSO) algorithm. The proposed strategy can effectively suppress the epileptic spikes in neural mass models, where the epileptiform spikes are recognized as the biomarkers of transitions from the normal (interictal) activity to the seizure (ictal) activity. In addition, the PSO algorithm shows capabilities of accurate estimation for the time evolution of key model parameters and practical detection for all the epileptic spikes. The estimation of unmeasurable parameters is improved significantly compared with the unscented Kalman filter. When the estimated excitatory-inhibitory ratio exceeds a threshold value, the epileptiform spikes can be inhibited immediately by adopting a proportional-integral controller. Furthermore, numerical simulations are carried out to illustrate the effectiveness of the proposed method as well as its potential value for model-based early seizure detection and closed-loop control treatment design.
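A minimal PSO parameter-estimation loop of the kind described might look as follows. Here, purely for illustration, two parameters of an assumed exponential-decay model are fitted rather than neural mass model parameters; all settings are generic defaults.

```python
import numpy as np

def pso_fit(loss, bounds, n_particles=30, iters=150, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global minimization of `loss` over the box `bounds` with basic PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, *x.shape))
        # Velocity update: inertia + cognitive pull + social pull.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)               # keep particles inside the box
        f = np.array([loss(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

# Synthetic data from an assumed model y = a * exp(-b * t), true (a, b) = (2, 0.5).
t = np.linspace(0.0, 5.0, 100)
y = 2.0 * np.exp(-0.5 * t)
sse = lambda p: float(np.sum((y - p[0] * np.exp(-p[1] * t)) ** 2))
a_hat, b_hat = pso_fit(sse, bounds=[(0.0, 5.0), (0.0, 5.0)])
```

In the paper's setting the loss would instead compare the neural mass model output against recorded activity, with the swarm tracking the time evolution of the parameters.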
Rafal Podlaski; Francis A. Roesch
2013-01-01
Study assessed the usefulness of various methods for choosing the initial values for the numerical procedures for estimating the parameters of mixture distributions and analysed variety of mixture models to approximate empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...
Paek, Insu; Cai, Li
2014-01-01
The present study was motivated by the recognition that standard errors (SEs) of item response theory (IRT) model parameters are often of immediate interest to practitioners and that there is currently a lack of comparative research on different SE (or error variance-covariance matrix) estimation procedures. The present study investigated item…
Pan, Xinpeng; Zhang, Guangzhi; Yin, Xingyao
2017-10-01
Estimation of effective geostress parameters is fundamental to the trajectory design and hydraulic fracturing in shale-gas reservoirs. Considering the shale characteristics of excellent stratification, well-developed cracks or fractures and small-scale pores, an effective or suitable shale anisotropic rock-physics model contributes to achieving the accurate prediction of effective geostress parameters in shale-gas reservoirs. In this paper, we first built a shale anisotropic rock-physics model with orthorhombic symmetry, which helps to calculate the anisotropic and geomechanical parameters under the orthorhombic assumption. Then, we introduced an anisotropic stress model with orthorhombic symmetry compared with an isotropic stress model and a transversely isotropic stress model. Combining the effective estimation of the pore pressure and the vertical stress parameters, we finally obtained the effective geostress parameters including the minimum and maximum horizontal stress parameters, providing a useful guide for the exploration and development in shale-gas reservoirs. Of course, ultimately the optimal choice of the hydraulic-fracturing area may also take into consideration other multi-factors such as the rock brittleness, cracks or fractures, and hydrocarbon distribution.
Yu, Tung Fai; Wilson, Adrian J
2014-05-01
In this paper we present an experimental method of parameterising the passive mechanical characteristics of the biceps and triceps muscles in vivo, by fitting the dynamics of a two-muscle arm model incorporating anatomically meaningful and structurally identifiable modified Hill muscle models to measured elbow movements. Measurements of the passive flexion and extension of the elbow joint were obtained using 3D motion capture, from which the elbow angle trajectories were determined and used to obtain the spring constants and damping coefficients in the model through parameter estimation. Four healthy subjects were used in the experiments. Anatomical lengths and moment of inertia values of the subjects were determined by direct measurement and calculation. There was good reproducibility in the measured arm movement between trials, and similar joint angle trajectory characteristics were seen between subjects. Each subject had their own set of fitted parameter values determined, and the results showed good agreement between measured and simulated data. The average fitted muscle parallel spring constant across all subjects was 143 N/m and the average fitted muscle parallel damping constant was 1.73 Ns/m. The passive movement method proved successful, and can be applied to other joints in the human body, where muscles with similar actions are grouped together. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Lehto, N J; Sochaczewski, L; Davison, W; Tych, W; Zhang, H
2008-03-01
Diffusive gradients in thin films (DGT) is a dynamic, in situ measuring technique that can be used to supply diverse information on concentrations and behaviour of solutes. When deployed in soils and sediments, quantitative interpretation of DGT measurements requires the use of a numerical model. An improved version of the DGT induced fluxes in soils and sediments model (DIFS), working in two dimensions (2D DIFS), was used to investigate the accuracy with which DGT measurements can be used to estimate the distribution coefficient for labile metal (KD) and the response time of the soil to depletion (TC). The 2D DIFS model was used to obtain values of KD and TC for Cd, Zn and Ni in three different soils, which were compared to values determined previously using 1D DIFS for these cases. While the 1D model was shown to provide reasonable estimates of KD, the 2D model refined the estimates of the kinetic parameters. Desorption rate constants were shown to be similar for all three metals and lower than previously thought. Calculation of an error function as KD and TC are systematically varied showed the spread of KD and TC values that fit the experimental data equally well. These automatically generated error maps reflected the quality of the data and provided an appraisal of the accuracy of parameter estimation. They showed that in some cases parameter accuracy could be improved by fitting the model to a sub-set of data.
Ottesen, Johnny T; Mehlsen, Jesper; Olufsen, Mette S
2014-11-01
We consider the inverse and patient-specific problem of short-term (seconds to minutes) heart rate regulation specified by a system of nonlinear ODEs and corresponding data. We show how a recent method termed the structural correlation method (SCM) can be used for model reduction and for obtaining a set of practically identifiable parameters. The structural correlation method includes two steps: sensitivity and correlation analysis. When combined with an optimization step, it is possible to estimate model parameters, enabling the model to fit dynamics observed in data. This method is illustrated in detail on a model predicting baroreflex regulation of heart rate and applied to analysis of data from a rat and healthy humans. Numerous mathematical models have been proposed for prediction of baroreflex regulation of heart rate, yet most of these have been designed to provide qualitative predictions of the phenomena, though some recent models have been developed to fit observed data. In this study we show that the model put forward by Bugenhagen et al. can be simplified without loss of its ability to predict measured data and to be interpreted physiologically. Moreover, we show that with minimal changes in nominal parameter values the simplified model can be adapted to predict observations from both rats and humans. The use of these methods makes the model suitable for estimation of parameters from individuals, allowing it to be adopted for diagnostic procedures.
Directory of Open Access Journals (Sweden)
Lucia Švábová
2015-09-01
Full Text Available Financial derivatives are a widely used tool for investors to hedge against the risk caused by changes in asset prices in the financial markets. A usual type of hedging derivative is an asset option. In case of unexpected changes in asset prices in the investment portfolio, the investor will exercise the option to eliminate losses resulting from these changes. Therefore, it is necessary to include options in the investor's portfolio in such a ratio that the losses caused by decreases in asset prices are covered by profits from those options. A futures option is a type of call or put option to buy or to sell an option contract at a designated strike price. A change in the price of the underlying assets or underlying futures contract causes a change in the prices of the options themselves. For an investor exercising options as a tool for risk insurance, it is important to quantify these changes. The dependence of option price changes on changes in the price of the underlying asset or futures can be expressed by the parameter delta. The value of delta determines the composition of a risk-neutral portfolio. The parameter delta is calculated as the derivative of the option price with respect to the price of the underlying asset, if a formula for the option price exists. For some types of more complex options, however, no analytical formula exists, so calculation of delta by differentiation is not possible. It is nevertheless possible to estimate the value of delta numerically using the principles of the numerical method called the "Finite Difference Method." In the paper, the parameter delta for a futures call option calculated from the analytical formula is compared with its estimate obtained by the finite difference method.
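The central-difference estimate of delta can be sketched for a plain European call under Black-Scholes assumptions, a simpler case than the futures options discussed, purely for illustration; all numeric inputs are assumed values.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

S, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2

# Analytical delta: the derivative of the call price with respect to S is N(d1).
d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
delta_exact = norm_cdf(d1)

# Central finite-difference estimate of delta.
h = 0.01
delta_fd = (bs_call(S + h, K, T, r, sigma) - bs_call(S - h, K, T, r, sigma)) / (2.0 * h)
```

When no pricing formula exists, the same central-difference scheme is applied to prices obtained numerically (e.g. from a lattice or Monte Carlo), which is the situation the paper addresses.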
Parameter Estimation in Multivariate Gamma Distribution
Directory of Open Access Journals (Sweden)
V S Vaidyanathan
2015-05-01
Full Text Available Multivariate gamma distribution finds abundant applications in stochastic modelling, hydrology and reliability. Parameter estimation in this distribution is a challenging one as it involves many parameters to be estimated simultaneously. In this paper, the form of multivariate gamma distribution proposed by Mathai and Moschopoulos [10] is considered. This form has nice properties in terms of marginal and conditional densities. A new method of estimation based on optimal search is proposed for estimating the parameters using the marginal distributions and the concepts of maximum likelihood, spacings and least squares. The proposed methodology is easy to implement and is free from calculus. It optimizes the objective function by searching over a wide range of values and determines the estimate of the parameters. The consistency of the estimates is demonstrated in terms of mean, standard deviation and mean square error through simulation studies for different choices of parameters.
ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS
Directory of Open Access Journals (Sweden)
Muhammad Zahid Rashid
2011-04-01
Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), the relative least squares method (RELS), the ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values for the parameters and different sample sizes.
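Two of the estimators compared, maximum likelihood and moments, have closed forms for the two-parameter exponential and can be checked in a few lines. This is a sketch with simulated data and assumed true parameters, not the study's simulation design.

```python
import numpy as np

def mle_two_param_exp(x):
    # MLE: location = sample minimum, scale = mean excess over the minimum.
    mu = x.min()
    return mu, x.mean() - mu

def moments_two_param_exp(x):
    # Moments: the standard deviation estimates the scale, mean - sd the location.
    sigma = x.std(ddof=1)
    return x.mean() - sigma, sigma

rng = np.random.default_rng(0)
mu_true, sigma_true = 2.0, 3.0
x = mu_true + rng.exponential(sigma_true, size=10_000)

mu_mle, sig_mle = mle_two_param_exp(x)
mu_me, sig_me = moments_two_param_exp(x)
```

Repeating the draw many times and averaging squared errors reproduces the kind of MSE comparison the paper tabulates across methods and sample sizes.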
Gharamti, M. E.
2015-05-11
The ensemble Kalman filter (EnKF) is a popular method for state-parameters estimation of subsurface flow and transport models based on field measurements. The common filtering procedure is to directly update the state and parameters as one single vector, which is known as the Joint-EnKF. In this study, we follow the one-step-ahead smoothing formulation of the filtering problem, to derive a new joint-based EnKF which involves a smoothing step of the state between two successive analysis steps. The new state-parameters estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. This new algorithm bears strong resemblance with the Dual-EnKF, but unlike the latter which first propagates the state with the model then updates it with the new observation, the proposed scheme starts by an update step, followed by a model integration step. We exploit this new formulation of the joint filtering problem and propose an efficient model-integration-free iterative procedure on the update step of the parameters only for further improved performances. Numerical experiments are conducted with a two-dimensional synthetic subsurface transport model simulating the migration of a contaminant plume in a heterogeneous aquifer domain. Contaminant concentration data are assimilated to estimate both the contaminant state and the hydraulic conductivity field. Assimilation runs are performed under imperfect modeling conditions and various observational scenarios. Simulation results suggest that the proposed scheme efficiently recovers both the contaminant state and the aquifer conductivity, providing more accurate estimates than the standard Joint and Dual EnKFs in all tested scenarios. Iterating on the update step of the new scheme further enhances the proposed filter’s behavior. In terms of computational cost, the new Joint-EnKF is almost equivalent to that of the Dual-EnKF, but requires twice more model
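A toy Joint-EnKF of the kind contrasted here, where state and parameter are updated as one vector, can be sketched on a scalar linear model. Everything below (model, noise levels, ensemble size) is an illustrative assumption, far simpler than the paper's 2-D subsurface transport setting.

```python
import numpy as np

rng = np.random.default_rng(0)
Ne, n_steps = 200, 100
a_true, q_std, obs_std = 0.9, 0.1, 0.1

# Truth run of x_{k+1} = a * x_k + noise, with noisy observations of x only.
x, obs = 1.0, []
for _ in range(n_steps):
    x = a_true * x + rng.normal(0.0, q_std)
    obs.append(x + rng.normal(0.0, obs_std))

# Joint ensemble: each member is a vector [state x, parameter a].
ens = np.column_stack([rng.normal(1.0, 0.5, Ne), rng.normal(0.7, 0.2, Ne)])

for y in obs:
    # Forecast step: propagate each member's state with its own parameter.
    ens[:, 0] = ens[:, 1] * ens[:, 0] + rng.normal(0.0, q_std, Ne)
    # Analysis step: Kalman update of the joint vector from the observed state.
    Hx = ens[:, 0]
    dH = Hx - Hx.mean()
    P_xy = (ens - ens.mean(axis=0)).T @ dH / (Ne - 1)   # cross-covariances
    P_yy = dH @ dH / (Ne - 1) + obs_std**2
    K = P_xy / P_yy                                     # gain for [x, a]
    y_pert = y + rng.normal(0.0, obs_std, Ne)           # perturbed observations
    ens += np.outer(y_pert - Hx, K)

a_hat = ens[:, 1].mean()
```

The Dual-EnKF and the paper's smoothing-based scheme reorganize exactly these forecast and analysis steps; the cross-covariance between the parameter and the predicted observation is what drives the parameter update in all variants.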
Directory of Open Access Journals (Sweden)
F. C. PEIXOTO
1999-09-01
Full Text Available Fragmentation kinetics is employed to model a continuous reactive mixture of alkanes under catalytic cracking conditions. Standard moment analysis techniques are employed, and a dynamic system for the time evolution of moments of the mixture's dimensionless concentration distribution function (DCDF is found. The time behavior of the DCDF is recovered with successive estimations of scaled gamma distributions using the moments time data.
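Recovering a scaled gamma distribution from its first two moments, as in this moment-based reconstruction of the DCDF, is a closed-form step; a minimal sketch with assumed moment values:

```python
def gamma_from_moments(m1, m2):
    """Shape and scale of a gamma distribution matching raw moments m1, m2."""
    var = m2 - m1 ** 2        # central second moment
    theta = var / m1          # scale = variance / mean
    alpha = m1 / theta        # shape = mean / scale
    return alpha, theta

# A gamma with shape 3 and scale 2 has mean 6 and raw second moment 48.
alpha, theta = gamma_from_moments(6.0, 48.0)
```

Applying this at each time point to the simulated moment trajectories yields the successive gamma estimates of the concentration distribution described above.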
Li, Mao-Fen; Fan, Li; Liu, Hong-Bin; Guo, Peng-Tao; Wu, Wei
2013-01-01
Estimation of daily global solar radiation (Rs) from routinely measured temperature data has been widely developed and used in many different areas of the world. However, many such models are site-specific. It is assumed that a general model for estimating daily Rs using temperature variables and geographical parameters could be achieved within a climatic region. This paper attempts to develop a general model to estimate daily Rs using routinely measured temperature data (maximum (Tmax, °C) and minimum (Tmin, °C) temperatures) and site geographical parameters (latitude (La, °N), longitude (Ld, °E) and altitude (Alt, m)) for Guizhou and the Sichuan basin of southwest China, which is classified into the hot summer and cold winter climate zone. A comparative analysis was carried out through statistical indicators such as root mean squared error of percentage (RMSE%), modeling efficiency (ME), coefficient of residual mass (CRM) and mean bias error (MBE). Site-dependent daily Rs estimating models were calibrated and validated using long-term observed weather data. A general formula was then obtained from site geographical parameters and the better-fitting site-dependent models, with a mean RMSE% of 38.68%, mean MBE of 0.381 MJ m-2 d-1, mean CRM of 0.04 and mean ME value of 0.713.
Fatolazadeh, Farzam; Naeeni, Mehdi Raoofian; Voosoghi, Behzad; Rahimi, Armin
2017-07-01
In this study, an inversion method is used to constrain the fault parameters of the 2010 Chile Earthquake using gravimetric observations. The formulation consists of using monthly geopotential coefficients of GRACE observations in conjunction with the analytical model of Okubo (1992), which accounts for the gravity changes resulting from an earthquake. At first, it is necessary to eliminate the hydrological and oceanic effects from the GRACE monthly coefficients; then a spatio-spectral localization analysis, based on wavelet local analysis, should be used to filter the GRACE observations and to better refine the tectonic signal. Finally, the corrected GRACE observations are compared with the analytical model using a nonlinear inversion algorithm. Our results show discernible differences between the average slip computed using gravity observations and those predicted from other co-seismic models. In this study, fault parameters such as length, width, depth, dip, strike and slip are computed using the changes in gravity and gravity gradient components. By using the variations of gravity gradient components the above mentioned parameters are determined as 428 ± 6 km, 203 ± 5 km, 5 km, 10°, 13° and 8 ± 1.2 m respectively. Moreover, the values of the seismic moment and moment magnitude are 2.09 × 10²² N m and 8.88 Mw respectively, which show small differences with the values reported by the USGS (1.8 × 10²² N m and 8.83 Mw).
Parameter Estimation in Multivariate Gamma Distribution
V S Vaidyanathan; R Vani Lakshmi
2015-01-01
Multivariate gamma distribution finds abundant applications in stochastic modelling, hydrology and reliability. Parameter estimation in this distribution is a challenging one as it involves many parameters to be estimated simultaneously. In this paper, the form of multivariate gamma distribution proposed by Mathai and Moschopoulos [10] is considered. This form has nice properties in terms of marginal and conditional densities. A new method of estimation based on optimal search is proposed for...
Directory of Open Access Journals (Sweden)
Qihong Duan
2012-01-01
Full Text Available We study a multistate model for an aging piece of equipment under condition-based maintenance and apply an expectation maximization algorithm to obtain maximum likelihood estimates of the model parameters. Because of the monitoring discontinuity, we cannot observe any state's duration. The observation consists of the equipment's state at an inspection or right after a repair. Based on a proper construction of stochastic processes involved in the model, calculation of some probabilities and expectations becomes tractable. Using these probabilities and expectations, we can apply an expectation maximization algorithm to estimate the parameters in the model. We carry out simulation studies to test the accuracy and the efficiency of the algorithm.
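The expectation-maximization pattern used here can be illustrated on a much simpler latent-variable problem, a two-component exponential mixture, where both steps have closed forms. This is entirely illustrative and not the article's multistate maintenance model; all rates and weights are assumed.

```python
import numpy as np

def em_exp_mixture(x, lam_init=(0.5, 5.0), w_init=0.5, iters=300):
    """EM for a mixture w*Exp(lam0) + (1-w)*Exp(lam1) of exponential densities."""
    w, lam = w_init, np.asarray(lam_init, dtype=float)
    for _ in range(iters):
        # E-step: posterior responsibility of component 0 for each observation.
        p0 = w * lam[0] * np.exp(-lam[0] * x)
        p1 = (1.0 - w) * lam[1] * np.exp(-lam[1] * x)
        r = p0 / (p0 + p1)
        # M-step: closed-form updates of the weight and the two rates.
        w = r.mean()
        lam[0] = r.sum() / (r @ x)
        lam[1] = (1.0 - r).sum() / ((1.0 - r) @ x)
    return w, lam

rng = np.random.default_rng(0)
n = 4000
slow = rng.random(n) < 0.5          # true mixing weight 0.5
x = np.where(slow, rng.exponential(1.0, n), rng.exponential(0.1, n))  # rates 1, 10
w_hat, lam_hat = em_exp_mixture(x)
```

In the article the unobserved quantities are the state durations between inspections rather than component labels, but the alternation of posterior expectations and closed-form maximization is the same.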
Parameter estimation and accuracy matching strategies for 2-D reactor models
Nowak, U.; Grah, A.; Schreier, M.
2005-11-01
The mathematical modelling of a special modular catalytic reactor kit leads to a system of partial differential equations in two space dimensions. As customary, this model contains uncertain physical parameters, which may be adapted to fit experimental data. To solve this nonlinear least-squares problem we apply a damped Gauss-Newton method. A method of lines approach is used to evaluate the associated model equations. By an a priori spatial discretization, a large DAE system is derived and integrated with an adaptive, linearly implicit extrapolation method. For sensitivity evaluation we apply an internal numerical differentiation technique, which reuses linear algebra information from the model integration. In order not to interfere with the control of the Gauss-Newton iteration, these computations are usually done very accurately and, therefore, with substantial cost. To overcome this difficulty, we discuss several accuracy adaptation strategies, e.g., a master-slave mode. Finally, we present some numerical experiments.
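A damped Gauss-Newton iteration with a finite-difference Jacobian can be sketched on a small nonlinear least-squares problem. The step-halving damping rule and the exponential model below are illustrative assumptions, not the paper's reactor model or damping strategy.

```python
import numpy as np

def damped_gauss_newton(residual, p0, max_iter=100, tol=1e-12):
    """Minimize ||residual(p)||^2 with Gauss-Newton steps damped by halving."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = residual(p)
        # Forward-difference Jacobian of the residual vector.
        J = np.empty((r.size, p.size))
        h = 1e-7
        for j in range(p.size):
            pj = p.copy()
            pj[j] += h
            J[:, j] = (residual(pj) - r) / h
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        # Damping: halve the step until the sum of squares decreases.
        lam, f0 = 1.0, r @ r
        while lam > 1e-10:
            r_new = residual(p + lam * step)
            if r_new @ r_new < f0:
                break
            lam *= 0.5
        p = p + lam * step
        if np.linalg.norm(lam * step) < tol:
            break
    return p

# Fit y = a * exp(-b * t) to synthetic data with assumed true (a, b) = (2, 0.7).
t = np.linspace(0.0, 4.0, 50)
y = 2.0 * np.exp(-0.7 * t)
res = lambda p: p[0] * np.exp(-p[1] * t) - y
a_hat, b_hat = damped_gauss_newton(res, [1.0, 1.0])
```

In the paper's setting each residual evaluation requires a full DAE integration, which is exactly why the accuracy of the Jacobian (sensitivity) computations dominates the cost and motivates the adaptation strategies discussed.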
Energy Technology Data Exchange (ETDEWEB)
Gullberg, Grant T.; Huesman, Ronald H.; Reutter, Bryan W.; Qi,Jinyi; Ghosh Roy, Dilip N.
2004-01-01
In dynamic cardiac SPECT estimates of kinetic parameters of a one-compartment perfusion model are usually obtained in a two-step process: 1) first a MAP iterative algorithm, which properly models the Poisson statistics and the physics of the data acquisition, reconstructs a sequence of dynamic reconstructions, 2) then kinetic parameters are estimated from time activity curves generated from the dynamic reconstructions. This paper provides a method for calculating the covariance matrix of the kinetic parameters, which are determined using weighted least squares fitting that incorporates the estimated variance and covariance of the dynamic reconstructions. For each transaxial slice, sets of sequential tomographic projections are reconstructed into a sequence of transaxial reconstructions, using for each reconstruction in the time sequence an iterative MAP reconstruction to calculate the maximum a priori reconstructed estimate. Time-activity curves for a sum of activity in a blood region inside the left ventricle and a sum in a cardiac tissue region are generated. Also, curves for the variance of the two estimates of the sum and for the covariance between the two ROI estimates are generated as a function of time at convergence, using an expression obtained from the fixed-point solution of the statistical error of the reconstruction. A one-compartment model is fit to the tissue activity curves assuming a noisy blood input function to give weighted least squares estimates of blood volume fraction, wash-in and wash-out rate constants specifying the kinetics of 99mTc-teboroxime for the left ventricular myocardium. Numerical methods are used to calculate the second derivative of the chi-square criterion to obtain estimates of the covariance matrix for the weighted least squares parameter estimates. Even though the method requires one matrix inverse for each time interval of tomographic acquisition, efficient estimates of the tissue kinetic parameters in a dynamic cardiac SPECT study can be obtained with
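The covariance of weighted least squares estimates takes the standard form (JᵀWJ)⁻¹ when the weights are the inverse data variances; a generic sketch on a linear model (not the SPECT compartment model itself, and with assumed noise variances):

```python
import numpy as np

def wls_with_covariance(X, y, var_y):
    """Weighted least squares estimate and its covariance, W = diag(1/var_y)."""
    W = np.diag(1.0 / var_y)
    A = X.T @ W @ X
    beta = np.linalg.solve(A, X.T @ W @ y)
    cov_beta = np.linalg.inv(A)     # covariance matrix of the WLS estimates
    return beta, cov_beta

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
X = np.column_stack([np.ones_like(t), t])     # intercept + slope design
var_y = 0.1 + 0.05 * t                        # heteroscedastic noise variances
y = 1.0 + 0.5 * t + rng.normal(0.0, np.sqrt(var_y))
beta, cov = wls_with_covariance(X, y, var_y)
```

For a nonlinear compartment model, `X` is replaced by the Jacobian of the model at the fitted parameters, which is the role played by the numerically computed second derivative of the chi-square criterion in the abstract.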
Optimal Estimation of Phenological Crop Model Parameters for Rice (Oryza sativa)
Sharifi, H.; Hijmans, R. J.; Espe, M.; Hill, J. E.; Linquist, B.
2015-12-01
Crop phenology models are important components of crop growth models. In phenology models, generally only a few parameters are calibrated and default cardinal temperatures are used, which can lead to a temperature-dependent systematic error in phenology prediction. Our objective was to evaluate different optimization approaches for the Oryza2000 and CERES-Rice phenology sub-models to assess the importance of optimizing cardinal temperatures for model performance and systematic error. We used two approaches to simultaneously optimize all model parameters: the typical single-stage optimization (planting to heading) and a three-stage optimization (planting to panicle initiation (PI), PI to heading (HD), and HD to physiological maturity (MT)). Data for this study were collected over three years and six locations on seven California rice cultivars. A temperature-dependent systematic error was found for all cultivars and stages; however, it was generally small. Optimization produced only small changes in cardinal temperatures relative to the default values, and thus optimization of cardinal temperatures did not affect systematic error or model performance. Compared to single-stage optimization, three-stage optimization had little effect on determining time to PI or HD but significantly improved the precision in determining the time from HD to MT: the RMSE was reduced from an average of 6 to 3.3 in Oryza2000 and from 6.6 to 3.8 in CERES-Rice. With regard to systematic error, we found a trade-off between RMSE and systematic error when the optimization objective was set to minimize either one. Therefore, it is important to find the limits within which the trade-offs between RMSE and systematic error are acceptable, especially in climate change studies, where this can prevent erroneous conclusions.
Directory of Open Access Journals (Sweden)
Zeng Gengsheng L
2012-09-01
Background: Compared with static imaging, dynamic emission computed tomographic imaging with compartment modeling can quantify in vivo physiologic processes, providing useful information about molecular disease processes. Dynamic imaging involves estimation of kinetic rate parameters. For multi-compartment models, kinetic parameter estimation can be computationally demanding and problematic with local minima. Methods: This paper offers a new perspective on the compartment model fitting problem, in which Fourier linear system theory is applied to derive closed-form formulas for estimating kinetic parameters for the two-compartment model. The proposed Fourier domain estimation method provides a unique solution and offers a very different noise response compared to traditional non-linear chi-squared minimization techniques. Results: The unique feature of the proposed Fourier domain method is that only low frequency components are used for kinetic parameter estimation: the DC (i.e., zero frequency) component in the data is treated as the most important information, and high frequency components that tend to be corrupted by statistical noise are discarded. Computer simulations show that the proposed method is robust without having to specify the initial condition. The resultant solution can be fine-tuned using the traditional iterative method. Conclusions: The proposed Fourier-domain curve-fitting method has closed-form formulas, does not require an initial condition, and minimizes a quadratic objective function. The noise is easier to control, simply by discarding the high frequency components and emphasizing the DC component.
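The closed-form, low-frequency idea can be illustrated on a one-compartment (single-tissue) model, where the tissue curve is the input convolved with K1*exp(-k2*t), so the ratio of Fourier transforms is K1/(k2 + i*omega). Using only the DC bin and the first frequency bin then gives closed-form estimates. This is a sketch with synthetic noise-free curves and hypothetical constants; the paper's two-compartment formulas are not reproduced here:

```python
import numpy as np

dt = 0.05
t = np.arange(0, 80, dt)
K1, k2 = 0.5, 0.2                         # hypothetical rate constants
cin = t * np.exp(-t / 5.0)                # hypothetical input function
ct = np.convolve(cin, K1 * np.exp(-k2 * t))[: t.size] * dt  # tissue response

F_in, F_t = np.fft.fft(cin), np.fft.fft(ct)
R0 = F_t[0] / F_in[0]                     # DC component: equals K1/k2
w1 = 2 * np.pi * np.fft.fftfreq(t.size, dt)[1]
R1 = F_t[1] / F_in[1]                     # lowest non-zero frequency
# closed form: R(w) = K1/(k2 + i*w)  =>  k2 = Re[i*w1*R1/(R0 - R1)]
k2_est = float(np.real(1j * w1 * R1 / (R0 - R1)))
K1_est = float(np.real(R0)) * k2_est
```

Only the two lowest-frequency bins are touched; all higher bins, where statistical noise would dominate, are simply never used.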
Directory of Open Access Journals (Sweden)
David A Ball
The use of microfluidics in live cell imaging allows the acquisition of dense time-series from individual cells that can be perturbed through computer-controlled changes of growth medium. Systems and synthetic biologists frequently perform gene expression studies that require changes in growth conditions to characterize the stability of switches, the transfer function of a genetic device, or the oscillations of gene networks. It is rarely possible to know a priori at what times the various changes should be made, and the success of the experiment is unknown until all of the image processing is completed well after the completion of the experiment. This results in wasted time and resources, due to the need to repeat the experiment to fine-tune the imaging parameters. To overcome this limitation, we have developed an adaptive imaging platform called GenoSIGHT that processes images as they are recorded, and uses the resulting data to make real-time adjustments to experimental conditions. We have validated this closed-loop control of the experiment using galactose-inducible expression of the yellow fluorescent protein Venus in Saccharomyces cerevisiae. We show that adaptive imaging improves the reproducibility of gene expression data resulting in more accurate estimates of gene network parameters while increasing productivity ten-fold.
Ball, David A; Lux, Matthew W; Adames, Neil R; Peccoud, Jean
2014-01-01
Cosmological Parameter Estimation from SN Ia data: a Model-Independent Approach
Benitez-Herrera, S.; Maturi, M.; Hillebrandt, W.; Bartelmann, M.; Röpke, F.
2013-01-01
We perform a model-independent reconstruction of the cosmic expansion rate based on type Ia supernova data. Using the Union 2.1 data set, we show that the Hubble parameter behaviour allowed by the data, without making any hypothesis about the cosmological model or the underlying gravity theory, is consistent with a flat LCDM universe having H_0 = 70.43 ± 0.33 and Omega_m = 0.297 ± 0.020, weakly dependent on the choice of initial scatter matrix. This is in closer agreement with the recently released Planck results (H_0 = 67.3 ± 1.2, Omega_m = 0.314 ± 0.020) than other standard analyses based on type Ia supernova data. We argue this might be an indication that, in order to tackle subtle deviations from the standard cosmological model present in type Ia supernova data, it is mandatory to go beyond parametrized approaches.
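As a toy counterpart to the flat-LCDM comparison above, the following sketch fits H_0 and Omega_m to synthetic distance moduli; the redshift range, scatter, and fiducial values are made up for illustration:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

C = 299792.458  # speed of light, km/s

def mu_model(z, H0, Om):
    # distance modulus in a flat LCDM universe
    E = lambda zp: 1.0 / np.sqrt(Om * (1 + zp) ** 3 + (1 - Om))
    dc = np.array([quad(E, 0.0, zi)[0] for zi in np.atleast_1d(z)])
    dl = (C / H0) * (1 + np.atleast_1d(z)) * dc   # luminosity distance, Mpc
    return 5.0 * np.log10(dl) + 25.0

z = np.linspace(0.02, 1.2, 60)                    # hypothetical SN redshifts
rng = np.random.default_rng(1)
mu_obs = mu_model(z, 70.0, 0.30) + rng.normal(0, 0.15, z.size)  # 0.15 mag scatter

popt, pcov = curve_fit(mu_model, z, mu_obs, p0=[65.0, 0.25],
                       sigma=np.full(z.size, 0.15),
                       bounds=([50.0, 0.05], [90.0, 0.95]))
```

The point of the abstract is precisely that such a parametrized fit can hide subtle deviations; the model-independent reconstruction avoids assuming `mu_model` in the first place.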
Directory of Open Access Journals (Sweden)
Huan Yang
2016-12-01
Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address the question of whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, we evaluate the Bayesian Information Criterion. We find that the computational model strikes a better balance than a conventional logistic model. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals both structural and practical non-identifiability. Our model-based approach, integrating psychophysical measurements, can be useful for a reliable assessment of the states of the nociceptive system.
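The maximum-likelihood fit and BIC evaluation can be sketched with a conventional logistic (psychometric) detection model, the baseline the abstract compares against; stimulus amplitudes and parameter values are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def nll(params, x, y):
    # negative log-likelihood of yes/no responses under a logistic model
    a, b = params                       # threshold and slope
    p = 1.0 / (1.0 + np.exp(-(x - a) / b))
    p = np.clip(p, 1e-9, 1 - 1e-9)      # guard the log
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 4.0, 400)          # hypothetical stimulus amplitudes
p_true = 1.0 / (1.0 + np.exp(-(x - 2.0) / 0.4))
y = (rng.random(400) < p_true).astype(float)   # simulated yes/no detections

res = minimize(nll, x0=[1.0, 1.0], args=(x, y), method="Nelder-Mead")
n_params, n_obs = 2, x.size
bic = n_params * np.log(n_obs) + 2.0 * res.fun  # lower BIC = better balance
```

Fitting the six-parameter neurophysiological model works the same way, only with a different likelihood; comparing the two BIC values is then a one-line check.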
Fan, M.
2015-03-29
Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. Focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates, to a level on par with the best solutions obtained from the population-based methods, while maintaining high computational speed. This suggests that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the size of the parameter search space vastly large. © The Author 2015. Published by Oxford University Press.
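The hybrid strategy recommended above, a population-based global search followed by local refinement, can be sketched on a toy one-gene expression model dx/dt = a - b*x (made-up parameter values and bounds):

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def x_model(t, a, b, x0=0.1):
    # analytic solution of dx/dt = a - b*x (toy gene expression model)
    return a / b + (x0 - a / b) * np.exp(-b * t)

t = np.linspace(0.0, 10.0, 40)
rng = np.random.default_rng(2)
y = x_model(t, 2.0, 0.5) + rng.normal(0.0, 0.05, t.size)  # noisy "mRNA" data

def sse(p):
    return np.sum((x_model(t, p[0], p[1]) - y) ** 2)

# stage 1: population-based global search (coarse, no polishing)
coarse = differential_evolution(sse, [(0.01, 10.0), (0.01, 5.0)],
                                seed=3, maxiter=50, polish=False)
# stage 2: fast local refinement of the best population member
refined = minimize(sse, coarse.x, method="Nelder-Mead")
```

The global stage only needs to land in the right basin; the cheap local stage then delivers the precision that would otherwise require many more population-based iterations.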
Quach, Minh; Brunel, Nicolas; d'Alché-Buc, Florence
2007-12-01
Statistical inference of biological networks such as gene regulatory networks, signaling pathways and metabolic networks can contribute to building a picture of the complex interactions that take place in the cell. However, biological systems, being dynamical, non-linear and generally only partially observed, may be difficult to estimate even if the structure of interactions is given. Using the same approach as Sitz et al. proposed in another context, we derive non-linear state-space models from ODEs describing biological networks. In this framework, we apply Unscented Kalman Filtering (UKF) to the estimation of both parameters and hidden variables of non-linear state-space models. We instantiate the method on a transcriptional regulatory model based on Hill kinetics and a signaling pathway model based on mass action kinetics. We successfully use both synthetic and experimental data to test our approach. This approach covers a large set of biological network models and gives rise to simple and fast estimation algorithms. Moreover, the Bayesian tool used here directly provides uncertainty estimates on parameters and hidden states. Let us also emphasize that it can be coupled with structure inference methods used in Graphical Probabilistic Models. Matlab code available on demand.
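The core trick, folding unknown parameters into the state vector so a Kalman-type filter estimates states and parameters jointly, can be sketched compactly. For brevity this uses an extended Kalman filter on a one-state decay model rather than the UKF and Hill-kinetics models of the paper; all values are illustrative:

```python
import numpy as np

# simulated measurements of x' = -k*x with unknown decay rate k
dt, k_true = 0.1, 0.5
rng = np.random.default_rng(0)
xs = 5.0 * np.exp(-k_true * dt * np.arange(200))
ys = xs + rng.normal(0.0, 0.05, xs.size)

z = np.array([ys[0], 0.2])            # augmented state [x, k], poor initial k
P = np.diag([0.1, 0.25])
Q = np.diag([1e-6, 1e-6])             # small process noise
Rm = 0.05 ** 2                        # measurement noise variance
H = np.array([[1.0, 0.0]])            # we only observe x

for y in ys[1:]:
    x, k = z
    e = np.exp(-k * dt)
    z = np.array([x * e, k])          # predict: x decays, k stays constant
    F = np.array([[e, -x * dt * e],   # Jacobian of the transition
                  [0.0, 1.0]])
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + Rm              # innovation variance
    Kg = P @ H.T / S                  # Kalman gain
    z = z + (Kg * (y - z[0])).ravel()
    P = (np.eye(2) - Kg @ H) @ P

k_est = z[1]
```

The UKF replaces the Jacobian linearization with sigma-point propagation, which is what makes it attractive for strongly non-linear Hill or mass-action kinetics; the augmentation itself is identical.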
State and parameter estimation in bio processes
Energy Technology Data Exchange (ETDEWEB)
Maher, M.; Roux, G.; Dahhou, B. [Centre National de la Recherche Scientifique (CNRS), 31 - Toulouse (France)]|[Institut National des Sciences Appliquees (INSA), 31 - Toulouse (France)
1994-12-31
A major difficulty in monitoring and control of bio-processes is the lack of reliable and simple sensors for following the evolution of the main state variables and parameters such as biomass, substrate, product, growth rate, etc. In this article, an adaptive estimation algorithm is proposed to recover the states and parameters in bio-processes. This estimator utilizes the physical process model and the reference model approach. Experiments concerning the estimation of biomass and product concentrations and specific growth rate during batch, fed-batch and continuous fermentation processes are presented. The results show the performance of this adaptive estimation approach. (authors) 12 refs.
A practical approach to parameter estimation applied to model predicting heart rate regulation
DEFF Research Database (Denmark)
Olufsen, Mette; Ottesen, Johnny T.
2013-01-01
baroreceptor feedback regulation of heart rate during head-up tilt. The three methods include: structured analysis of the correlation matrix, analysis via singular value decomposition followed by QR factorization, and identification of the subspace closest to the one spanned by eigenvectors of the model...... Hessian. Results showed that all three methods facilitate identification of a parameter subset. The “best” subset was obtained using the structured correlation method, though this method was also the most computationally intensive. Subsets obtained using the other two methods were easier to compute...
Mullah, Muhammad Abu Shadeque; Benedetti, Andrea
2016-11-01
Besides being mainly used for analyzing clustered or longitudinal data, generalized linear mixed models can also be used for smoothing via restricting changes in the fit at the knots in regression splines. The resulting models are usually called semiparametric mixed models (SPMMs). We investigate the effect of smoothing using SPMMs on the correlation and variance parameter estimates for serially correlated longitudinal normal, Poisson and binary data. Through simulations, we compare the performance of SPMMs to other simpler methods for estimating the nonlinear association, such as fractional polynomials and a parametric nonlinear function. Simulation results suggest that, in general, the SPMMs recover the true curves very well and yield reasonable estimates of the correlation and variance parameters. However, for binary outcomes, SPMMs produce biased estimates of the variance parameters for highly serially correlated data. We apply these methods to a dataset investigating the association between CD4 cell count and time since seroconversion for HIV infected men enrolled in the Multicenter AIDS Cohort Study.
Word, Daniel P; Cummings, Derek A T; Burke, Donald S; Iamsirithaworn, Sopon; Laird, Carl D
2012-08-07
Mathematical models can enhance our understanding of childhood infectious disease dynamics, but these models depend on appropriate parameter values that are often unknown and must be estimated from disease case data. In this paper, we develop a framework for efficient estimation of childhood infectious disease models with seasonal transmission parameters using continuous differential equations containing model and measurement noise. The problem is formulated using the simultaneous approach where all state variables are discretized, and the discretized differential equations are included as constraints, giving a large-scale algebraic nonlinear programming problem that is solved using a nonlinear primal-dual interior-point solver. The technique is demonstrated using measles case data from three different locations having different school holiday schedules, and our estimates of the seasonality of the transmission parameter show strong correlation to school term holidays. Our approach gives dramatic efficiency gains, showing a 40-400-fold reduction in solution time over other published methods. While our approach has an increased susceptibility to bias over techniques that integrate over the entire unknown state-space, a detailed simulation study shows no evidence of bias. Furthermore, the computational efficiency of our approach allows for investigation of a large model space compared with more computationally intensive approaches.
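The simultaneous approach described above, discretizing the states and imposing the differential equations as constraints, can be sketched at small scale by treating collocation defects as heavily weighted residuals in a least-squares problem. A toy logistic-growth model stands in for the measles model, and scipy's `least_squares` replaces the interior-point NLP solver:

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

r_true, K_true, dt, n = 0.6, 10.0, 0.5, 40
t = np.arange(n) * dt
x_true = odeint(lambda x, t: r_true * x * (1 - x / K_true), 0.5, t).ravel()
rng = np.random.default_rng(4)
y = x_true + rng.normal(0.0, 0.2, n)      # noisy case-count-like data

def f(x, r, K):
    return r * x * (1 - x / K)

def residuals(v):
    xs, r, K = v[:n], v[n], v[n + 1]      # all states are decision variables
    # trapezoidal collocation defects act as (heavily weighted) constraints
    defect = xs[1:] - xs[:-1] - 0.5 * dt * (f(xs[1:], r, K) + f(xs[:-1], r, K))
    return np.concatenate([defect / 1e-3, (xs - y) / 0.2])

v0 = np.concatenate([np.maximum(y, 0.1), [0.3, 8.0]])
sol = least_squares(residuals, v0)
r_est, K_est = sol.x[n], sol.x[n + 1]
```

Because states and parameters are solved for together, no repeated forward integration is needed inside the optimizer, which is the source of the efficiency gains the abstract reports.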
Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil
2016-06-01
An efficient approach to estimate model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, has been presented. We show the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt to apply DE to the parameter estimation of residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (zo and xo, respectively) and the shape factors (q and η). The error energy maps generated for some parameter pairs successfully reveal the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies have been evaluated with success via DE/best/1/bin, a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies have been considered, and the results obtained show the efficiency of the algorithm. Using the strategy applied in the synthetic examples, field anomalies observed for various mineral explorations, such as a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India) and a base metal sulphide deposit (Quebec, Canada), have been considered to estimate the model parameters of the ore bodies. The applications show that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm has also been investigated by the Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule. Based on the resulting histogram reconstructions of both synthetic and field data examples, the algorithm has provided reliable parameter estimations being within the sampling limits of
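A minimal sketch of DE-based inversion for an isolated source: a commonly used residual-anomaly forward model g(x) = A*z0/((x - x0)^2 + z0^2)^q (q = 1.5 for a sphere-like body) is fit with scipy's `differential_evolution` using the best/1/bin strategy. The synthetic anomaly, noise level, and search bounds are hypothetical:

```python
import numpy as np
from scipy.optimize import differential_evolution

def anomaly(x, A, x0, z0, q):
    # simple isolated-source forward model; q is the shape factor
    return A * z0 / ((x - x0) ** 2 + z0 ** 2) ** q

x = np.linspace(-50.0, 50.0, 101)          # profile coordinates
true_p = (500.0, 5.0, 10.0, 1.5)           # A, x0, z0, q (sphere-like)
rng = np.random.default_rng(5)
g_obs = anomaly(x, *true_p) + rng.normal(0.0, 0.02, x.size)

def misfit(p):
    return np.sum((anomaly(x, *p) - g_obs) ** 2)   # "error energy"

bounds = [(1.0, 5000.0), (-20.0, 20.0), (1.0, 30.0), (0.5, 2.5)]
res = differential_evolution(misfit, bounds, seed=6, strategy="best1bin")
```

Plotting `misfit` over pairs such as (z0, q) reproduces the kind of error-energy maps the abstract uses to expose parameter trade-offs.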
Zhang, Yonggen; Schaap, Marcel G.
2017-04-01
Pedotransfer functions (PTFs) have been widely used to predict soil hydraulic parameters in place of expensive laboratory or field measurements. Rosetta (Schaap et al., 2001, denoted here as Rosetta1) is one of many PTFs and is based on artificial neural network (ANN) analysis coupled with the bootstrap re-sampling method, which allows the estimation of van Genuchten water retention parameters (van Genuchten, 1980, abbreviated here as VG), saturated hydraulic conductivity (Ks), and their uncertainties. In this study, we present an improved set of hierarchical pedotransfer functions (Rosetta3) that unify the water retention and Ks submodels into one. Parameter uncertainty of the fit of the VG curve to the original retention data is used in the ANN calibration procedure to reduce bias of parameters predicted by the new PTF. One thousand bootstrap replicas were used to calibrate the new models, compared to 60 or 100 in Rosetta1, thus allowing the uni-variate and bi-variate probability distributions of predicted parameters to be quantified in greater detail. We determined the optimal weights for VG parameters and Ks, the optimal number of hidden nodes in the ANN, and the number of bootstrap replicas required for statistically stable estimates. Results show that matric potential-dependent bias was reduced significantly while the root mean square error (RMSE) for water content was reduced modestly; RMSE for Ks increased by 0.9% (H3w) to 3.3% (H5w) in the new models on the log scale of Ks compared with the Rosetta1 model. It was found that estimated distributions of parameters were mildly non-Gaussian and could instead be described rather well with heavy-tailed α-stable distributions. On the other hand, arithmetic means had only a small estimation bias for most textures when compared with the mean-like 'shift' parameter of the α-stable distributions. Arithmetic means and (co-)variances are therefore still recommended as summary statistics of the estimated distributions. However, it
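The bootstrap idea behind Rosetta, refitting on resampled data to obtain parameter distributions, can be sketched for a single VG retention curve (synthetic data and hypothetical parameter values; Rosetta's ANN is not reproduced):

```python
import numpy as np
from scipy.optimize import curve_fit

def vg(h, tr, ts, alpha, n):
    # van Genuchten (1980) water retention curve theta(h)
    m = 1.0 - 1.0 / n
    return tr + (ts - tr) / (1.0 + (alpha * h) ** n) ** m

h = np.logspace(0.0, 4.2, 25)             # suction heads (cm)
true_p = (0.05, 0.40, 0.02, 1.6)          # hypothetical loam-like values
rng = np.random.default_rng(7)
theta = vg(h, *true_p) + rng.normal(0.0, 0.005, h.size)

bounds = ([0.0, 0.2, 1e-4, 1.05], [0.2, 0.6, 1.0, 5.0])
p0 = (0.03, 0.38, 0.05, 1.4)
est = []
for _ in range(200):                       # bootstrap replicas
    idx = rng.integers(0, h.size, h.size)  # resample (h, theta) pairs
    try:
        p, _ = curve_fit(vg, h[idx], theta[idx], p0=p0,
                         bounds=bounds, maxfev=5000)
        est.append(p)
    except RuntimeError:
        pass                               # skip replicas that fail to converge
est = np.array(est)
mean, std = est.mean(axis=0), est.std(axis=0)  # parameter distributions
```

Histograms of the columns of `est` are the uni-variate distributions the abstract discusses; scatter plots of column pairs give the bi-variate ones.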
2008-01-01
Calculating an accurate nutation time constant (NTC), or nutation rate of growth, for a spinning upper stage is important for ensuring mission success. Spacecraft nutation, or wobble, is caused by energy dissipation anywhere in the system. Propellant slosh in the spacecraft fuel tanks is the primary source for this dissipation and, if it is in a state of resonance, the NTC can become short enough to violate mission constraints. The Spinning Slosh Test Rig (SSTR) is a forced-motion spin table where fluid dynamic effects in full-scale fuel tanks can be tested in order to obtain key parameters used to calculate the NTC. We accomplish this by independently varying nutation frequency versus the spin rate and measuring force and torque responses on the tank. This method was used to predict parameters for the Genesis, Contour, and Stereo missions, whose tanks were mounted outboard from the spin axis. These parameters are incorporated into a mathematical model that uses mechanical analogs, such as pendulums and rotors, to simulate the force and torque resonances associated with fluid slosh.
Application of spreadsheet to estimate infiltration parameters
Directory of Open Access Journals (Sweden)
Mohammad Zakwan
2016-09-01
Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for the estimation of effective rainfall, groundwater recharge, and the design of irrigation systems. Numerous infiltration models are in use for the estimation of infiltration rates. The conventional graphical approach for estimating infiltration parameters often fails to estimate them precisely. The generalised reduced gradient (GRG) solver is reported to be a powerful tool for estimating parameters of nonlinear equations and it has, therefore, been implemented to estimate the infiltration parameters in the present paper. Field data of infiltration rate available in the literature for sandy loam soils of Umuahia, Nigeria were used to evaluate the performance of the GRG solver. A comparative study of the graphical method and the GRG solver shows that the performance of the GRG solver is better than that of the conventional graphical method for estimation of infiltration rates. Further, the performance of the Kostiakov model was found to be better than the Horton and Philip models in most of the cases, based on both approaches of parameter estimation.
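As an alternative to a spreadsheet GRG solver, the same nonlinear fits can be done with scipy. Below, Kostiakov and Horton infiltration-rate models are fit to illustrative (made-up) field-style data and compared by RMSE:

```python
import numpy as np
from scipy.optimize import curve_fit

# illustrative infiltration-rate data (made up for this sketch)
t = np.array([5.0, 10, 20, 30, 45, 60, 90, 120])                     # time, min
f_obs = np.array([10.58, 8.42, 6.70, 5.86, 5.13, 4.66, 4.08, 3.71])  # cm/h

def kostiakov(t, a, b):
    return a * t ** (-b)

def horton(t, fc, f0, k):
    return fc + (f0 - fc) * np.exp(-k * t)

pk, _ = curve_fit(kostiakov, t, f_obs, p0=(20.0, 0.4))
ph, _ = curve_fit(horton, t, f_obs, p0=(3.0, 12.0, 0.05), maxfev=10000)

rmse = lambda pred: float(np.sqrt(np.mean((pred - f_obs) ** 2)))
rmse_k = rmse(kostiakov(t, *pk))
rmse_h = rmse(horton(t, *ph))
```

For these (power-law-shaped) data the Kostiakov fit gives the lower RMSE, mirroring the comparison reported in the abstract; with genuinely exponential-decay data the ranking would flip.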
Gayler, Sebastian; Salima-Sultana, Daisy; Selle, Benny; Ingwersen, Joachim; Wizemann, Hans-Dieter; Högy, Petra; Streck, Thilo
2016-04-01
Soil water extraction by roots affects the dynamics and distribution of soil moisture and controls transpiration, which influences soil-vegetation-atmosphere feedback processes. Consequently, root water uptake requires close attention when predicting water fluxes across the land surface, e.g., in agricultural crop models or in land surface schemes of weather and climate models. The key parameters for a successful simultaneous simulation of soil moisture dynamics and evapotranspiration in Richards equation-based models are the soil hydraulic parameters, which describe the shapes of the soil water retention curve and the soil hydraulic conductivity curve. As measurements of these parameters are expensive and their estimation from basic soil data via pedotransfer functions is rather inaccurate, the values of the soil hydraulic parameters are frequently inversely estimated by fitting the model to measured time series of soil water content and evapotranspiration. It is common to simulate root water uptake and transpiration by simple stress functions, which describe from which soil layer water is absorbed by roots and predict when total crop transpiration is decreased in case of soil water limitations. As for most of the biogeophysical processes simulated in crop and land surface models, there exist several alternative functional relationships for simulating root water uptake and there is no clear reason for preferring one process representation over another. The error associated with alternative representations of root water uptake, however, contributes to structural model uncertainty and the choice of the root water uptake model may have a significant impact on the values of the soil hydraulic parameters estimated inversely. In this study, we use the agroecosystem model system Expert-N to simulate soil moisture dynamics and evapotranspiration at three agricultural field sites located in two contrasting regions in Southwest Germany (Kraichgau, Swabian Alb). The Richards
Parameter and state estimation with a time-dependent adjoint marine ice sheet model
Directory of Open Access Journals (Sweden)
D. N. Goldberg
2013-06-01
To date, assimilation of observations into large-scale ice models has consisted predominantly of time-independent inversions of surface velocities for basal traction, bed elevation, or ice stiffness, and has relied primarily on analytically derived adjoints of diagnostic ice velocity models. To overcome limitations of such "snapshot" inversions, i.e. their inability to assimilate time-dependent data or to produce initial states with minimal artificial drift suitable for time-dependent simulations, we have developed an adjoint of a time-dependent parallel glaciological flow model. The model implements a hybrid shallow shelf-shallow ice stress balance, involves a prognostic equation for ice thickness evolution, and can represent the floating, fast-sliding, and frozen bed regimes of a marine ice sheet. The adjoint is generated by a combination of analytic methods and the use of algorithmic differentiation (AD) software. Several experiments are carried out with idealized geometries and synthetic observations, including inversion of time-dependent surface elevations for past thicknesses, and simultaneous retrieval of basal traction and topography from surface data. Flexible generation of the adjoint for a range of independent uncertain variables is exemplified through sensitivity calculations of grounded ice volume to changes in basal melting of floating ice and basal sliding of grounded ice. The results are encouraging and suggest the feasibility, using real observations, of improved ice sheet state estimation and comprehensive transient sensitivity assessments.
Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui
2015-08-01
To improve the suitability of lithium-ion battery models under varying scenarios, such as fluctuating temperature and SoC variation, a dynamic model with parameters updated in real time should be developed. In this paper, an incremental analysis-based auto regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Then, its numerical stability, modeling error, and parametric sensitivity are analyzed at different sampling rates (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, pseudo random binary sequence (PRBS) and urban dynamic driving sequence (UDDS) profiles are performed to verify the real-time performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in our experiments. The experimental and simulation results indicate that the proposed I-ARX model can achieve high accuracy and suitability for parameter identification without using the open circuit voltage.
Liepe, Juliane; Kirk, Paul; Filippi, Sarah; Toni, Tina; Barnes, Chris P.; Stumpf, Michael P.H.
2016-01-01
As modeling becomes a more widespread practice in the life- and biomedical sciences, we require reliable tools to calibrate models against ever more complex and detailed data. Here we present an approximate Bayesian computation framework and software environment, ABC-SysBio, which enables parameter estimation and model selection in the Bayesian formalism using Sequential Monte-Carlo approaches. We outline the underlying rationale, discuss the computational and practical issues, and provide detailed guidance as to how the important tasks of parameter inference and model selection can be carried out in practice. Unlike other available packages, ABC-SysBio is highly suited for investigating in particular the challenging problem of fitting stochastic models to data. Although computationally expensive, the additional insights gained in the Bayesian formalism more than make up for this cost, especially in complex problems. PMID:24457334
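The approximate Bayesian computation machinery can be illustrated in its simplest (rejection) form rather than ABC-SysBio's Sequential Monte Carlo: simulate from parameters drawn from the prior and keep those whose summary statistic lands near the observed one (toy Poisson example):

```python
import numpy as np

rng = np.random.default_rng(9)
data = rng.poisson(4.0, 50)               # "observed" counts, true rate 4.0
s_obs = data.mean()                        # summary statistic

accepted = []
while len(accepted) < 500:
    lam = rng.uniform(0.0, 10.0)           # draw from a flat prior
    sim = rng.poisson(lam, 50)             # simulate a data set
    if abs(sim.mean() - s_obs) < 0.2:      # keep if within tolerance epsilon
        accepted.append(lam)
post = np.array(accepted)                  # approximate posterior sample
```

SMC schemes like the one in ABC-SysBio shrink epsilon over a sequence of populations instead of fixing it, which keeps acceptance rates workable for expensive stochastic simulators.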
Parameter estimation and inverse problems
Aster, Richard C; Thurber, Clifford H
2005-01-01
Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web to facilitate use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...
Kim, Jang-Gyeong; Kwon, Hyun-Han; Kim, Dongkyun
2017-01-01
Poisson cluster stochastic rainfall generators (e.g., modified Bartlett-Lewis rectangular pulse, MBLRP) have been widely applied to generate synthetic sub-daily rainfall sequences. The MBLRP model reproduces the underlying distribution of the rainfall generating process. The existing optimization techniques are typically based on individual parameter estimates that treat each parameter as independent. However, parameter estimates sometimes compensate for the estimates of other parameters, which can cause high variability in the results if the covariance structure is not formally considered. Moreover, uncertainty associated with model parameters in the MBLRP rainfall generator is not usually addressed properly. Here, we develop a hierarchical Bayesian model (HBM)-based MBLRP model to jointly estimate parameters across weather stations and explicitly consider the covariance and uncertainty through a Bayesian framework. The model is tested using weather stations in South Korea. The HBM-based MBLRP model improves the identification of parameters with better reproduction of rainfall statistics at various temporal scales. Additionally, the spatial variability of the parameters across weather stations is substantially reduced compared to that of other methods.
Estimation of semolina dough rheological parameters by inversion of a finite elements model
Directory of Open Access Journals (Sweden)
Angelo Fabbri
2015-10-01
Full Text Available The description of the rheological properties of food materials plays an important role in food engineering. In particular, optimisation of the pasta manufacturing process (extrusion) requires knowledge of the rheological properties of semolina dough. Unfortunately, characterisation of non-Newtonian fluids, such as food doughs, requires a notable time effort, especially in terms of the number of tests to be carried out. The present work proposes an alternative method, based on the combination of laboratory measurements, made with a simplified tool, with the inversion of a finite element numerical model. To determine the rheological parameters, an objective function, defined as the distance between simulation and experimental data, was considered, and the well-known Levenberg-Marquardt optimisation algorithm was used. In order to verify the feasibility of the method, the rheological characterisation of the dough was also carried out by a traditional procedure. Results showed that the differences between measurements of the rheological parameters of the semolina dough made with the traditional procedure and with the inverse method are very small (maximum percentage error equal to 3.6%). This agreement supports the coherence of the inverse method which, in general, may be used to characterise many non-Newtonian materials.
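The inversion loop described above (objective = distance between simulation and experiment, minimised with Levenberg-Marquardt) can be sketched with a minimal hand-written LM iteration. The forward model here is a hypothetical power-law (Ostwald-de Waele) fluid with illustrative parameter values, standing in for the paper's finite-element simulation.

```python
import numpy as np

# Hypothetical forward model: power-law fluid, stress = K * rate**n.
rate = np.logspace(-1, 2, 20)                 # shear rates [1/s]
K_true, n_true = 8.0e3, 0.35                  # illustrative values
rng = np.random.default_rng(1)
measured = K_true * rate**n_true * (1 + 0.02 * rng.normal(size=rate.size))

def residuals(p):
    K, n = p
    return K * rate**n - measured             # simulation minus experiment

def jacobian(p):
    K, n = p
    return np.column_stack([rate**n, K * rate**n * np.log(rate)])

def lm_fit(x0, n_iter=60, lam=1e-3):
    # Minimal Levenberg-Marquardt: damped Gauss-Newton with an
    # adaptive damping factor lam.
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        r, J = residuals(x), jacobian(x)
        A, g = J.T @ J, J.T @ r
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
        if np.sum(residuals(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.5      # accept: toward Gauss-Newton
        else:
            lam *= 2.0                        # reject: damp more heavily
    return x

K_est, n_est = lm_fit(x0=(1.0e3, 0.5))
```

In practice each residual evaluation would call the finite-element model rather than an analytic formula, which is why keeping the number of objective evaluations low matters.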
N.G. HOSSEIN-ZADEH
2008-01-01
Data on stillbirth from the Animal Breeding Center of Iran collected from January 1990 to December 2007 and comprising 668810 Holstein calving events from 2506 herds were analyzed. Linear and threshold animal and sire models were used to estimate genetic parameters and genetic trends for stillbirth in the first, second, and third parities. Mean incidence of stillbirth decreased from first to third parities: 23.7%, 22.1%, and 21.8%, respectively. Phenotypic rates of stillbirth decreased from 1...
Zhang, Hongjuan; Hendricks-Franssen, Harrie-Jan; Han, Xujun; Vrugt, Jasper A.; Vereecken, Harry
2016-04-01
Land surface models (LSMs) resolve the water and energy balance with different parameters and state variables. Many of the parameters of these models cannot be measured directly in the field, and require calibration against flux and soil moisture data. Two LSMs are used in our work: Variable Infiltration Capacity Hydrologic Model (VIC) and the Community Land Model (CLM). Temporal variations in soil moisture content at 5, 20 and 50 cm depth in the Rollesbroich experimental watershed in Germany are simulated in both LSMs. Data assimilation (DA) provides a good way to jointly estimate soil moisture content and soil properties of the resolved soil domain. Four DA methods combined with the two LSMs are used in our work: the Ensemble Kalman Filter (EnKF) using state augmentation or dual estimation, the Residual Resampling Particle Filter (RRPF) and Markov chain Monte Carlo Particle Filter (MCMCPF). These four DA methods are tuned and calibrated for a five month period, and subsequently evaluated for another five month period. Performances of the two LSMs and the four DA methods are compared. Our results show that all DA methods improve the estimation of soil moisture content of the VIC and CLM models, especially if the soil hydraulic properties (VIC), the maximum baseflow velocity (VIC) and/or soil texture (CLM) are jointly estimated with soil moisture content. The augmentation and dual estimation methods performed slightly better than RRPF and MCMCPF in the evaluation period. The differences in simulated soil moisture content between CLM and VIC were larger than variations among the DA methods. The CLM performed better than the VIC model. The strong underestimation of soil moisture content in the third layer of the VIC model is likely related to an inadequate parameterization of groundwater drainage.
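The EnKF-with-state-augmentation idea above can be illustrated with a scalar toy model: the state vector is augmented with an unknown parameter, and both are updated through the ensemble cross-covariance. The "land surface model" below is a one-line decay equation with invented numbers, not VIC or CLM.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ens, n_steps = 40, 60
k_true, sm_obs_err = 0.1, 0.02

# Toy "LSM": soil moisture relaxes toward 0.2 at rate k, plus noise.
# k plays the role of an unknown soil property to be estimated.
def model_step(sm, k):
    return sm + k * (0.2 - sm) + rng.normal(0, 0.005, sm.shape)

# Truth run and synthetic observations.
sm_t, obs = 0.45, []
for _ in range(n_steps):
    sm_t = sm_t + k_true * (0.2 - sm_t)
    obs.append(sm_t + rng.normal(0, sm_obs_err))

# Augmented ensemble: column 0 = state (soil moisture), column 1 = parameter k.
ens = np.column_stack([rng.normal(0.4, 0.05, n_ens),
                       rng.uniform(0.02, 0.3, n_ens)])

for y in obs:
    ens[:, 0] = model_step(ens[:, 0], ens[:, 1])   # forecast
    A = ens - ens.mean(axis=0)                     # anomalies
    P = A.T @ A / (n_ens - 1)                      # augmented covariance
    H = np.array([1.0, 0.0])                       # observe soil moisture only
    K = P @ H / (H @ P @ H + sm_obs_err**2)        # Kalman gain (2-vector)
    y_pert = y + rng.normal(0, sm_obs_err, n_ens)  # perturbed observations
    ens += np.outer(y_pert - ens[:, 0], K)         # joint state+parameter update

k_est = ens[:, 1].mean()
```

The parameter is never observed; it is corrected only through its sampled covariance with the observed state, which is the essence of state augmentation.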
Institute of Scientific and Technical Information of China (English)
芦会彬; 薄翠梅; 杨世品
2015-01-01
In order to solve the non-linear and high-dimensional optimization problems more effectively, an improved self-adaptive membrane computing (ISMC) optimization algorithm was proposed. The proposed ISMC algorithm applied improved self-adaptive crossover and mutation formulae that can provide appropriate crossover operator and mutation operator based on different functions of the objects and the number of iterations. The performance of ISMC was tested by the benchmark functions. The simulation results for residue hydrogenating kinetics model parameter estimation show that the proposed method is superior to the traditional intelligent algorithms in terms of convergence accuracy and stability in solving the complex parameter optimization problems.
Grasman, Johan; van Deventer, Willem B E; van Laar, Vincent
2012-12-01
Parameters of a Bertalanffy-type temperature-dependent growth model are fitted using data from a population of stone loach (Barbatula barbatula). Over two periods, in 1990 and 2010 respectively, length data for this population were collected at a lowland stream in the central part of the Netherlands. The estimation of the maximum length of a fully grown individual is given special attention, because it is in fact found as the result of an extrapolation over a large interval of the entire lifetime. It is concluded that this parameter should not be set beforehand at one fixed value for the population at that location, due to varying conditions over the years.
Kinetic Parameters Estimation of MgO-C Refractory by Shrinking Core Model
Institute of Scientific and Technical Information of China (English)
B.Hashemi; Z.A.Nemati; S.K. Sadrnezhaad; Z.A.Moghimi
2006-01-01
Kinetics of oxidation of MgO-C refractories was investigated by shrinking core modeling of the gas-solid reactions taking place during heating of the porous material to high temperatures. Samples containing 4.5-17 wt pct graphite were isothermally oxidized at 1000-1350 °C. Weight loss data were compared with predictions of the model. A mixed two-stage mechanism comprising pore diffusion plus boundary layer gas transfer was shown to generally control the oxidation rate. Pore diffusion was, however, more effective, especially at graphite contents lower than 10 wt pct under forced-convection air blowing. Model calculations showed that effective gas diffusion coefficients were in the range of 0.08 to 0.55 cm²/s. These values can be utilized to determine the corresponding tortuosity factors of 6.85 to 2.22. Activation energies related to the pore diffusion mechanism appeared to be around (46.4 ± 2) kJ/mol. The estimated intermolecular diffusion coefficients were shown to be independent of the graphite content when the percentage of graphite exceeded a marginal value of 10 wt pct.
Adaptive hybrid optimization strategy for calibration and parameter estimation of physical models
Vesselinov, Velimir V
2011-01-01
A new adaptive hybrid optimization strategy, entitled squads, is proposed for complex inverse analysis of computationally intensive physical models. The new strategy is designed to be computationally efficient and robust in identification of the global optimum (e.g. maximum or minimum value of an objective function). It integrates a global Adaptive Particle Swarm Optimization (APSO) strategy with a local Levenberg-Marquardt (LM) optimization strategy using adaptive rules based on runtime performance. The global strategy optimizes the location of a set of solutions (particles) in the parameter space. The LM strategy is applied only to a subset of the particles at different stages of the optimization based on the adaptive rules. After the LM adjustment of the subset of particle positions, the updated particles are returned to the APSO strategy. The advantages of coupling APSO and LM in the manner implemented in squads is demonstrated by comparisons of squads performance against Levenberg-Marquardt (LM), Particl...
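A loose analogue of the squads coupling can be sketched as a plain particle swarm whose global best is periodically polished by a local gradient step (standing in for the paper's Levenberg-Marquardt stage), with the refined point fed back into the swarm. The objective and all tuning values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Objective with a curved valley (Rosenbrock): global minimum 0 at (1, 1).
def f(p):
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def grad(p):
    x, y = p
    return np.array([-2 * (1 - x) - 400 * x * (y - x ** 2),
                     200 * (y - x ** 2)])

n, w, c1, c2 = 20, 0.7, 1.5, 1.5
pos = rng.uniform(-2, 2, (n, 2))
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_f = np.array([f(p) for p in pos])
g = pbest[pbest_f.argmin()].copy()

for it in range(300):
    r1, r2 = rng.uniform(size=(n, 2)), rng.uniform(size=(n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = pos + vel
    fs = np.array([f(p) for p in pos])
    better = fs < pbest_f
    pbest[better], pbest_f[better] = pos[better], fs[better]
    g = pbest[pbest_f.argmin()].copy()
    if it % 10 == 9:                       # periodic local refinement of g
        lr = 1e-3
        for _ in range(50):
            step = -lr * grad(g)
            if f(g + step) < f(g):
                g = g + step               # accept improving step
            else:
                lr *= 0.5                  # backtrack on failure
        i = pbest_f.argmax()
        if f(g) < pbest_f[i]:
            pbest[i], pbest_f[i] = g.copy(), f(g)  # return refined point to swarm

best_f = f(g)
```

The adaptive rules in squads go further (deciding at runtime which particles get local refinement), but the division of labour is the same: global exploration by the swarm, fast local convergence by the derivative-based step.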
Institute of Scientific and Technical Information of China (English)
WANG Qing, WU Kaiyuan, ZHANG Tianjiao, KONG Yi'nan, QIAN Weiqi
2012-01-01
Aerodynamic modeling and parameter estimation from quick access recorder (QAR) data is an important technical way to analyze the effects of highland weather conditions upon the aerodynamic characteristics of an airplane. It is also an essential part of flight accident analysis. The related techniques are developed in the present paper, including the geometric method for angle of attack and sideslip angle estimation, the extended Kalman filter associated with the modified Bryson-Frazier smoother (EKF-MBF) method for aerodynamic coefficient identification, the radial basis function (RBF) neural network method for aerodynamic modeling, and the Delta method for stability/control derivative estimation. As an application example, the QAR data of a civil airplane approaching a high-altitude airport are processed and the aerodynamic coefficient and derivative estimates are obtained. The estimation results are reasonable, which shows that the developed techniques are feasible. The causes for the distribution of the aerodynamic derivative estimates are analyzed, and accordingly several measures to improve estimation accuracy are put forward.
Directory of Open Access Journals (Sweden)
Jesse Whittington
Full Text Available Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative
Whittington, Jesse; Sawaya, Michael A
2015-01-01
Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration, where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process, including: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit metric-based interactive framework for identification of a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual
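A first screening step when faced with many calibration alternatives, as in Stage 1 above, is to keep only the non-dominated (Pareto-optimal) parameter sets. A minimal sketch with two invented error criteria:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical calibration alternatives: each row is one parameter set,
# scored on two error criteria to be minimised (e.g. peak-flow error and
# low-flow error; values are random placeholders).
scores = rng.uniform(0, 1, (200, 2))

def non_dominated(F):
    keep = []
    for i, fi in enumerate(F):
        # i is dominated if some row is <= on every objective
        # and strictly < on at least one.
        dominated = np.any(np.all(F <= fi, axis=1) & np.any(F < fi, axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

front = non_dominated(scores)   # indices of Pareto-optimal alternatives
```

Choosing a single calibration from `front` is exactly the decision problem HAMS addresses interactively in Stages 2 and 3.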
Load Estimation from Modal Parameters
DEFF Research Database (Denmark)
Aenlle, Manuel López; Brincker, Rune; Fernández, Pelayo Fernández;
2007-01-01
In Natural Input Modal Analysis the modal parameters are estimated just from the responses, while the loading is not recorded. However, engineers are sometimes interested in knowing some features of the loading acting on a structure. In this paper, a procedure to determine the loading from a FRF...... matrix assembled from modal parameters and the experimental responses recorded using standard sensors is presented. The method implies the inversion of the FRF which, in general, is not a full-rank matrix due to the truncation of the modal space. Furthermore, some recommendations are included to improve...
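The rank-deficiency issue above can be shown on a toy system: an FRF matrix synthesised from a truncated modal model is singular, so the load is recovered with the pseudo-inverse and only its component in the retained modal subspace is identifiable. Mode shapes, frequencies and loads below are invented for illustration.

```python
import numpy as np

# Toy 3-DOF system with only 2 modes retained (truncated modal space).
phi = np.array([[0.6, 0.4], [0.5, -0.3], [0.4, 0.5]])  # 3 sensors x 2 modes
wn = np.array([10.0, 25.0])                            # natural freqs [rad/s]
zeta = np.array([0.02, 0.03])                          # damping ratios

def frf(w):
    # Receptance from mass-normalised modes:
    # H(w) = sum_r phi_r phi_r^T / (wn_r^2 - w^2 + 2j zeta_r wn_r w)
    den = wn**2 - w**2 + 2j * zeta * wn * w
    return (phi / den) @ phi.T                          # 3x3, rank 2

w = 12.0
H = frf(w)
F_true = np.array([1.0, 0.0, 0.5])                      # harmonic load amplitudes
X = H @ F_true                                          # "measured" responses

# H is rank deficient, so invert with the Moore-Penrose pseudo-inverse.
F_est = np.linalg.pinv(H) @ X
```

`F_est` reproduces the measured responses exactly, but only equals the projection of the true load onto the span of the retained mode shapes, which is why the paper's recommendations on modal truncation matter.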
Directory of Open Access Journals (Sweden)
N.G. HOSSEIN-ZADEH
2008-12-01
Full Text Available Data on stillbirth from the Animal Breeding Center of Iran, collected from January 1990 to December 2007 and comprising 668810 Holstein calving events from 2506 herds, were analyzed. Linear and threshold animal and sire models were used to estimate genetic parameters and genetic trends for stillbirth in the first, second, and third parities. Mean incidence of stillbirth decreased from first to third parities: 23.7%, 22.1%, and 21.8%, respectively. Phenotypic rates of stillbirth decreased from 1993 to 1998 for first, second and third calvings, and then increased from 1998 to 2007 for the first three parities. Direct heritability estimates of stillbirth for parities 1, 2 and 3 ranged from 2.2 to 8.7%, 0.6 to 5.1% and 0.1 to 3.8%, respectively, and maternal heritability estimates of stillbirth for parities 1, 2 and 3 ranged from 1.4 to 6.3%, 0.5 to 4.2% and 0.08 to 2.0%, respectively, using linear and threshold animal models. The threshold sire model estimates of heritabilities for stillbirth in this study were 0.021 to 0.071, while the linear sire model estimates were from 0.003 to 0.021 over the parities. There was a slightly increasing genetic trend for stillbirth rate in parities 1 and 2 over time with the linear animal and linear sire models. There was a significant decreasing genetic trend for stillbirth rate in parities 1 and 3 over time with the threshold animal and threshold sire models, but the genetic trend for stillbirth rate in parity 2 with these models was significantly positive. The low estimates of heritability obtained in this study imply that much of the improvement in stillbirth could be attained by improvement of the production environment rather than by genetic selection.
Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R
2017-01-21
The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse- and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reductions of computation times with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
Energy Technology Data Exchange (ETDEWEB)
Laroche, E. [Centre National de la Recherche Scientifique (CNRS UMR), Lab. des Sciences de l' Image, de l' Informatique et de la Teledetection, 67 - Illkirch (France); Durieu, C.; Louis, J.P. [Centre National de la Recherche Scientifique (CNRS UPRESA 8029), Lab. d' Electricite, Signaux et Robotique, 94 - Cachan (France)
2002-07-01
Many parametric models of induction machines in sinusoidal mode, some of which account for saturation and iron losses, are available. These models must not only be identifiable, they must also provide for an accurate estimation of physical parameters. In this paper, parameter estimation errors due to measurement noise and model errors are analyzed. The most perturbed cases, such as those neglecting saturation or iron losses, are given special consideration herein. This study allows drawing conclusions as to the practical identifiability of the various models. Results are then used to design optimal experiments in which parameter estimation errors have been minimized. (authors)
Directory of Open Access Journals (Sweden)
V. M. Khade
2013-03-01
Full Text Available The ensemble adjustment Kalman filter (EAKF) is used to estimate the erodibility fraction parameter field in a coupled meteorology and dust aerosol model (Coupled Ocean/Atmosphere Mesoscale Prediction System, COAMPS) over the Sahara desert. Erodibility is often employed as the key parameter to map dust sources. It is used along with surface winds (or surface wind stress) to calculate dust emissions. Using the Sahara desert as a test bed, perfect-model Observing System Simulation Experiments (OSSEs) with 40 ensemble members, and observations of aerosol optical depth (AOD), the EAKF is shown to recover correct values of erodibility at about 80% of the points in the domain. It is found that dust advected from upstream grid points acts as noise and complicates erodibility estimation. It is also found that the rate of convergence is significantly impacted by the structure of the initial distribution of erodibility estimates; isotropic initial distributions exhibit slow convergence, while initial distributions with geographically localized structure converge more quickly. Experiments using observations of Deep Blue AOD retrievals from the MODIS satellite sensor result in erodibility estimates that are considerably lower than the values used operationally. Verification shows that the use of the tuned erodibility field results in better predictions of AOD over the western Sahara and the Arabian Peninsula.
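The EAKF update differs from the perturbed-observation EnKF: the observed variable's ensemble is deterministically shifted and rescaled to match the Bayesian posterior, and the increments are then regressed onto unobserved parameters. A scalar sketch with invented numbers (a linear proxy for AOD in place of COAMPS):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 40

# Joint ensemble: an observed quantity (AOD) and an unobserved parameter
# (erodibility) that co-varies with it.  Illustrative toy relation only.
erod = rng.uniform(0.0, 1.0, n)
aod = 0.3 + 0.5 * erod + rng.normal(0, 0.05, n)   # forward-model proxy

y_obs, obs_var = 0.62, 0.02**2                    # implies erod near 0.64

# EAKF step 1: deterministic shift-and-scale of the observed variable.
prior_mean, prior_var = aod.mean(), aod.var(ddof=1)
post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
post_mean = post_var * (prior_mean / prior_var + y_obs / obs_var)
aod_new = post_mean + np.sqrt(post_var / prior_var) * (aod - prior_mean)

# EAKF step 2: linear regression of the increments onto the parameter.
gain = np.cov(erod, aod, ddof=1)[0, 1] / prior_var
erod_new = erod + gain * (aod_new - aod)

erod_est = erod_new.mean()
```

No observation noise is added to the ensemble, so the update introduces no sampling error from perturbed observations; this is the "adjustment" in the filter's name.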
Kampf, Stephanie K.; Burges, Stephen J.
2007-12-01
We use an inverse simulation strategy to estimate soil hydraulic parameter values for an extensively measured planar hillslope plot in Seattle, Washington, United States. Both the integrated (subsurface outflow) and internal (piezometer water levels, volumetric water contents) hydrologic responses are measured at the plot. Inverse simulation scenarios are configured in the physics-based variably saturated hydrologic model, HYDRUS-2D, for a nonhysteretic drainage scenario starting from saturated initial conditions. Multiple inverse simulations calibrate the model either to single-measurement time series or to combinations of multiple types of measurements. Inverse simulations calibrated to different types of measurements give a wide range of parameter combinations, including over 2 orders of magnitude in predicted saturated hydraulic conductivity (Ks), in part because the calibrations to a single measurement type are poorly constrained and biased. Parameter values are better constrained with multiobjective inverse simulations (Ks from 30 to 55 cm h-1). All parameter combinations from inverse simulations were tested in 2-month-long continuous simulations of the plot flow response to natural precipitation and evapotranspiration. The long-term outflow response was predicted best (Nash-Sutcliffe E = 0.94) by the parameters from a multiobjective inverse simulation calibrated to both the outflow and the piezometer water levels. Overall results show that for an assumed nonhysteretic soil a physics-based hydrologic response model can be calibrated using one short-duration drainage-from-saturation event if both integrated (outflow) and internal (saturated water level) measurements are used as calibration objectives.
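The goodness-of-fit score quoted above is the Nash-Sutcliffe efficiency, which compares model residuals against the variance of the observations (E = 1 is a perfect match; E = 0 means no better than predicting the observed mean). The numbers below are illustrative, not the plot data.

```python
import numpy as np

def nse(observed, simulated):
    # Nash-Sutcliffe efficiency: 1 - SSE / variance-about-the-mean.
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = np.array([2.0, 3.5, 5.0, 4.0, 2.5])   # e.g. outflow observations
sim = np.array([2.2, 3.4, 4.6, 4.1, 2.6])   # e.g. model predictions
score = nse(obs, sim)
```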
Reji, T K; Ravi, P M; Ajith, T L; Dileep, B N; Hegde, A G; Sarkar, P K
2012-04-01
Tritium content in air moisture, soil water, rain water and plant water samples collected around the Kaiga site, India was estimated, and the scavenging ratio, wet deposition velocity and ratio of specific activities of tritium between soil water and air moisture were calculated and interpreted. The scavenging ratio was found to vary from 0.06 to 1.04 with a mean of 0.46. The wet deposition velocity of tritium observed in the present study was in the range of 3.3E-03 to 1.1E-02 m s⁻¹ with a mean of 6.6E-03 m s⁻¹. The ratio of specific activity of tritium in soil moisture to that in air moisture ranged from 0.17 to 0.95 with a mean of 0.49. The specific activity of tritium in plant water in this study varied from 73 to 310 Bq l⁻¹. The present study is very useful for understanding the processes and modelling of the transfer of tritium through the air/soil/plant system at the Kaiga site.
Cabassi, Giovanni; Cavalli, Daniele; Borrelli, Lamberto; Degano, Luigi; Marino Gallina, Pietro
2014-05-01
The use of simulation models to study the turnover of soil organic matter (SOM) can support experimental data interpretation and the optimization of manure management. ICBM/2 (Katter, 2001) is a SOM simulation model that describes the turnover of SOM with three pools: one for old humified SOM (CO) and two for added manure, CL (labile "young" C) and CS (stable "young" C). C flows out of CL and CS to be humified (fraction h) or lost as CO2-C (fraction 1-h). All pools decay with first-order kinetics with parameters kYL, kYS and kO (fig. 1). With this model of SOM turnover, during manure decomposition in the soil, only the evolved CO2 can be easily measured. Near-infrared (NIR) spectroscopy has proved to be a useful technique for soil C evaluation. Since different soil C pools are expected to have different chemical compositions, it has been shown that NIR can be used as a cheap technique to develop calibration models to estimate the amount of C belonging to different pools. The aim of this work was to compare the calibration of ICBM/2 using C respiration data or NIR predictions of the CO and CL pools. A total of six laboratory treatments were established using the same soil, corresponding to the application of five fertilisers and a control treatment: 1) control without N fertilisation; 2) ammonium sulphate; 3) anaerobically digested dairy cow slurry (Digested slurry); 4-5) the liquid (Liquid fraction) and solid (Solid fraction) fractions after mechanical separation of Digested slurry; and 6) anaerobically stored dairy cow slurry (Stored slurry). The "nursery" method was used with 12 sampling dates. NIR analyses were performed on the air-dried ground soils. Spectra were collected using an FT-NIR spectrometer. Parameter calibration was done separately for each soil using the downhill simplex method. For each manure, a C partitioning factor (Fi) was optimised. In each optimization step, measured respiration data or NIR estimates of CL and CO were used as input for the minimisation objective
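The three-pool bookkeeping described above can be sketched as a forward simulation: two "young" pools decay with first-order kinetics, a humified fraction h is routed to the old pool, and the remainder is respired as CO2. Decay rates, humification fractions and stocks below are illustrative, not the paper's calibrated values.

```python
# ICBM/2-style three-pool SOM turnover (forward Euler; illustrative values).
kL, kS, kO = 0.8, 0.3, 0.006   # first-order decay rates [1/yr]
hL, hS = 0.1, 0.3              # humification fractions routed to CO

CL, CS, CO = 0.4, 0.6, 40.0    # initial C stocks [Mg C/ha]
dt, years = 0.1, 10.0
co2 = 0.0                      # cumulative respired C (the measurable output)

for _ in range(int(years / dt)):
    dL, dS, dO = kL * CL * dt, kS * CS * dt, kO * CO * dt
    CL -= dL
    CS -= dS
    CO += hL * dL + hS * dS - dO               # humified inflow minus decay
    co2 += (1 - hL) * dL + (1 - hS) * dS + dO  # respired fractions

total_final = CL + CS + CO
```

Mass is conserved (pools plus cumulative CO2 equal the initial total), and only `co2` corresponds to a directly measurable quantity, which is why the paper explores NIR estimates of the individual pools as an additional calibration target.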
Colossi, Bibiana; Fleischmann, Ayan; Siqueira, Vinicius; Bitar, Ahmad Al; Paiva, Rodrigo; Fan, Fernando; Ruhoff, Anderson; Pontes, Paulo; Collischonn, Walter
2017-04-01
Large-scale representation of soil moisture conditions can be achieved through hydrological simulation and remote sensing techniques. However, both methodologies have several limitations, which suggests potential benefits of using the two sources of information together. Thus, this study had two main objectives: to perform a cross-validation between remotely sensed soil moisture from the SMOS (Soil Moisture and Ocean Salinity) L3 product and soil moisture simulated with the large-scale hydrological model MGB-IPH; and to evaluate the potential benefits of including remotely sensed soil moisture in model parameter estimation. The study analyzed results for the South American continent, where hydrometeorological monitoring is usually scarce. It was performed in the Paraná River Basin, an important South American basin whose extent and particular characteristics allow the representation of different climatic, geological and, consequently, hydrological conditions. Soil moisture estimated with SMOS was transformed from water content to a Soil Water Index (SWI) so that it is comparable to the saturation degree simulated with the MGB-IPH model. The multi-objective complex evolution algorithm (MOCOM-UA) was applied for automatic model calibration considering only remotely sensed soil moisture, only discharge, and both types of information together. Results show that this type of analysis can be very useful because it reveals limitations in model structure. In the case of hydrological model calibration, this approach can avoid the use of parameters out of range in an attempt to compensate for model limitations. It also indicates aspects of the model where efforts should be concentrated in order to improve the representation of hydrological or hydraulic processes. Automatic calibration gives an estimate of how different information can be applied and the quality of the results it might lead to. We emphasize that these findings can be valuable for hydrological modeling in large scale South American
Directory of Open Access Journals (Sweden)
H. Zhang
2017-09-01
Full Text Available Land surface models (LSMs) use a large cohort of parameters and state variables to simulate the water and energy balance at the soil–atmosphere interface. Many of these model parameters cannot be measured directly in the field, and require calibration against measured fluxes of carbon dioxide, sensible and/or latent heat, and/or observations of the thermal and/or moisture state of the soil. Here, we evaluate the usefulness and applicability of four different data assimilation methods for joint parameter and state estimation of the Variable Infiltration Capacity model (VIC-3L) and the Community Land Model (CLM) using a 5-month calibration (assimilation) period (March–July 2012) of areal-averaged SPADE soil moisture measurements at 5, 20, and 50 cm depths at the Rollesbroich experimental test site in the Eifel mountain range in western Germany. We used the EnKF with state augmentation or dual estimation, respectively, and the residual resampling PF with a simple, statistically deficient, or a more sophisticated, MCMC-based parameter resampling method. The performance of the calibrated LSMs was investigated using SPADE water content measurements from a 5-month evaluation period (August–December 2012). As expected, all DA methods enhance the ability of the VIC and CLM models to describe spatiotemporal patterns of moisture storage within the vadose zone of the Rollesbroich site, particularly if the maximum baseflow velocity (VIC) or the fractions of sand, clay, and organic matter of each layer (CLM) are estimated jointly with the model states of each soil layer. The differences between the soil moisture simulations of VIC-3L and CLM are much larger than the discrepancies among the four data assimilation methods. The EnKF with state augmentation or dual estimation yields the best performance of VIC-3L and CLM during the calibration and evaluation periods, yet results are in close agreement with the PF using MCMC resampling. Overall, CLM demonstrated the
Uncertainty Analysis in the Noise Parameters Estimation
Directory of Open Access Journals (Sweden)
Pawlik P.
2012-07-01
Full Text Available A new approach to uncertainty estimation in the modelling of acoustic hazards by means of interval arithmetic is presented in the paper. In noise parameter estimation, the selection of parameters specifying acoustic wave propagation in an open space, as well as of parameters which are required in the form of average values, often constitutes a difficult problem. In such cases, it is necessary to determine the variance and, strictly related to it, the uncertainty of the model parameters. The application of the interval arithmetic formalism allows the input data uncertainties to be estimated without the need to determine their probability distributions, which is required by other methods of uncertainty assessment. A further problem in acoustic hazard estimation is the lack of exact knowledge of the input parameters. Accordingly, the modelling uncertainty was analysed as a function of the inaccuracy of the model parameters. To achieve this aim, the interval arithmetic formalism, representing a value and its uncertainty in the form of an interval, was applied. The proposed approach is illustrated by the application of the Dutch RMR SRM method, recommended by European Union Directive 2002/49/EC, for railway noise modelling.
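The interval formalism described above can be sketched in a few lines: each uncertain input is carried as [lo, hi] bounds and arithmetic operations propagate those bounds, with no probability distribution required. The noise-level calculation at the end uses invented numbers purely for illustration.

```python
# Minimal interval arithmetic: a value and its uncertainty as [lo, hi].
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Subtraction widens: lo - other.hi is the worst low case.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def width(self):
        return self.hi - self.lo   # the propagated uncertainty

# Toy attenuation calculation with uncertain inputs (illustrative numbers):
source = Interval(85.0, 90.0)        # dB, uncertain source level
attenuation = Interval(12.0, 15.0)   # dB, uncertain propagation loss
level = source - attenuation         # guaranteed bounds at the receiver
```

The resulting `level.width()` bounds the output uncertainty for any distribution of the inputs within their intervals, which is the guarantee the paper exploits.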
White, Corey N; Servant, Mathieu; Logan, Gordon D
2017-03-29
Researchers and clinicians are interested in estimating individual differences in the ability to process conflicting information. Conflict processing is typically assessed by comparing behavioral measures like RTs or error rates from conflict tasks. However, these measures are hard to interpret because they can be influenced by additional processes like response caution or bias. This limitation can be circumvented by employing cognitive models to decompose behavioral data into components of underlying decision processes, providing better specificity for investigating individual differences. A new class of drift-diffusion models has been developed for conflict tasks, presenting a potential tool to improve analysis of individual differences in conflict processing. However, measures from these models have not been validated for use in experiments with limited data collection. The present study assessed the validity of these models with a parameter-recovery study to determine whether and under what circumstances the models provide valid measures of cognitive processing. Three models were tested: the dual-stage two-phase model (Hübner, Steinhauser, & Lehle, Psychological Review, 117(3), 759-784, 2010), the shrinking spotlight model (White, Ratcliff, & Starns, Cognitive Psychology, 63(4), 210-238, 2011), and the diffusion model for conflict tasks (Ulrich, Schröter, Leuthold, & Birngruber, Cognitive Psychology, 78, 148-174, 2015). The validity of the model parameters was assessed using different methods of fitting the data and different numbers of trials. The results show that each model has limitations in recovering valid parameters, but they can be mitigated by adding constraints to the model. Practical recommendations are provided for when and how each model can be used to analyze data and provide measures of processing in conflict tasks.
Zhang, Feng
2017-03-01
Part 1 of this paper presented an improved shale rock physics model to enable the prediction of anisotropy parameters from both vertical and horizontal well logs. The predicted elastic constants were demonstrated using the published laboratory measurements of a Greenhorn shale in Part 1, and are more accurate than the estimations in the existing literature. In this paper, this model is applied to the well log data of an Upper Triassic shale formation to predict the VTI anisotropy parameters, which are usually difficult to measure directly in the borehole. The effective elastic constants are calculated for solid clay, aligned clay-fluid-kerogen, a rotated clay-fluid-kerogen mixture and shale step by step using different effective medium theories. The input to this workflow includes the volume fractions of minerals, kerogen and two different pore spaces. Two parameters (the lamination index and pore aspect ratio) need to be inverted simultaneously by fitting the vertical or horizontal logs. An estimation of the anisotropy parameters from the vertical well logs uses a least-squares inversion in terms of C33 and C44. The result is demonstrated by calibration with the seismic amplitude versus angle (AVA) response. Correlations are found between the anisotropy parameters (ε and δ) and rock properties (pore aspect ratio, lamination index, clay content and total porosity). In the horizontal well, the anisotropy parameters are predicted by minimizing the objective function in terms of C11 and C44. The overestimated qP-wave velocity of clay-rich shales in the horizontal well is anisotropy-corrected and thus provides a more appropriate Vp–Vs relation. The impact of strong VTI anisotropy on Poisson's ratio is also overcome by the anisotropy correction, thus improving the brittleness characterization of shale reservoirs.
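The link between the elastic constants Cij and the anisotropy parameters ε and δ mentioned above is given by the standard Thomsen (1986) definitions for a VTI medium. A short sketch; the stiffness values are illustrative, of the order reported in the literature for the Greenhorn shale, not the values from this paper's well logs:

```python
def thomsen_vti(c11, c33, c44, c66, c13):
    """Thomsen (1986) anisotropy parameters of a VTI medium from its
    five independent elastic constants (GPa)."""
    eps = (c11 - c33) / (2.0 * c33)                        # P-wave anisotropy
    gam = (c66 - c44) / (2.0 * c44)                        # SH-wave anisotropy
    delta = ((c13 + c44) ** 2 - (c33 - c44) ** 2) / (
        2.0 * c33 * (c33 - c44))                           # near-vertical term
    return eps, delta, gam

# Illustrative stiffnesses (GPa), of the order reported for Greenhorn shale
eps, delta, gam = thomsen_vti(c11=34.3, c33=22.7, c44=5.4,
                              c66=10.6, c13=10.7)
```

Inverting C33 and C44 from vertical logs, as in the paper, thus fixes the denominators of these expressions, after which ε and δ follow from the remaining constants.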
El Gharamti, Mohamad
2014-02-01
The accuracy of groundwater flow and transport model predictions highly depends on our knowledge of subsurface physical parameters. Assimilation of contaminant concentration data from shallow dug wells could help improve model behavior, eventually resulting in better forecasts. In this paper, we propose a joint state-parameter estimation scheme which efficiently integrates a low-rank extended Kalman filtering technique, namely the Singular Evolutive Extended Kalman (SEEK) filter, with the prominent complex-step method (CSM). The SEEK filter avoids the prohibitive computational burden of the extended Kalman filter by updating the forecast along the directions of error growth only, called filter correction directions. CSM is used within the SEEK filter to efficiently compute model derivatives with respect to the state and parameters along the filter correction directions. CSM is derived using complex Taylor expansion and is second-order accurate. It is proven to guarantee accurate gradient computations with zero numerical round-off errors, but requires complexifying the numerical code. We perform twin experiments to test the performance of the CSM-based SEEK for estimating the state and parameters of a subsurface contaminant transport model. We compare the efficiency and the accuracy of the proposed scheme with two standard finite difference-based SEEK filters as well as with the ensemble Kalman filter (EnKF). Assimilation results suggest that the use of the CSM in the context of the SEEK filter may provide up to 80% more accurate solutions when compared to standard finite difference schemes and is competitive with the EnKF, even providing more accurate results in certain situations. We analyze the results based on two different observation strategies. We also discuss the complexification of the numerical code and show that this could be efficiently implemented in the context of subsurface flow models. © 2013 Elsevier B.V.
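The complex-step method itself fits in one line: f'(x) ≈ Im(f(x + ih))/h. Because no difference of nearby values is taken, the step h can be made tiny without subtractive cancellation. A generic illustration (the test function is a standard benchmark, not the paper's transport model):

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """Complex-step approximation of f'(x): second-order accurate and
    free of subtractive cancellation, provided f is implemented with
    complex-safe operations (hence the 'complexified code' requirement)."""
    return np.imag(f(x + 1j * h)) / h

# Benchmark function often used to demonstrate the method
f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
dfdx = complex_step_derivative(f, 1.5)
```

A forward difference with the same h would return exactly zero (the perturbation vanishes in floating point), which is why the complex step is attractive for computing the SEEK correction-direction derivatives.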
Silvestro, F.; Gabellani, S.; Rudari, R.; Delogu, F.; Laiolo, P.; Boni, G.
2015-04-01
During the last decade the opportunity and usefulness of using remote-sensing data in hydrology, hydrometeorology and geomorphology has become ever more evident. Satellite-based products often allow for the advantage of observing hydrologic variables in a distributed way, offering a different view with respect to traditional observations that can help with understanding and modeling the hydrological cycle. Moreover, remote-sensing data are fundamental in scarce data environments. The use of satellite-derived digital elevation models (DEMs), which are now globally available at 30 m resolution (e.g., from the Shuttle Radar Topography Mission, SRTM), has become standard practice in hydrologic model implementation, but other types of satellite-derived data are still underutilized. As a consequence there is the need for developing and testing techniques that allow the opportunities given by remote-sensing data to be exploited, parameterizing hydrological models and improving their calibration. In this work, Meteosat Second Generation land-surface temperature (LST) estimates and surface soil moisture (SSM), available from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) H-SAF, are used together with streamflow observations to calibrate the Continuum hydrological model that computes such state variables in a prognostic mode. The first part of the work aims at proving that satellite observations can be exploited to reduce uncertainties in parameter calibration by reducing the parameter equifinality that can become an issue in forecast mode. In the second part, four parameter estimation strategies are implemented and tested in a comparative mode: (i) a multi-objective approach that includes both satellite and ground observations, which is an attempt to use different sources of data to add constraints to the parameters; (ii and iii) two approaches solely based on remotely sensed data that reproduce the case of a scarce data
Ghotbi, Saba; Sotoudeheian, Saeed; Arhami, Mohammad
2016-09-01
Satellite remote sensing products of AOD from MODIS, along with appropriate meteorological parameters, were used to develop statistical models and estimate ground-level PM10. Most previous studies obtained meteorological data from synoptic weather stations, with rather sparse spatial distribution, and used it along with the 10 km AOD product to develop statistical models applicable to PM variations at regional scale (resolution of ≥10 km). In the current study, meteorological parameters were simulated at 3 km resolution using the WRF model and used along with the rather new 3 km AOD product (launched in 2014). The resulting PM statistical models were assessed for a polluted and largely variable urban area, Tehran, Iran. Despite the critical particulate pollution problem, very few PM studies have been conducted in this area. The issue of rather poor direct PM-AOD associations existed, due to factors such as variations in the particles' optical properties, in addition to the bright-background issue for satellite data, as the studied area is located in the semi-arid areas of the Middle East. The statistical approach of linear mixed effects (LME) was used, and three types of statistical models, including a single-variable LME model (using AOD as the independent variable) and multiple-variable LME models using meteorological data from two sources, the WRF model and synoptic stations, were examined. Meteorological simulations were performed using a multiscale approach and creating appropriate physics for the studied region, and the results showed rather good agreement with recordings of the synoptic stations. The single-variable LME model was able to explain about 61%-73% of daily PM10 variations, reflecting a rather acceptable performance. Statistical model performance improved through using the multivariable LME and incorporating meteorological data as auxiliary variables, particularly by using fine-resolution outputs from WRF (R2 = 0.73-0.81). In addition, rather fine resolution for PM
Jalayer, Fatemeh; Ebrahimian, Hossein
2014-05-01
Introduction The first few days elapsed after the occurrence of a strong earthquake and in the presence of an ongoing aftershock sequence are quite critical for emergency decision-making purposes. Epidemic Type Aftershock Sequence (ETAS) models are used frequently for forecasting the spatio-temporal evolution of seismicity in the short-term (Ogata, 1988). The ETAS models are epidemic stochastic point process models in which every earthquake is a potential triggering event for subsequent earthquakes. The ETAS model parameters are usually calibrated a priori and based on a set of events that do not belong to the on-going seismic sequence (Marzocchi and Lombardi 2009). However, adaptive model parameter estimation, based on the events in the on-going sequence, may have several advantages such as, tuning the model to the specific sequence characteristics, and capturing possible variations in time of the model parameters. Simulation-based methods can be employed in order to provide a robust estimate for the spatio-temporal seismicity forecasts in a prescribed forecasting time interval (i.e., a day) within a post-main shock environment. This robust estimate takes into account the uncertainty in the model parameters expressed as the posterior joint probability distribution for the model parameters conditioned on the events that have already occurred (i.e., before the beginning of the forecasting interval) in the on-going seismic sequence. The Markov Chain Monte Carlo simulation scheme is used herein in order to sample directly from the posterior probability distribution for ETAS model parameters. Moreover, the sequence of events that is going to occur during the forecasting interval (and hence affecting the seismicity in an epidemic type model like ETAS) is also generated through a stochastic procedure. The procedure leads to two spatio-temporal outcomes: (1) the probability distribution for the forecasted number of events, and (2) the uncertainty in estimating the
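The full ETAS likelihood is considerably richer, but the Markov Chain Monte Carlo step described above can be illustrated with a deliberately simplified stand-in: sampling the posterior of a single event-rate parameter given an observed event count, via a random-walk Metropolis scheme on the log of the rate. Everything below (homogeneous Poisson likelihood, flat prior on the rate, the specific counts) is an assumed toy, not the authors' model:

```python
import math
import random

random.seed(42)

def log_posterior(lam, n_events, t_obs):
    """Homogeneous-Poisson stand-in for the (far richer) ETAS
    likelihood: n_events observed over t_obs days, flat prior
    on the rate lam."""
    if lam <= 0:
        return -math.inf
    return n_events * math.log(lam) - lam * t_obs

def metropolis(n_events=120, t_obs=30.0, n_iter=20000, step=0.1):
    lam = n_events / t_obs                 # start at the MLE
    samples = []
    for _ in range(n_iter):
        prop = lam * math.exp(step * random.gauss(0, 1))  # log-walk
        # log(prop/lam) is the Jacobian of the multiplicative proposal
        log_a = (log_posterior(prop, n_events, t_obs)
                 - log_posterior(lam, n_events, t_obs)
                 + math.log(prop / lam))
        if math.log(random.random()) < log_a:
            lam = prop
        samples.append(lam)
    return samples

samples = metropolis()
posterior_mean = sum(samples[5000:]) / len(samples[5000:])
```

In the paper's setting the scalar rate is replaced by the vector of ETAS parameters and the likelihood by the epidemic point-process likelihood conditioned on the ongoing sequence, but the sampling mechanics are the same.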
Institute of Scientific and Technical Information of China (English)
LAI Rui-xun; FANG Hong-wei; HE Guo-jian; YU Xin; YANG Ming; WANG Ming
2013-01-01
In this paper, both the state variables and the parameters of a one-dimensional open channel model are estimated within an Ensemble Kalman Filter (EnKF) framework. Compared with the observations, the prediction accuracy of water level and discharge is improved while the parameters of the model are identified simultaneously. Following the principles of the EnKF, a state-space description of the Saint-Venant equations is constructed by perturbing the measurements with a Gaussian error distribution. At the same time the roughness, one of the key parameters of a one-dimensional open channel, is also treated as a state variable so that its value is identified dynamically. The updated state variables and parameters are then used as the initial values of the next time step to continue the assimilation process. The usefulness and capability of the dual EnKF are demonstrated on the lower Yellow River during the water-sediment regulation of 2009. In the optimization process, the errors between the predictions and the observations are analyzed, and the rationale of the inverse roughness is discussed. It is believed that (1) the flexible approach of the dual EnKF can improve the accuracy of predicting water level and discharge, (2) it provides a probabilistic way to identify model error, which is feasible to implement but hard to handle in other filter systems, and (3) it is practicable for river engineering and management.
A time-series analysis framework for the flood-wave method to estimate groundwater model parameters
Obergfell, Christophe; Bakker, Mark; Maas, Kees
2016-11-01
The flood-wave method is implemented within the framework of time-series analysis to estimate aquifer parameters for use in a groundwater model. The resulting extended flood-wave method is applicable to situations where groundwater fluctuations are affected significantly by time-varying precipitation and evaporation. Response functions for time-series analysis are generated with an analytic groundwater model describing stream-aquifer interaction. Analytical response functions play the same role as the well function in a pumping test, which is to translate observed head variations into groundwater model parameters by means of a parsimonious model equation. An important difference compared with the traditional flood-wave method and pumping tests is that aquifer parameters are inferred from the combined effects of precipitation, evaporation, and stream stage fluctuations. Naturally occurring fluctuations are separated into contributions from different stresses. The proposed method is illustrated with data collected near a lowland river in the Netherlands. Special emphasis is put on the interpretation of the streambed resistance. The resistance of the streambed is the result of stream-line contraction rather than a semi-pervious streambed, a conclusion reached through comparison with the head loss calculated with an analytical two-dimensional cross-section model.
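The decomposition described above models the head series as a superposition of each stress convolved with its own response function. A minimal sketch, using simple exponential-decay impulse responses as a stand-in for the paper's analytic stream-aquifer response functions (all gains, time scales, and the shared decay time are assumed toy values):

```python
import numpy as np

def simulated_head(precip, evap, stage, theta, dt=1.0):
    """Head fluctuation as a superposition of stress contributions,
    each the convolution of a stress series with an exponential
    impulse response. theta = (gain_p, gain_e, gain_s, tau_days)."""
    gp, ge, gs, tau = theta
    t = np.arange(len(precip)) * dt
    irf = np.exp(-t / tau) / tau * dt            # unit-area response
    conv = lambda sig: np.convolve(sig, irf)[: len(sig)]
    # Precipitation raises the head, evaporation lowers it, and the
    # stream stage contributes with its own (smaller) gain.
    return gp * conv(precip) - ge * conv(evap) + gs * conv(stage)

rng = np.random.default_rng(1)
p = rng.exponential(2.0, 365)                     # daily precipitation
e = np.full(365, 1.5)                             # daily evaporation
s = np.sin(2 * np.pi * np.arange(365) / 365)      # stage deviation
h = simulated_head(p, e, s, theta=(0.8, 0.6, 0.3, 20.0))
```

Fitting theta to an observed head series by nonlinear least squares then yields the aquifer parameters, exactly in the spirit of translating head variations into model parameters via a parsimonious equation.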
Directory of Open Access Journals (Sweden)
Maryam Baharizadeh
2012-10-01
Full Text Available Buffalo milk yield records were obtained from monthly records of the Animal Breeding Organization of Iran from 1992 to 2009 in 33 herds raised in the Khuzestan province. Variance components, heritability and repeatability were estimated for milk yield, fat yield, fat percentage, protein yield and protein percentage. These estimates were carried out through single trait animal model using DFREML program. Herd-year-season was considered as fixed effect in the model. For milk production traits, age at calving was fitted as a covariate. The additive genetic and permanent environmental effects were also included in the model. The mean values (±SD) for milk yield, fat yield, fat percentage, protein yield and protein percentage were 2285.08±762.47 kg, 144.35±54.86 kg, 6.25±0.90%, 97.30±26.73 kg and 4.19±0.27%, respectively. The heritability (±SE) of milk yield, fat yield, fat percentage, protein yield and protein percentage were 0.093±0.08, 0.054±0.06, 0.043±0.05, 0.093±0.16 and zero, respectively. These estimates for repeatability were 0.272, 0.132, 0.043, 0.674 and 0.0002, respectively. Lower values of genetic parameter estimates require more data and reliable pedigree records.
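For a repeatability animal model like the one above, heritability and repeatability follow directly from the variance components. A small sketch; the component values are assumptions chosen only so the ratios reproduce the milk-yield figures reported in the abstract (h² = 0.093, repeatability = 0.272):

```python
def genetic_parameters(var_additive, var_perm_env, var_residual):
    """Heritability h2 and repeatability r from the variance
    components of a repeatability animal model:
      h2 = sigma2_a / sigma2_total
      r  = (sigma2_a + sigma2_pe) / sigma2_total"""
    total = var_additive + var_perm_env + var_residual
    h2 = var_additive / total
    rep = (var_additive + var_perm_env) / total
    return h2, rep

# Illustrative components on a standardized scale (total = 1.0)
h2, rep = genetic_parameters(var_additive=0.093,
                             var_perm_env=0.179,
                             var_residual=0.728)
```

Repeatability is always at least as large as heritability, since the permanent environmental variance is added to the numerator.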
Directory of Open Access Journals (Sweden)
Nathaniel S. Tarkaa
2015-10-01
Full Text Available The accurate estimation of the traffic parameters, especially traffic intensity that the network must support is a key criterion in the development of an effective Next Generation Network (NGN model. In this paper, starting from data collection involving users of telecommunication services in Benue State, Nigeria, the call rate, data transaction rate, call holding times/data transaction time and traffic intensity have been estimated at the 23 local government headquarters of the State. The existing network in Benue State is GSM based and the services provided are Voice, SMS and Internet. A marketing research was first conducted to determine the level of services usage by the amount of money spent by the high, middle and low income earners. Then using the prevailing tariff rates, the amount of data transferred in bits for the three classes of services were determined. The traffic model used is based on a probabilistic model of events initiated by calls and transactions of NGN services. The model is used to estimate the symmetrical and asymmetrical traffic intensities separately at each of the 23 headquarters representing the network nodes. Generally, the results of the study show that a developing country is characterized by a prevalence of voice and SMS services, and limited Internet services; large number of low income earners; and low rates of call/data transactions and traffic intensities. The study demonstrates a method to estimate traffic parameters at different network nodes starting from subscriber field studies. The use of the method will facilitate the preparation of both business and technical plans for effective and efficient planning and dimensioning of NGN networks in a developing economy.
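The traffic intensity estimated at each node follows the classical teletraffic relation A = λ·h (arrival rate times mean holding time, in consistent units, giving Erlangs). A minimal sketch with purely illustrative numbers, not the Benue State survey data:

```python
def traffic_intensity_erlangs(calls_per_hour, mean_holding_s):
    """Offered voice traffic A = lambda * h in Erlangs: busy-hour
    call arrival rate times mean holding time (converted to hours)."""
    return calls_per_hour * (mean_holding_s / 3600.0)

# Illustrative node: 1800 busy-hour call attempts, 90 s mean holding time
A = traffic_intensity_erlangs(1800, 90)
```

The same relation applies per service class (voice, SMS, data sessions), and summing the per-class intensities gives the aggregate load used to dimension a network node.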
Directory of Open Access Journals (Sweden)
Yongping Song
2014-09-01
Full Text Available Through-the-wall imaging (TWI) radar has been given increasing attention in recent years. However, prior knowledge about environmental parameters, such as wall thickness and dielectric constant, and the standoff distance between an array and a wall, is generally unavailable in real applications. Thus, targets behind the wall suffer from defocusing and displacement under conventional imaging operations. To solve this problem, in this paper, we first set up an extended imaging model of a virtual aperture obtained by a multiple-input-multiple-output array, which considers the array position relative to the wall and thus is more applicable to real situations. Then, we present a method to estimate the environmental parameters to calibrate the TWI, without multiple measurements or dominant scatterers behind the wall to assist. Simulation and field experiments were performed to illustrate the validity of the proposed imaging model and the environmental parameter estimation method.
Silvestro, F.; Gabellani, S.; Rudari, R.; Delogu, F.; Laiolo, P.; Boni, G.
2014-06-01
During the last decade the opportunity and usefulness of using remote sensing data in hydrology, hydrometeorology and geomorphology has become ever more evident. Satellite-based products often provide the advantage of observing hydrologic variables in a distributed way while offering a different view that can help to understand and model the hydrological cycle. Moreover, remote sensing data are fundamental in scarce data environments. The use of satellite-derived DTMs, which are globally available (e.g. from SRTM as used in this work), has become standard practice in hydrologic model implementation, but other types of satellite-derived data are still underutilized. In this work, Meteosat Second Generation Land Surface Temperature (LST) estimates and Surface Soil Moisture (SSM) available from EUMETSAT H-SAF are used to calibrate the Continuum hydrological model that computes such state variables in a prognostic mode. This work aims at proving that satellite observations dramatically reduce uncertainties in parameter calibration by reducing their equifinality. Two parameter estimation strategies are implemented and tested: a multi-objective approach that includes ground observations and one solely based on remotely sensed data. Two Italian catchments are used as the test bed to verify the model's capability to reproduce long-term (multi-year) simulations.
Parameter estimation in stochastic differential equations
Bishwal, Jaya P N
2008-01-01
Parameter estimation in stochastic differential equations and stochastic partial differential equations is the science, art and technology of modelling complex phenomena and making beautiful decisions. The subject has attracted researchers from several areas of mathematics and other related fields like economics and finance. This volume presents the estimation of the unknown parameters in the corresponding continuous models based on continuous and discrete observations and examines extensively maximum likelihood, minimum contrast and Bayesian methods. Useful because of the current availability of high frequency data is the study of refined asymptotic properties of several estimators when the observation time length is large and the observation time interval is small. Also space time white noise driven models, useful for spatial data, and more sophisticated non-Markovian and non-semimartingale models like fractional diffusions that model the long memory phenomena are examined in this volume.
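A concrete instance of estimating SDE parameters from discrete observations is the Ornstein-Uhlenbeck process dX = θ(μ − X)dt + σ dW. Under the Euler discretization, the drift parameters can be recovered by regressing the increments on the state, which is the least-squares (pseudo-likelihood) estimator. The parameter values and sample sizes below are an illustrative toy:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_ou(theta, mu, sigma, x0, dt, n):
    """Euler-Maruyama path of dX = theta*(mu - X) dt + sigma dW."""
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = (x[i] + theta * (mu - x[i]) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

def estimate_ou_drift(x, dt):
    """Least-squares estimates of (theta, mu): regress dX/dt on X,
    so dX/dt ~ -theta * X + theta * mu."""
    slope, intercept = np.polyfit(x[:-1], np.diff(x) / dt, 1)
    theta_hat = -slope
    mu_hat = intercept / theta_hat
    return theta_hat, mu_hat

x = simulate_ou(theta=1.5, mu=0.3, sigma=0.2, x0=0.0, dt=0.01, n=50000)
theta_hat, mu_hat = estimate_ou_drift(x, 0.01)
```

As the book discusses, the quality of such estimators depends on both the observation interval (small dt controls discretization bias) and the total observation time (large T controls estimator variance).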
Energy Technology Data Exchange (ETDEWEB)
Lee, Jared A.; Hacker, Joshua P.; Delle Monache, Luca; Kosović, Branko; Clifton, Andrew; Vandenberghe, Francois; Rodrigo, Javier Sanz
2016-12-14
A current barrier to greater deployment of offshore wind turbines is the poor quality of numerical weather prediction model wind and turbulence forecasts over open ocean. The bulk of development for atmospheric boundary layer (ABL) parameterization schemes has focused on land, partly due to a scarcity of observations over ocean. The 100-m FINO1 tower in the North Sea is one of the few sources worldwide of atmospheric profile observations from the sea surface to turbine hub height. These observations are crucial to developing a better understanding and modeling of physical processes in the marine ABL. In this study, we use the WRF single column model (SCM), coupled with an ensemble Kalman filter from the Data Assimilation Research Testbed (DART), to create 100-member ensembles at the FINO1 location. The goal of this study is to determine the extent to which model parameter estimation can improve offshore wind forecasts.
Vavoulis, Dimitrios V; Straub, Volko A; Aston, John A D; Feng, Jianfeng
2012-01-01
Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm), often in combination with a local search method (such as gradient descent) in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied on a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents, measurement and intrinsic noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy, we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function and it was straightforward to apply on compartmental models and multiple data sets. Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a potentially useful tool in
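The self-organizing state-space idea can be shown compactly: each particle carries its own parameter sample alongside the hidden state, and resampling lets parameter values that explain the data survive. The sketch below applies it to a linear AR(1) toy observed in noise, not to the Hodgkin-Huxley models of the paper; all noise levels and the jitter scale are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def self_organizing_pf(y, n_part=2000, proc_std=0.1, obs_std=0.2,
                       jitter=0.005):
    """Bootstrap particle filter on the augmented state (x, a):
    parameter samples survive resampling in proportion to how well
    they explain the observations (Kitagawa's self-organizing
    state-space, on a linear toy model)."""
    x = rng.standard_normal(n_part)
    a = rng.uniform(0.0, 1.0, n_part)        # unknown AR coefficient
    for yt in y:
        a = a + jitter * rng.standard_normal(n_part)  # parameter jitter
        x = a * x + proc_std * rng.standard_normal(n_part)
        w = np.exp(-0.5 * ((yt - x) / obs_std) ** 2)  # Gaussian likelihood
        w /= w.sum()
        idx = rng.choice(n_part, n_part, p=w)         # resample
        x, a = x[idx], a[idx]
    return a.mean()

# Synthetic data with true a = 0.8
true_a, xt, y = 0.8, 0.0, []
for _ in range(300):
    xt = true_a * xt + 0.1 * rng.standard_normal()
    y.append(xt + 0.2 * rng.standard_normal())
a_hat = self_organizing_pf(np.array(y))
```

The small parameter jitter plays the role of an artificial parameter dynamic: without it, resampling would collapse the parameter ensemble onto a few initial draws.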
Fitting the Generic Multi-Parameter Crossover Model: Towards Realistic Scaling Estimates
Struzik, Z.R.; Dooijes, E.H.; Groen, F.C.A.
1997-01-01
The primary concern of fractal metrology is providing a means of reliable estimation of scaling exponents such as fractal dimension, in order to prove the null hypothesis that a particular object can be regarded as fractal. In the particular context to be discussed in this contribution, the central
Estimating FttH and FttCurb Deployment Costs Using Geometric Models with Enhanced Parameters
Phillipson, F.
2015-01-01
The need for higher bandwidth by customers urges the network providers to upgrade their networks. Fibre to the home or Fibre to the curb are two of the scenarios that are considered. To make a proper assessment on the economic viability, a good estimation of the roll-out costs of the networks are im
Biondi, Daniela; De Luca, Davide Luciano
2015-04-01
The use of rainfall-runoff models represents an alternative to statistical approaches (such as at-site or regional flood frequency analysis) for design flood estimation, and constitutes an answer to the increasing need for synthetic design hydrographs (SDHs) associated to a specific return period. However, the lack of streamflow observations and the consequent high uncertainty associated with parameter estimation, usually pose serious limitations to the use of process-based approaches in ungauged catchments, which in contrast represent the majority in practical applications. This work presents the application of a Bayesian procedure that, for a predefined rainfall-runoff model, allows for the assessment of posterior parameters distribution, using the limited and uncertain information available for the response of an ungauged catchment (Bulygina et al. 2009; 2011). The use of regional estimates of river flow statistics, interpreted as hydrological signatures that measure theoretically relevant system process behaviours (Gupta et al. 2008), within this framework represents a valuable option and has shown significant developments in recent literature to constrain the plausible model response and to reduce the uncertainty in ungauged basins. In this study we rely on the first three L-moments of annual streamflow maxima, for which regressions are available from previous studies (Biondi et al. 2012; Laio et al. 2011). The methodology was carried out for a catchment located in southern Italy, and used within a Monte Carlo scheme (MCs) considering both event-based and continuous simulation approaches for design flood estimation. The applied procedure offers promising perspectives to perform model calibration and uncertainty analysis in ungauged basins; moreover, in the context of design flood estimation, process-based methods coupled with MCs approach have the advantage of providing simulated floods uncertainty analysis that represents an asset in risk-based decision
Multi-Parameter Estimation for Orthorhombic Media
Masmoudi, Nabil
2015-08-19
Building reliable anisotropy models is crucial in seismic modeling, imaging and full waveform inversion. However, estimating anisotropy parameters is often hampered by the trade-off between inhomogeneity and anisotropy. For instance, one way to estimate the anisotropy parameters is to relate them analytically to traveltimes, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2 and a parameter Δγ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. This approach has two main advantages: on the one hand, it provides a computationally efficient tool to solve the orthorhombic eikonal equation; on the other hand, it provides a mechanism to scan for the best-fitting anisotropy parameters without the need for repetitive modeling of traveltimes, because the coefficients of the traveltime expansion are independent of the perturbed parameters. Furthermore, the coefficients of the traveltime expansion provide insights into the sensitivity of the traveltime with respect to the perturbed parameters. We show the accuracy of the traveltime approximations as well as an approach for multi-parameter scanning in orthorhombic media.
Parameter uncertainty analysis for the annual phosphorus loss estimator (APLE) model
Technical abstract: Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analys...
Durner, Wolfgang; Schelle, Henrike; Schlüter, Steffen; Vogel, Hans-Jörg; Ippisch, Olaf; Vanderborght, Jan
2013-04-01
Soils are structured on multiple spatial scales, originating from inhomogeneities of the parent material, pedogenesis, soil organisms, plant roots, or tillage. This leads to heterogeneities that affect the local hydraulic properties and thus govern the flow behavior of water in soil. To assess the impact of individual or combined structural components on the water dynamics within a soil, complex 2D and 3D virtual realities, representing cultivated soils with spatial heterogeneity on multiple scales were constructed with a high spatial resolution by the interdisciplinary research group INVEST (virtual institute of the Helmholtz Association). At these systems, numerical simulations of water dynamics under different boundary conditions were performed. From the simulation results, datasets of water contents and matric heads, as are recorded in typical field campaigns were extracted. With these data, effective soil hydraulic properties were estimated by 1D inverse simulation, which were then used to predict the water balance. The results showed that measurements, particularly those of water contents, depended strongly on the measuring position and hence led to different estimates of the soil hydraulic properties. Nevertheless, in most cases, the average of the predicted water balances obtained from the 1D simulations and the estimated effective soil hydraulic properties agreed very well with those attained from the 2D systems. In contrast, when using data from only one observation profile, the calculation of the water balance was very uncertain.
Botto, Anna; Camporese, Matteo
2017-04-01
Hydrological models allow scientists to predict the response of water systems under varying forcing conditions. In particular, many physically-based integrated models were recently developed in order to understand the fundamental hydrological processes occurring at the catchment scale. However, the use of this class of hydrological models is still relatively limited, as their prediction skills heavily depend on reliable parameter estimation, an operation that is never trivial, being normally affected by large uncertainty and requiring huge computational effort. The objective of this work is to test the potential of data assimilation to be used as an inverse modeling procedure for the broad class of integrated hydrological models. To pursue this goal, a Bayesian data assimilation (DA) algorithm based on a Monte Carlo approach, namely the ensemble Kalman filter (EnKF), is combined with the CATchment HYdrology (CATHY) model. In this approach, input variables (atmospheric forcing, soil parameters, initial conditions) are statistically perturbed providing an ensemble of realizations aimed at taking into account the uncertainty involved in the process. Each realization is propagated forward by the CATHY hydrological model within a parallel R framework, developed to reduce the computational effort. When measurements are available, the EnKF is used to update both the system state and soil parameters. In particular, four different assimilation scenarios are applied to test the capability of the modeling framework: first only pressure head or water content are assimilated, then, the combination of both, and finally both pressure head and water content together with the subsurface outflow. To demonstrate the effectiveness of the approach in a real-world scenario, an artificial hillslope was designed and built to provide real measurements for the DA analyses. The experimental facility, located in the Department of Civil, Environmental and Architectural Engineering of the
Statistics of Parameter Estimates: A Concrete Example
Aguilar, Oscar
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise levels, models, or prior knowledge. But what can we say about the validity of such estimates, and the influence of these assumptions? This paper is concerned with methods to address these questions, and for didactic purposes it is written in the context of a concrete nonlinear parameter estimation problem. We will use the results of a physical experiment conducted by Allmaras et al. at Texas A&M University [M. Allmaras et al., SIAM Rev., 55 (2013), pp. 149-167] to illustrate the importance of validation procedures for statistical parameter estimation. We describe statistical methods and data analysis tools to check the choices of likelihood and prior distributions, and provide examples of how to compare Bayesian results with those obtained by non-Bayesian methods based on different types of assumptions. We explain how different statistical methods can be used in complementary ways to improve the understanding of parameter estimates and their uncertainties.
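The comparison of Bayesian and non-Bayesian estimates can be made concrete on a small invented problem: below, a hypothetical exponential-decay model is fit on a parameter grid. With a flat prior and Gaussian noise the posterior mode coincides with the least-squares minimizer, while the posterior spread quantifies the uncertainty of the estimate. The data and settings are synthetic, not from the Allmaras et al. experiment.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 40)
theta_true, sigma = 0.8, 0.05
y = np.exp(-theta_true * t) + rng.normal(0.0, sigma, t.size)  # synthetic data

grid = np.linspace(0.1, 2.0, 2000)
sse = np.array([np.sum((y - np.exp(-g * t)) ** 2) for g in grid])
theta_ls = grid[np.argmin(sse)]                  # least-squares estimate

log_post = -sse / (2.0 * sigma ** 2)             # Gaussian likelihood, flat prior
post = np.exp(log_post - log_post.max())
dx = grid[1] - grid[0]
post /= post.sum() * dx                           # normalize on the grid
theta_map = grid[np.argmax(post)]                 # posterior mode equals theta_ls here
post_mean = (grid * post).sum() * dx
post_sd = np.sqrt(((grid - post_mean) ** 2 * post).sum() * dx)
```

Changing the prior or the assumed noise level `sigma` shifts `post` but leaves `theta_ls` untouched, which is exactly the kind of assumption-sensitivity the paper's validation procedures probe.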
Estimation of parameters in a distributed precipitation-runoff model for Norway
Beldring, S.; Engeland, K; Roald, L. A.; Sælthun, N. R.; A. Voksø
2003-01-01
A distributed version of the HBV-model using 1 km2 grid cells and daily time step was used to simulate runoff from the entire land surface of Norway for the period 1961-1990. The model was sensitive to changes in small scale properties of the land surface and the climatic input data, through explicit representation of differences between model elements, and by implicit consideration of sub-grid variations in moisture status. A geogra...
Parameter Estimation of Dynamic Multi-zone Models for Livestock Indoor Climate Control
DEFF Research Database (Denmark)
Wu, Zhuang; Stoustrup, Jakob; Heiselberg, Per
2008-01-01
In this paper, a multi-zone modeling concept is proposed based on a simplified energy balance formulation to provide a better prediction of the indoor horizontal temperature variation inside the livestock building. The developed mathematical models reflect the influences from the weather, the liv......
Directory of Open Access Journals (Sweden)
Douglas A. Fynan
2016-06-01
Full Text Available The Gaussian process model (GPM) is a flexible surrogate model that can be used for nonparametric regression for multivariate problems. A unique feature of the GPM is that a prediction variance is automatically provided with the regression function. In this paper, we estimate the safety margin of a nuclear power plant by performing regression on the output of best-estimate simulations of a large-break loss-of-coolant accident with sampling of safety system configuration, sequence timing, technical specifications, and thermal hydraulic parameter uncertainties. The key aspect of our approach is that the GPM regression is only performed on the dominant input variables, the safety injection flow rate and the delay time for AC powered pumps to start representing sequence timing uncertainty, providing a predictive model for the peak clad temperature during a reflood phase. Other uncertainties are interpreted as contributors to the measurement noise of the code output and are implicitly treated in the GPM in the noise variance term, providing local uncertainty bounds for the peak clad temperature. We discuss the applicability of the foregoing method to reduce the use of conservative assumptions in best estimate plus uncertainty (BEPU) and Level 1 probabilistic safety assessment (PSA) success criteria definitions while dealing with a large number of uncertainties.
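A bare-bones version of this kind of GP regression, with the noise variance term carrying the lumped non-dominant uncertainties, follows directly from the kernel algebra. The sine data, kernel settings, and noise level below are hypothetical placeholders for the code outputs and dominant input variables of the abstract.

```python
import numpy as np

def rbf(a, b, length=1.0, amp=1.0):
    """Squared-exponential covariance between two 1-D input sets."""
    return amp * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 5.0, 30)                     # dominant input (e.g. flow rate)
y = np.sin(x) + rng.normal(0.0, 0.1, x.size)      # noisy "best-estimate" outputs

noise_var = 0.01                                   # non-dominant uncertainties as noise
K_inv = np.linalg.inv(rbf(x, x) + noise_var * np.eye(x.size))

x_new = np.linspace(0.0, 5.0, 100)
Ks = rbf(x_new, x)
mean = Ks @ K_inv @ y                              # regression function
var = 1.0 + noise_var - np.einsum('ij,jk,ik->i', Ks, K_inv, Ks)  # prediction variance
```

The `var` term is the "automatic" prediction variance the abstract highlights: it grows away from the training inputs and never drops below the assumed noise floor.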
Energy Technology Data Exchange (ETDEWEB)
Fynan, Douglas A.; Ahn, Kwang Il [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-06-15
Parameter Estimation of Inverter and Motor Model at Standstill using Measured Currents Only
DEFF Research Database (Denmark)
Rasmussen, Henrik; Knudsen, Morten; Tønnes, M.
1996-01-01
to the system is the reference values for the stator voltages given as duty cycles for the Pulse Width Modulated power device. The system output is the measured stator currents. Three experiments are described, giving respectively 1) the stator resistance and inverter parameters, 2) the stator transient inductance...... and 3) the referred rotor resistance and magnetizing inductance. The method developed in the two last experiments is independent of the inverter nonlinearity. New methods for system identification concerning saturation of the magnetic flux are given and a reference value for the flux level
Variational Estimation of Wave-affected Parameters in a Two-equation Turbulence Model
2014-01-01
The cost function is accumulated over time intervals within the assimilation window, where T_ij and T_ij^obs are the simulated and observed temperatures at location i and time level j.
DEFF Research Database (Denmark)
Ferrari, A.; Gutierrez, S.; Sin, Gürkan
2016-01-01
A steady state model for a production scale milk drying process was built to help process understanding and optimization studies. It involves a spray chamber and also internal/external fluid beds. The model was subjected to a comprehensive statistical analysis for quality assurance using sensitiv...
de Asis, Alejandro M.; Omasa, Kenji
Soil conservation planning often requires estimates of soil erosion at a catchment or regional scale. Predictive models such as the Universal Soil Loss Equation (USLE) and its subsequent Revised Universal Soil Loss Equation (RUSLE) are useful tools to generate the quantitative estimates necessary for designing sound conservation measures. However, large-scale soil erosion model-factor parameterization and quantification is difficult due to the costs, labor and time involved. Among the soil erosion parameters, the vegetative cover or C factor has been one of the most difficult to estimate over broad geographic areas. The C factor represents the effects of vegetation canopy and ground covers in reducing soil loss. Traditional methods for the extraction of vegetation information from remote sensing data, such as classification techniques and vegetation indices, were found to be inaccurate. Thus, this study presents a new approach based on Spectral Mixture Analysis (SMA) of Landsat ETM data to map the C factor for use in the modeling of soil erosion. A desirable feature of SMA is that it estimates the fractional abundance of ground cover and bare soils simultaneously, which is appropriate for soil erosion analysis. Hence, we estimated the C factor by utilizing the results of SMA on a pixel-by-pixel basis. We specifically used a linear SMA (LSMA) model and performed a minimum noise fraction (MNF) transformation and pixel purity index (PPI) on the Landsat ETM image to derive the proportion of ground cover (vegetation and non-photosynthetic materials) and bare soil within a pixel. The end-members were selected based on the purest pixels found using PPI with reference to a very high-resolution QuickBird image and actual field data. Results showed that the C factor value estimated using LSMA correlated strongly with the values measured in the field. The correlation coefficient (r) obtained was 0.94. A comparative analysis between NDVI- and LSMA-derived C factors also proved that the
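The per-pixel LSMA step reduces to a small constrained least-squares problem. The sketch below uses invented four-band endmember spectra (not the study's actual Landsat ETM endmembers) with a softly enforced sum-to-one constraint; the mapping from vegetation fraction to a C-factor-like quantity is purely illustrative.

```python
import numpy as np

# Hypothetical endmember spectra: rows are 4 bands,
# columns are vegetation, bare soil, non-photosynthetic material
E = np.array([[0.05, 0.30, 0.20],
              [0.08, 0.35, 0.25],
              [0.50, 0.40, 0.30],
              [0.30, 0.55, 0.45]])

def unmix(pixel, E, weight=100.0):
    """Linear unmixing with a softly enforced sum-to-one constraint."""
    A = np.vstack([E, weight * np.ones((1, E.shape[1]))])  # extra constraint row
    b = np.append(pixel, weight)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

true_f = np.array([0.6, 0.3, 0.1])               # fractional abundances
pixel = E @ true_f                                # noise-free mixed spectrum
f = unmix(pixel, E)
c_illustrative = 1.0 - f[0]                       # more cover -> lower value (illustrative)
```

Running this over every pixel of an image yields the simultaneous ground-cover and bare-soil fraction maps that make SMA attractive for erosion analysis.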
Model for estimating technical parameters of sand furrow irrigation
Institute of Scientific and Technical Information of China (English)
赵伟霞; 张振华; 蔡焕杰; 单志杰
2009-01-01
Based on an analysis of the soil infiltration rule of Sand Furrow Irrigation (SFI), a model for estimating the technical parameters of SFI (ditch width, ditch height, permeable boundary height, and emitter discharge rate) is established according to the water-balance principle. Constant-head well infiltration experiments and the Philip model are used to estimate the soil infiltration capacity. Based on the technical parameters calculated by this model, SFI experiments are carried out to verify the model's feasibility, and the factors affecting the model's accuracy are analyzed. The results indicate that the model designs the technical parameters for SFI with high precision. In the two-dimensional constant-head well infiltration, the soil water sorption and soil water transmissivity are functions of time; the changing water depth and wetted perimeter are further factors affecting the model's accuracy.
Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...
On closure parameter estimation in chaotic systems
Directory of Open Access Journals (Sweden)
J. Hakkarainen
2012-02-01
Full Text Available Many dynamical models, such as numerical weather prediction and climate models, contain so-called closure parameters. These parameters usually appear in physical parameterizations of sub-grid scale processes, and they act as "tuning handles" of the models. Currently, the values of these parameters are specified mostly manually, but the increasing complexity of the models calls for more algorithmic ways to perform the tuning. Traditionally, parameters of dynamical systems are estimated by directly comparing the model simulations to observed data using, for instance, a least squares approach. However, if the models are chaotic, the classical approach can be ineffective, since small errors in the initial conditions can lead to large, unpredictable deviations from the observations. In this paper, we study numerical methods available for estimating closure parameters in chaotic models. We discuss three techniques: off-line likelihood calculations using filtering methods, the state augmentation method, and the approach that utilizes summary statistics from long model simulations. The properties of the methods are studied using a modified version of the Lorenz 95 system, where the effects of fast variables are described using a simple parameterization.
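The summary-statistics approach can be demonstrated on an even simpler chaotic system than Lorenz 95. The sketch below estimates the parameter of the chaotic logistic map: matching trajectories point-by-point fails under chaos, but the long-run trajectory mean is a stable statistic that can be inverted by grid search. All settings are illustrative, not from the paper.

```python
import numpy as np

def long_run_mean(r, x0=0.2, n=5000, burn=500):
    """Mean of a long logistic-map trajectory after a burn-in period."""
    x, total = x0, 0.0
    for i in range(n + burn):
        x = r * x * (1.0 - x)        # chaotic for r near 4
        if i >= burn:
            total += x
    return total / n

grid = np.linspace(3.7, 4.0, 301)    # candidate closure-parameter values
r_true = grid[200]                    # ~3.9, taken from the grid for the demo
obs_stat = long_run_mean(r_true)      # summary statistic of the "observations"

stats = np.array([long_run_mean(r) for r in grid])
r_hat = grid[np.argmin(np.abs(stats - obs_stat))]
```

With noisy real observations one would match several statistics (mean, variance, autocovariances) within a likelihood, but the inversion-of-a-smooth-statistic idea is the same.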
Directory of Open Access Journals (Sweden)
Maria Gabriela Campolina Diniz Peixoto
2014-05-01
Full Text Available The objective of this work was to compare random regression models for the estimation of genetic parameters for Guzerat milk production, using orthogonal Legendre polynomials. Records (20,524) of test-day milk yield (TDMY) from 2,816 first-lactation Guzerat cows were used. TDMY grouped into 10 monthly classes were analyzed for the additive genetic effect and for permanent environmental and residual effects (random effects), whereas the contemporary group, calving age (linear and quadratic effects), and mean lactation curve were analyzed as fixed effects. Trajectories for the additive genetic and permanent environmental effects were modeled by means of a covariance function employing orthogonal Legendre polynomials ranging from the second to the fifth order. Residual variances were considered in one, four, six, or ten variance classes. The best model had six residual variance classes. The heritability estimates for the TDMY records varied from 0.19 to 0.32. The random regression model that used a second-order Legendre polynomial for the additive genetic effect and a fifth-order polynomial for the permanent environmental effect was adequate according to the main criteria employed. The model with a second-order Legendre polynomial for the additive genetic effect and a fourth-order polynomial for the permanent environmental effect could also be employed in these analyses.
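The covariance-function machinery behind such random regression models can be sketched with standard Legendre polynomials: test-day classes are mapped to [-1, 1], a basis matrix is built, and a coefficient covariance matrix is expanded into (co)variances among test days. The coefficient matrix and residual variance below are invented for illustration, not estimated from the Guzerat data.

```python
import numpy as np
from numpy.polynomial import legendre

days = np.arange(1, 11)                            # 10 monthly test-day classes
t = 2.0 * (days - days.min()) / (days.max() - days.min()) - 1.0  # map to [-1, 1]

order = 2                                           # second-order polynomial
Phi = np.column_stack([legendre.legval(t, np.eye(order + 1)[k])
                       for k in range(order + 1)])  # P_0..P_2 evaluated at t

# Hypothetical covariance matrix of the additive genetic regression coefficients
K = np.array([[2.0, 0.3, 0.1],
              [0.3, 0.8, 0.0],
              [0.1, 0.0, 0.4]])
G = Phi @ K @ Phi.T                                 # genetic (co)variances among days
h2_like = np.diag(G) / (np.diag(G) + 5.0)           # with an invented residual variance
```

A low-order `K` with few coefficients is exactly what lets these models describe a smooth lactation-long covariance structure with a handful of parameters.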
MODFLOW-style parameters in underdetermined parameter estimation
D'Oria, Marco D.; Fienen, Michael N.
2012-01-01
In this article, we discuss the use of MODFLOW-Style parameters in the numerical codes MODFLOW_2005 and MODFLOW_2005-Adjoint for the definition of variables in the Layer Property Flow package. Parameters are a useful tool to represent aquifer properties in both codes and are the only option available in the adjoint version. Moreover, for overdetermined parameter estimation problems, the parameter approach for model input can make data input easier. We found that if each estimable parameter is defined by one parameter, the codes require a large computational effort and substantial gains in efficiency are achieved by removing logical comparison of character strings that represent the names and types of the parameters. An alternative formulation already available in the current implementation of the code can also alleviate the efficiency degradation due to character comparisons in the special case of distributed parameters defined through multiplication matrices. The authors also hope that lessons learned in analyzing the performance of the MODFLOW family codes will be enlightening to developers of other Fortran implementations of numerical codes.
Lu, H.; Koike, T.; Yang, K.; Li, X.; Graf, T.; Boussetta, S.; Tsutsui, H.; Kuria, D. N.
2007-12-01
The estimation of soil moisture and surface energy fluxes at various temporal and spatial scales remains an outstanding problem in hydrologic and meteorological research. Remotely sensed data retrieval algorithms, land surface models, and data assimilation systems are expected to provide a solution to this problem, but the parameters required by those algorithms and systems, such as soil texture, porosity, and roughness parameters, are highly variable or unavailable. In this study, a land data assimilation system (LDAS-UT) is employed to inversely estimate the optimal values of those land surface parameters from meteorological forcing data and remotely sensed data, and a field experiment is designed to provide a well-controlled data set for system validation. The Tanashi experiment has been in operation since November 2006 in the farm of the University of Tokyo. Continuous ground measurements of meteorological variables, soil moisture and temperature profiles, and vegetation status have been taken over a plot in which winter wheat was planted. At the same time, ground-based microwave radiometers (GBMR) provide accurate field measurements of the brightness temperature up-welling from the plot at frequencies of 6.925, 10.65, 18.7, 23.8, 36.5 and 89 GHz. LDAS-UT is then run with data obtained from this experiment to retrieve parameters for two periods: from December 2006 to February 2007, the germination period of the winter wheat, during which vegetation effects are small; and from April to May 2007, during which the winter wheat was developing rapidly. The optimized parameters were compared with the in situ observed 'real' ones. It was found that, for the first period, the retrieved parameters are close to the 'real' values, while for the second period the gap between the retrieved parameters and the 'real' values is much larger. The difference between the
A Statistical Model for Estimation of Ichthyofauna Quality Based on Water Parameters in the Oituz Basin
Directory of Open Access Journals (Sweden)
Popescu Carmen
2015-06-01
Full Text Available Fish represents an important food source for people worldwide. Moreover, although considered a very old occupation, fishing continues to provide jobs, especially for the people living in the coastal countries. The quality of surface waters affects the quality of fish as a food source. For this reason, the present study aims to assess the quality of the ichthyofauna in the Oituz River and some of its tributaries using several parameters that have been computed based on the biometric data of the biological material gathered during 2004-2008, in correlation with the water pH and water temperature. The present paper also highlights some observations regarding the changes of the analyzed ecosystems, as well as some recommendations regarding the fish consumption in the studied basin, considered as a food source for humans.
Pyrolysis of waste tires: A modeling and parameter estimation study using Aspen Plus(®).
Ismail, Hamza Y; Abbas, Ali; Azizi, Fouad; Zeaiter, Joseph
2017-02-01
This paper presents a simulation flowsheet model of a waste tire pyrolysis process with a feed capacity of 150 kg/h. A kinetic rate-based reaction model is formulated in a form implementable in the simulation package Aspen Plus, giving the flowsheet model the capability to predict more than 110 tire pyrolysis products, as reported in experiments by Laresgoiti et al. (2004) and Williams (2013) for the oil and gas products, respectively. The simulation model is successfully validated in two stages: firstly against experimental data from Olazar et al. (2008) by comparing the mass fractions for the oil products (gas, liquids (non-aromatics), aromatics, and tar) at temperatures of 425, 500, 550 and 610°C, and secondly against experimental results of main hydrocarbon products (C7 to C15) obtained by Laresgoiti et al. (2004) at temperatures of 400, 500, 600, and 700°C. The model was then used to analyze the effect of pyrolysis process temperature and showed that increased temperatures caused chain fractions from C10 and higher to decrease while smaller chains increased; this is attributed to the extensive cracking of the larger hydrocarbon chains at higher temperatures. The utility of the flowsheet model was highlighted through an energy analysis that targeted power efficiency of the process, determined through production profiles of gasoline and diesel at various temperatures. This shows, through the summation of the net power gain from the plant for gasoline plus diesel, that the maximum net power lies at the lower temperatures corresponding to minimum production of gasoline and maximum production of diesel. This simulation model can thus serve as a robust tool to respond to market conditions that dictate fuel demand and prices while at the same time identifying optimum process conditions (e.g. temperature) driven by process economics.
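The temperature effect described here can be illustrated with a lumped first-order kinetic scheme. The rate constants, activation energies, and two-product lumping below are invented placeholders, not the paper's Aspen Plus kinetics; the point is only that Arrhenius competition between channels shifts the product slate with temperature.

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def pyrolysis_yields(T, t_end=600.0, dt=0.1):
    """Euler integration of tire -> oil and tire -> gas (hypothetical kinetics)."""
    k_oil = 1.0e5 * np.exp(-8.0e4 / (R_GAS * T))    # lower activation energy channel
    k_gas = 1.0e7 * np.exp(-1.2e5 / (R_GAS * T))    # higher activation energy channel
    tire, oil, gas = 1.0, 0.0, 0.0                   # mass fractions
    for _ in range(int(t_end / dt)):
        r_oil, r_gas = k_oil * tire, k_gas * tire
        tire -= (r_oil + r_gas) * dt
        oil += r_oil * dt
        gas += r_gas * dt
    return oil, gas, tire                            # residue stands in for char

oil_500, gas_500, char_500 = pyrolysis_yields(773.15)  # 500 degC
oil_700, gas_700, char_700 = pyrolysis_yields(973.15)  # 700 degC
# The gas/oil ratio grows with temperature, mirroring enhanced cracking
```

Because the gas channel carries the larger activation energy, raising the temperature favors it disproportionately, which is the qualitative trend the flowsheet model reproduces for the C10+ fractions.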
Parameter estimation of harmonic polluting industrial loads
Energy Technology Data Exchange (ETDEWEB)
Maza-Ortega, J.M.; Gomez-Exposito, A.; Trigo-Garcia, J.L.; Burgos-Payan, M. [University of Sevilla, Sevilla (Spain). Department of Electrical Engineering
2005-12-01
This paper develops a methodology for the estimation of relevant parameters characterizing harmonic polluting industrial loads through a set of measurements acquired at the point of common coupling. The proposed method is capable of obtaining an accurate load model in the absence of detailed information about its internal structure and composition. (author)
Ames, D. P.; Osorio-Murillo, C.; Over, M. W.; Rubin, Y.
2012-12-01
The Method of Anchored Distributions (MAD) is an inverse modeling technique that is well-suited for estimation of spatially varying parameter fields using limited observations and Bayesian methods. This presentation will discuss the design, development, and testing of a free software implementation of the MAD technique using the open source DotSpatial geographic information system (GIS) framework, R statistical software, and the MODFLOW groundwater model. This new tool, dubbed MAD-GIS, is built using a modular architecture that supports the integration of external analytical tools and models for key computational processes including a forward model (e.g. MODFLOW, HYDRUS) and geostatistical analysis (e.g. R, GSLIB). The GIS-based graphical user interface provides a relatively simple way for new users of the technique to prepare the spatial domain, to identify observation and anchor points, to perform the MAD analysis using a selected forward model, and to view results. MAD-GIS uses the Managed Extensibility Framework (MEF) provided by the Microsoft .NET programming platform to support integration of different modeling and analytical tools at run-time through a custom "driver." Each driver establishes a connection with external programs through a programming interface, which provides the elements for communicating with core MAD software. This presentation gives an example of adapting the MODFLOW to serve as the external forward model in MAD-GIS for inferring the distribution functions of key MODFLOW parameters. Additional drivers for other models are being developed and it is expected that the open source nature of the project will engender the development of additional model drivers by 3rd party scientists.
Interval Estimation of Seismic Hazard Parameters
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2016-11-01
The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function in the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when the nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, relative to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto this function.
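For the Gutenberg-Richter case, the point estimates and a Monte Carlo interval follow directly from the exceedance probability 1 - exp(-lambda_m * t) and the mean return period 1/lambda_m, where lambda_m is the activity rate thinned to magnitudes >= m. The rate, b-value, and their standard errors below are invented for the example (the paper itself uses asymptotic-normality and bootstrap intervals).

```python
import numpy as np

rng = np.random.default_rng(3)
m0 = 3.0                                            # catalog completeness magnitude

def hazard(lam, b, m, t):
    """Exceedance probability and mean return period for magnitude >= m."""
    rate_m = lam * 10.0 ** (-b * (m - m0))          # Gutenberg-Richter thinned rate
    return 1.0 - np.exp(-rate_m * t), 1.0 / rate_m

lam_hat, b_hat = 20.0, 1.0                           # point estimates (events/yr, b-value)
exc, ret = hazard(lam_hat, b_hat, m=6.0, t=50.0)     # exc ~ 0.63, ret ~ 50 yr

# Interval estimate: propagate approximately normal estimation errors
lam_s = rng.normal(lam_hat, np.sqrt(lam_hat / 10.0), 5000)  # ~10 yr of data (assumed)
b_s = rng.normal(b_hat, b_hat / np.sqrt(200.0), 5000)       # ~200 magnitudes (assumed)
exc_s, _ = hazard(lam_s, b_s, m=6.0, t=50.0)
ci = np.percentile(exc_s, [2.5, 97.5])               # interval for exceedance probability
```

Shrinking the assumed standard error of `lam_s` while keeping that of `b_s` shows the paper's point numerically: once lambda * t is large, the magnitude-distribution uncertainty dominates the width of `ci`.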
2011-11-01
GOSSCUSTARD, JD; CLARKE, RT; BRIGGS, KB; ENS, BJ; EXO, KM; SMIT, C; BEINTEMA, AJ; CALDOW, RWG; CATT, DC; CLARK, NA; DURELL, SEALD; HARRIS, MP; HULSCHER, JB; MEININGER, PL; PICOZZI, N; PRYSJONES, R; SAFRIEL, UN; WEST, AD
1995-01-01
1. In order to construct a model to predict the effect of winter habitat loss on the migratory population of the European subspecies of the oystercatcher, Haematopus ostralegus ostralegus, data on the reproductive and mortality rates collected throughout Europe over the last 60 years are reviewed. W
Bayesian parameter estimation in the Expectancy Valence model of the Iowa gambling task
Wetzels, R.; Vandekerckhove, J.; Tuerlinckx, F.; Wagenmakers, E.-J.
2010-01-01
The purpose of the popular Iowa gambling task is to study decision making deficits in clinical populations by mimicking real-life decision making in an experimental context. Busemeyer and Stout [Busemeyer, J. R., & Stout, J. C. (2002). A contribution of cognitive decision models to clinical assessme
Dam, van J.C.
2000-01-01
Water flow and solute transport in top soils are important elements in many environmental studies. The agro- and ecohydrological model SWAP (Soil-Water-Plant-Atmosphere) has been developed to simulate simultaneously water flow, solute transport, heat flow and crop growth at field scale level. The ma
Multiple nonlinear parameter estimation using PI feedback control
Lith, van P. F.; Witteveen, H.; Betlem, B.H.L.; Roffel, B.
2001-01-01
Nonlinear parameters often need to be estimated during the building of chemical process models. To accomplish this, many techniques are available. This paper discusses an alternative view to parameter estimation, where the concept of PI feedback control is used to estimate model parameters. The appr
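The PI-feedback idea sketched in this abstract can be demonstrated on a one-state model: the output error between model and process drives a PI controller whose output is the parameter estimate, so the integral term holds the converged value once the error vanishes. The process, gains, and numbers below are invented, not taken from the paper.

```python
dt, t_end = 0.001, 20.0
theta_true = 2.0                   # unknown process parameter
theta0, Kp, Ki = 0.5, 5.0, 20.0    # initial guess and PI gains (assumed values)
u = 1.0                            # constant excitation input

x = xm = integ = 0.0
theta_hat = theta0
for _ in range(int(t_end / dt)):
    x += (-theta_true * x + u) * dt      # "measured" process dx/dt = -theta*x + u
    xm += (-theta_hat * xm + u) * dt     # model with adjustable parameter
    e = xm - x                           # model output runs high when theta_hat is low
    integ += e * dt
    theta_hat = theta0 + Kp * e + Ki * integ   # PI action on the parameter
# theta_hat settles near theta_true once the output error vanishes
```

The sign convention matters: for this decay model a too-small `theta_hat` makes the model output overshoot the measurement, so positive error must increase the estimate; a parameter entering with the opposite sensitivity would need negated gains.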
El Gharamti, Mohamad
2014-09-01
Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires a clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.
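The hybrid covariance idea, blending the ensemble sample covariance with a static OI background to counter under-sampling, shows up directly in the Kalman-gain computation. The exponential background covariance, the five-member ensemble, and the blending weight below are invented placeholders, not the study's aquifer configuration.

```python
import numpy as np

rng = np.random.default_rng(4)

def hybrid_gain(ens, H, R, B_static, alpha):
    """Kalman gain from alpha * sample covariance + (1 - alpha) * static B."""
    X = ens - ens.mean(axis=1, keepdims=True)
    P = alpha * (X @ X.T) / (ens.shape[1] - 1) + (1.0 - alpha) * B_static
    S = H @ P @ H.T + R
    return P @ H.T @ np.linalg.inv(S)

n = 50                                              # e.g. concentrations + sorption rates
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = np.exp(-dist / 5.0)                             # static correlated background
ens = np.linalg.cholesky(B + 1e-9 * np.eye(n)) @ rng.normal(size=(n, 5))  # 5 members only
H = np.zeros((1, n)); H[0, 10] = 1.0                # one concentration observed
K = hybrid_gain(ens, H, np.array([[0.1]]), B, alpha=0.5)
# Blending damps the spurious long-range gains a 5-member ensemble produces
```

With `alpha=1` this reduces to the standard EnKF gain and with `alpha=0` to pure OI; intermediate weights are what let the scheme stay effective as the ensemble shrinks.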
Demissie, Henok K.; Bacopoulos, Peter
2017-05-01
A rich dataset of time- and space-varying velocity measurements for a macrotidal estuary was used in the development of a vector-based formulation of bottom roughness in the Advanced Circulation (ADCIRC) model. The updates to the parallel code of ADCIRC to include directionally based drag coefficient are briefly discussed in the paper, followed by an application of the data assimilation (nudging analysis) to the lower St. Johns River (northeastern Florida) for parameter estimatio