On parameter estimation in deformable models
DEFF Research Database (Denmark)
Fisker, Rune; Carstensen, Jens Michael
1998-01-01
Deformable templates have been intensively studied in image analysis over the last decade, but despite their significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...
Parameter Estimation of Partial Differential Equation Models
Xun, Xiaolei
2013-09-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
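The parameter cascading idea described above, a nested optimization in which the inner level is a cheap (here linear) fit of basis coefficients and the outer level adjusts only the structural parameters, can be sketched on a single decaying mode; this is a minimal illustration with hypothetical values, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Synthetic data from a single decaying mode u(t) = A * exp(-theta * t)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 50)
A_true, theta_true = 3.0, 1.5
y = A_true * np.exp(-theta_true * t) + rng.normal(0.0, 0.05, t.size)

def inner_fit(theta):
    """Inner level: given theta, the best amplitude is a linear least-squares solve."""
    f = np.exp(-theta * t)
    A = (f @ y) / (f @ f)
    return A, np.sum((y - A * f) ** 2)

# Outer level: optimize only the structural parameter theta
res = minimize_scalar(lambda th: inner_fit(th)[1], bounds=(0.1, 5.0), method="bounded")
theta_hat = res.x
A_hat, _ = inner_fit(theta_hat)
```

Because the inner fit is linear, the outer search never requires a full numerical solve per candidate parameter, which is the computational point of the cascading approach.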
Modelling and parameter estimation of dynamic systems
Raol, JR; Singh, J
2004-01-01
Parameter estimation is the process of using observations from a system to develop mathematical models that adequately represent the system dynamics. The assumed model consists of a finite set of parameters, the values of which are calculated using estimation techniques. Most of the techniques that exist are based on least-squares minimization of the error between the model response and the actual system response. However, with the proliferation of high-speed digital computers, elegant and innovative techniques like the filter error method, H-infinity and Artificial Neural Networks are finding more and more...
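The least-squares idea above can be sketched in a few lines; this toy fits the gain and time constant of a hypothetical first-order step response (names and values are illustrative, not from the book):

```python
import numpy as np
from scipy.optimize import least_squares

# Simulated measurements of a first-order step response y = K * (1 - exp(-t/tau))
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 100)
gain_true, tau_true = 2.0, 1.8
y_meas = gain_true * (1.0 - np.exp(-t / tau_true)) + rng.normal(0.0, 0.02, t.size)

def residuals(p):
    """Error between model response and measured system response."""
    gain, tau = p
    return gain * (1.0 - np.exp(-t / tau)) - y_meas

sol = least_squares(residuals, x0=[1.0, 1.0], bounds=([0.0, 0.1], [10.0, 10.0]))
gain_hat, tau_hat = sol.x
```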
Parameter Estimation for Thurstone Choice Models
Energy Technology Data Exchange (ETDEWEB)
Vojnovic, Milan [London School of Economics (United Kingdom)]; Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]
2017-04-24
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, i.e. when each comparison set of that cardinality occurs the same number of times in expectation, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report an empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
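A minimal sketch of maximum likelihood estimation in the Bradley-Terry special case (pair comparisons), using the classic minorization-maximization updates; the item strengths and game counts below are hypothetical, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
w_true = np.array([3.0, 2.0, 1.0])        # hypothetical item strengths
n_items, n_games = 3, 4000

wins = np.zeros((n_items, n_items))        # wins[i, j] = number of times i beat j
for _ in range(n_games):
    i, j = rng.choice(n_items, size=2, replace=False)
    if rng.random() < w_true[i] / (w_true[i] + w_true[j]):
        wins[i, j] += 1
    else:
        wins[j, i] += 1

# Minorization-maximization iterations for the Bradley-Terry MLE
games = wins + wins.T                      # games[i, j] = comparisons between i and j
w_hat = np.ones(n_items)
for _ in range(100):
    for i in range(n_items):
        denom = sum(games[i, j] / (w_hat[i] + w_hat[j])
                    for j in range(n_items) if j != i)
        w_hat[i] = wins[i].sum() / denom
    w_hat /= w_hat[2]                      # strengths are identified only up to scale
```

Note that strengths are only identifiable up to a common scale factor, hence the normalization in the last line.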
Robust estimation of hydrological model parameters
Directory of Open Access Journals (Sweden)
A. Bárdossy
2008-11-01
The estimation of hydrological model parameters is a challenging task. With increasing computational power several complex optimization algorithms have emerged, but none of them yields a unique, clearly best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured, as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameters was calculated with respect to the set with the best model performance (Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
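The Nash-Sutcliffe efficiency used to score parameter vectors in this study can be implemented directly; this sketch shows only the metric, not the half-space depth machinery, and the data are made up:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 for a perfect model, 0 for the mean benchmark."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # toy observed discharges
assert nash_sutcliffe(obs, obs) == 1.0          # perfect simulation
nse_mean = nash_sutcliffe(obs, np.full(5, obs.mean()))  # mean-flow benchmark
```

By construction, a simulation equal to the observed mean scores exactly zero, which is why NSE is read as "skill relative to the mean benchmark".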
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian
2011-01-01
In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application...... of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models....
Parameter estimation in fractional diffusion models
Kubilius, Kęstutis; Ralchenko, Kostiantyn
2017-01-01
This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is “white,” i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependences, i.e., the correlations of random noise in these processes are non-zero, and slowly or rapidly decrease with time. In particular, models of financial markets demonstrate various kinds of memory and usually this memory is modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides s...
Parameter and Uncertainty Estimation in Groundwater Modelling
DEFF Research Database (Denmark)
Jensen, Jacob Birk
The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly, decisions and if these are to be made on solid grounds, the uncertainty attached to model results must...... be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models.Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study.The following two chapters concern calibration...... was applied.Capture zone modelling was conducted on a synthetic stationary 3-dimensional flow problem involving river, surface and groundwater flow. Simulated capture zones were illustrated as likelihood maps and compared with deterministic capture zones derived from a reference model. The results showed...
Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model
DEFF Research Database (Denmark)
Åberg, Andreas; Widd, Anders; Abildskov, Jens
2016-01-01
A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the parameter estimation of the kinetic parameters, which can be time consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests...
Incremental parameter estimation of kinetic metabolic network models
Directory of Open Access Journals (Sweden)
Jia Gengjie
2012-11-01
Background: An efficient and reliable parameter estimation method is essential for the creation of biological models using ordinary differential equations (ODEs). Most existing estimation methods involve finding the global minimum of data-fitting residuals over the entire parameter space simultaneously. Unfortunately, the associated computational requirement often becomes prohibitively high due to the large number of parameters and the lack of complete parameter identifiability (i.e. not all parameters can be uniquely identified). Results: In this work, an incremental approach was applied to the parameter estimation of ODE models from concentration time profiles. In particular, the method was developed to address a commonly encountered circumstance in the modeling of metabolic networks, where the number of metabolic fluxes (reaction rates) exceeds that of metabolites (chemical species). Here, the minimization of model residuals was performed over a subset of the parameter space that is associated with the degrees of freedom in the dynamic flux estimation from the concentration time-slopes. The efficacy of this method was demonstrated using two generalized mass action (GMA) models, where the method significantly outperformed single-step estimations. In addition, an extension of the estimation method to handle missing data is presented. Conclusions: The proposed incremental estimation method is able to tackle the lack of complete parameter identifiability and to significantly reduce the computational effort of estimating model parameters, which will facilitate kinetic modeling of genome-scale cellular metabolism in the future.
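The incremental idea, estimating concentration time-slopes first and then recovering rates without integrating the ODE, can be sketched on a toy linear network; the stoichiometry and rate values below are hypothetical, not the GMA models from the paper:

```python
import numpy as np

# Toy network obeying dC/dt = S @ v with constant fluxes v
S = np.array([[1.0, -1.0],
              [0.0,  1.0]])                  # stoichiometric matrix (2 metabolites, 2 fluxes)
v_true = np.array([2.0, 1.0])
t = np.linspace(0.0, 1.0, 21)
C = np.outer(t, S @ v_true) + np.array([1.0, 0.5])   # linear profiles from constant rates

# Step 1 (incremental): estimate concentration time-slopes from the data
slopes = np.gradient(C, t, axis=0).mean(axis=0)

# Step 2: recover fluxes from the slopes, with no ODE integration required
v_hat = np.linalg.lstsq(S, slopes, rcond=None)[0]
```

In a real metabolic network the slope fit would be regularized (e.g. spline smoothing of noisy profiles) and the flux solve would leave residual degrees of freedom when fluxes outnumber metabolites, which is exactly the subset of parameters the paper optimizes over.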
A simulation of water pollution model parameter estimation
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
Modeling and Parameter Estimation of a Small Wind Generation System
Directory of Open Access Journals (Sweden)
Carlos A. Ramírez Gómez
2013-11-01
The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were registered in a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed with PSIM software, and variables registered from that simulation were used to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, yet the estimated model offers greater flexibility than the model programmed in PSIM.
Models for estimating photosynthesis parameters from in situ production profiles
Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana
2017-12-01
The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of
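A sketch of recovering the two photosynthesis parameters, the initial slope and the assimilation number, by fitting one common saturating photosynthesis-irradiance function (the exponential form) to synthetic data; the parameter values are illustrative, not estimates from Station Aloha:

```python
import numpy as np
from scipy.optimize import curve_fit

def pi_curve(I, alpha, Pm):
    """A common saturating photosynthesis-irradiance function:
    initial slope alpha, assimilation number Pm."""
    return Pm * (1.0 - np.exp(-alpha * I / Pm))

rng = np.random.default_rng(3)
I = np.linspace(5.0, 500.0, 40)              # irradiance levels
alpha_true, Pm_true = 0.05, 4.0              # assumed "true" parameters
P = pi_curve(I, alpha_true, Pm_true) + rng.normal(0.0, 0.05, I.size)

(alpha_hat, Pm_hat), _ = curve_fit(pi_curve, I, P, p0=[0.01, 1.0])
```

As the abstract notes, swapping in a different photosynthesis-irradiance function (e.g. a hyperbolic tangent form) and refitting will generally shift the recovered parameter values, which is the parameter exchange problem the paper examines.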
Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model
Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami
2017-06-01
A regression model represents the relationship between independent and dependent variables. When the dependent variable is categorical, the logistic regression model is used and expressed in terms of odds; when its categories are ordered, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation sites. Parameter estimation is needed to determine population values based on a sample. The purpose of this research is to estimate the parameters of the GWOLR model using R software. The estimation uses data on the number of dengue fever patients in Semarang City; the observation units are 144 villages in Semarang City. The research yields a local GWOLR model for each village, giving the probability of each category of dengue fever patient counts.
Parameter Estimation for the Thurstone Case III Model.
Mackay, David B.; Chaiy, Seoil
1982-01-01
The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)
A distributed approach for parameters estimation in System Biology models
International Nuclear Information System (INIS)
Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.
2009-01-01
Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters in systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment that distributes each run of the parameter estimation algorithm to a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. A comparison of the distributed implementation with the parallel one showed that the presented approach enables faster and better parameter estimation of systems biology models.
Estimation of shape model parameters for 3D surfaces
DEFF Research Database (Denmark)
Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen
2008-01-01
Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D surfaces using distance maps, which enables the estimation of model parameters without the requirement of point correspondence. For applications with acquisition limitations such as speed and cost, this formulation enables the fitting of a statistical shape model to arbitrarily sampled data. The method is applied to a database of 3D surfaces from a section of the porcine pelvic bone extracted from 33 CT scans. A leave-one-out validation shows that the parameters of the first 3 modes of the shape model can be predicted with a mean difference within [-0.01,0.02] from the true mean, with a standard deviation......
Bayesian estimation of parameters in a regional hydrological model
Directory of Open Access Journals (Sweden)
K. Engeland
2002-01-01
This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
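The MCMC machinery can be illustrated with a random-walk Metropolis sampler for a single parameter under the simple (non-autoregressive) Gaussian error likelihood; the toy recession model and all values are assumptions, not the Ecomag model:

```python
import numpy as np

# Toy "streamflow recession" data: y(t) = exp(-k t) + Gaussian simulation error
rng = np.random.default_rng(4)
t = np.linspace(0.0, 5.0, 60)
k_true, sigma = 0.8, 0.1
y = np.exp(-k_true * t) + rng.normal(0.0, sigma, t.size)

def log_like(k):
    """Simple likelihood: independent Gaussian errors (no AR(1) component)."""
    r = y - np.exp(-k * t)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

# Random-walk Metropolis over the single model parameter k (flat prior assumed)
k, ll, samples = 0.5, log_like(0.5), []
for _ in range(5000):
    k_prop = k + rng.normal(0.0, 0.05)
    ll_prop = log_like(k_prop)
    if np.log(rng.random()) < ll_prop - ll:   # accept with prob min(1, ratio)
        k, ll = k_prop, ll_prop
    samples.append(k)
k_post = np.mean(samples[1000:])              # posterior mean after burn-in
```

The full likelihood of the paper would replace the independent-error term with an AR(1) error model and sample the statistical parameters jointly with the hydrological ones.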
Parameter estimation in stochastic rainfall-runoff models
DEFF Research Database (Denmark)
Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur
2006-01-01
A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all....... For a comparison the parameters are also estimated by an output error method, where the sum of squared simulation error is minimized. The former methodology is optimal for short-term prediction whereas the latter is optimal for simulations. Hence, depending on the purpose it is possible to select whether...... the parameter values are optimal for simulation or prediction. The data originates from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature and one output data series...
Parameter Estimation for a Computable General Equilibrium Model
DEFF Research Database (Denmark)
Arndt, Channing; Robinson, Sherman; Tarp, Finn
2002-01-01
We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...
Improving the realism of hydrologic model through multivariate parameter estimation
Rakovec, Oldrich; Kumar, Rohini; Attinger, Sabine; Samaniego, Luis
2017-04-01
Increased availability and quality of near real-time observations should improve understanding of the predictive skill of hydrological models. Recent studies have shown the limited capability of river discharge data alone to adequately constrain different components of distributed model parameterizations. In this study, the GRACE satellite-based total water storage (TWS) anomaly is used to complement the discharge data with the aim of improving the fidelity of the mesoscale hydrologic model (mHM) through multivariate parameter estimation. The study is conducted in 83 European basins covering a wide range of hydro-climatic regimes. The model parameterization complemented with the TWS anomalies leads to statistically significant improvements in (1) discharge simulations during low-flow periods, and (2) evapotranspiration estimates, which are evaluated against independent (FLUXNET) data. Overall, there is no significant deterioration in model performance for the discharge simulations when complemented by information from the TWS anomalies. However, considerable changes in the partitioning of precipitation into runoff components are noticed depending on the in-/exclusion of TWS during the parameter estimation. A cross-validation test carried out to assess the transferability and robustness of the calibrated parameters to other locations further confirms the benefit of complementary TWS data. In particular, the evapotranspiration estimates show more robust performance when TWS data are incorporated during the parameter estimation, in comparison with the benchmark model constrained against discharge only. This study highlights the value of incorporating multiple data sources during parameter estimation to improve the overall realism of the hydrologic model and its applications over large domains. Rakovec, O., Kumar, R., Attinger, S. and Samaniego, L. (2016): Improving the realism of hydrologic model functioning through multivariate parameter estimation. Water Resour. Res., 52, http://dx.doi.org/10
Parameter estimation and model selection in computational biology.
Directory of Open Access Journals (Sweden)
Gabriele Lillacci
2010-03-01
A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
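The core idea, treating unknown parameters as extra states in an extended Kalman filter, can be sketched for a scalar decay model; this is a generic stand-in with illustrative values, not the paper's heat-shock or gene-regulation models:

```python
import numpy as np

# True system: x' = -theta * x, observed with noise; theta is unknown
rng = np.random.default_rng(5)
dt, n_steps, theta_true = 0.05, 200, 1.2
x, ys = 5.0, []
for _ in range(n_steps):
    x = x - theta_true * x * dt
    ys.append(x + rng.normal(0.0, 0.05))

# EKF on the augmented state z = [x, theta]; theta is modeled as a constant state
z = np.array([4.0, 0.5])                 # initial guesses for state and parameter
P = np.diag([1.0, 1.0])                  # initial covariance
Q = np.diag([1e-6, 1e-6])                # small process noise
R = 0.05 ** 2                            # measurement noise variance
H = np.array([[1.0, 0.0]])               # we observe only x
for y in ys:
    # Predict: f(z) = [x - theta*x*dt, theta], with Jacobian F
    F = np.array([[1.0 - z[1] * dt, -z[0] * dt],
                  [0.0,             1.0]])
    z = np.array([z[0] - z[1] * z[0] * dt, z[1]])
    P = F @ P @ F.T + Q
    # Update with the scalar measurement
    S = H @ P @ H.T + R
    K = (P @ H.T) / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P
theta_hat = z[1]
```

The filter's parameter estimate sharpens while the state is still informative (here, before the decay flattens out), which is why an a posteriori identifiability check, as in the paper, is a sensible follow-up.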
Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model
Custer, Michael
2015-01-01
This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
Procedures for parameter estimates of computational models for localized failure
Iacono, C.
2007-01-01
In the last years, many computational models have been developed for tensile fracture in concrete. However, their reliability is related to the correct estimate of the model parameters, not all directly measurable during laboratory tests. Hence, the development of inverse procedures is needed, that
Parameter Estimation for Single Diode Models of Photovoltaic Modules
Energy Technology Data Exchange (ETDEWEB)
Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.
2015-03-01
Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only the short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions, together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
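For context, the single diode model defines current implicitly, so each point of the I-V curve requires a root solve; this sketch uses assumed parameter values (not ones estimated from any real module) and is not the report's estimation method itself:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative single diode parameters (assumed values, not from a datasheet)
IL, I0, Rs, Rsh, nVth = 5.0, 1e-9, 0.2, 300.0, 1.1   # nVth = diode factor * Ns * kT/q [V]

def diode_current(V):
    """Solve the implicit single diode equation
    I = IL - I0*(exp((V + I*Rs)/nVth) - 1) - (V + I*Rs)/Rsh for I at voltage V."""
    f = lambda I: IL - I0 * np.expm1((V + I * Rs) / nVth) - (V + I * Rs) / Rsh - I
    return brentq(f, -50.0, IL + 1.0)    # current is bracketed for moderate voltages

I_sc = diode_current(0.0)                # short-circuit current (V = 0)
V_oc = brentq(diode_current, 0.1, 26.0)  # open-circuit voltage (I = 0)
```

A parameter estimation method then adjusts (IL, I0, Rs, Rsh, nVth) so that curves generated this way match the measured I-V curves across irradiance and temperature conditions.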
Parameter estimation for groundwater models under uncertain irrigation data
Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of groundwater modeling is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant bias in model parameters and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
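A minimal sketch of the weighting idea, inflating each observation's variance by the error induced by the uncertain input, on a toy linear model; all names and values are hypothetical, and the real IUWLS iterates the weights with the parameter estimates rather than using the true coefficient as done here for brevity:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = np.linspace(1.0, 10.0, n)
pump = np.sin(x)                        # hypothetical source/sink (pumping) signal
pump_sd = 0.3 * np.abs(pump)            # input uncertainty grows with pumping magnitude
a_true, b_true = 2.0, -1.0
# Observation error = measurement noise + error propagated from the uncertain input
y = (a_true * x + b_true * (pump + rng.normal(0.0, pump_sd))
     + rng.normal(0.0, 0.1, n))

A = np.column_stack([x, pump])
ols = np.linalg.lstsq(A, y, rcond=None)[0]       # ordinary least squares

# Weight each residual by its total variance (measurement + input-induced);
# in an iterative scheme b_true would be replaced by the current estimate of b
w = 1.0 / (0.1 ** 2 + (b_true * pump_sd) ** 2)
sw = np.sqrt(w)
wls = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
```

Down-weighting observations dominated by input error is the generalized least-squares intuition behind the IUWLS objective.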
Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms
Berhausen, Sebastian; Paszek, Stefan
2016-01-01
In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to large numbers of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require a current base of parameters for the models of generating units, including the models of synchronous generators. The paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes the filter system used for filtering the noisy measurement waveforms, and gives calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
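The model-averaging step can be sketched as follows. The information-criterion values, prior probabilities, and per-model predictions below are invented for illustration (the report works with seven calibrated variogram models); the weighting rule p(M|D) proportional to prior * exp(-delta_IC/2) is a standard construction.

```python
import math

# illustrative information-criterion values and equal priors for four candidate models
ic = {"exp": 112.3, "gauss": 110.1, "spherical": 118.9, "power": 111.0}
prior = {m: 0.25 for m in ic}

d = {m: v - min(ic.values()) for m, v in ic.items()}         # criterion differences
w = {m: prior[m] * math.exp(-0.5 * d[m]) for m in ic}
z = sum(w.values())
post = {m: w[m] / z for m in ic}                             # posterior model probabilities

pred = {"exp": 1.9, "gauss": 2.1, "spherical": 2.6, "power": 2.0}  # per-model predictions
bma_mean = sum(post[m] * pred[m] for m in ic)
# between-model variance: spread of predictions around the averaged value
# (total predictive variance would add each model's within-model variance)
bma_var = sum(post[m] * (pred[m] - bma_mean) ** 2 for m in ic)
```

Models whose updated probabilities are negligibly small contribute almost nothing to the averaged prediction, which is how the report eliminates some alternatives without discarding all but one.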
A software for parameter estimation in dynamic models
Directory of Open Access Journals (Sweden)
M. Yuceer
2008-12-01
Full Text Available A common problem in dynamic systems is to determine the parameters of an equation used to represent experimental data. The goal is to determine the values of the model parameters that provide the best fit to the measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software packages lack generality, while others are not easy to use. A user-interactive parameter estimation tool was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to solve such problems. For easy implementation of the technique, a parameter estimation software package (PARES) was developed in the MATLAB environment. When tested on extensive example problems from the literature, the suggested approach provided good agreement between predicted and observed data with comparatively little computing time and few iterations.
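The integration-based approach, numerically integrating the model inside a least-squares objective, can be illustrated with a first-order kinetic model. PARES itself is a MATLAB tool; this stdlib-Python sketch with invented data only mirrors the idea.

```python
import math

def rk4(f, y0, ts):
    """Classic 4th-order Runge-Kutta integration over the time grid ts."""
    ys, y = [y0], y0
    for i in range(len(ts) - 1):
        t, h = ts[i], ts[i + 1] - ts[i]
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        ys.append(y)
    return ys

# "experimental" concentrations generated from dC/dt = -k*C with k = 0.5
ts = [0.5 * i for i in range(11)]
data = [10.0 * math.exp(-0.5 * t) for t in ts]

def sse(k):
    """Sum of squared errors between the integrated model and the data for rate k."""
    model = rk4(lambda t, c: -k * c, 10.0, ts)
    return sum((m - d) ** 2 for m, d in zip(model, data))

# scan candidate rates; every objective evaluation integrates the ODE numerically
k_hat = min((sse(i / 100), i / 100) for i in range(10, 100))[1]
```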
Parameter Estimation for a Class of Lifetime Models
Directory of Open Access Journals (Sweden)
Xinyang Ji
2014-01-01
Full Text Available Our purpose in this paper is to present a better method of parameter estimation for a bivariate nonlinear regression model that takes the performance indicator of rubber aging as the dependent variable and time and temperature as the independent variables. We point out that the commonly used two-step method (TSM), which splits the model and estimates the parameters separately, has limitations. Instead, we apply Marquardt’s method (MM) to implement parameter estimation directly for the model and compare the two estimation methods by random simulation. Our results show that MM gives a better data fit, more reasonable parameter estimates, and smaller prediction error than TSM.
Revised models and genetic parameter estimates for production and ...
African Journals Online (AJOL)
Genetic parameters for production and reproduction traits in the Elsenburg Dormer sheep stud were estimated using records of 11743 lambs born between 1943 and 2002. An animal model with direct and maternal additive, maternal permanent and temporary environmental effects was fitted for traits considered traits of the ...
Modeling extreme events: Sample fraction adaptive choice in parameter estimation
Neves, Manuela; Gomes, Ivette; Figueiredo, Fernanda; Gomes, Dora Prata
2012-09-01
When modeling extreme events there are a few primordial parameters, among which are the extreme value index and the extremal index. The extreme value index measures the right tail-weight of the underlying distribution, while the extremal index characterizes the degree of local dependence in the extremes of a stationary sequence. Most semi-parametric estimators of these parameters show the same type of behaviour: nice asymptotic properties, but a high variance for small values of k, the number of upper order statistics used in the estimation, and a high bias for large values of k. This makes a careful choice of k essential. Choosing some well-known estimators of those parameters, we revisit the application of a heuristic algorithm for the adaptive choice of k. The procedure is applied to simulated samples as well as to some real data sets.
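The bias-variance trade-off in k and a simple stability-based adaptive choice can be sketched with the Hill estimator of the extreme value index on simulated Pareto data. The rolling-spread heuristic below is a generic stability rule, not the authors' algorithm, and all sizes are illustrative.

```python
import math
import random

random.seed(3)

n = 2000
gamma_true = 0.5
# inverse-transform sample from a Pareto-type tail with extreme value index 0.5
x = sorted(random.random() ** (-gamma_true) for _ in range(n))

def hill(k):
    """Hill estimator of the extreme value index from the k largest observations."""
    log_threshold = math.log(x[n - k - 1])
    return sum(math.log(x[n - i]) for i in range(1, k + 1)) / k - log_threshold

ks = range(10, 600)
est = [hill(k) for k in ks]

# heuristic: pick k where the estimates are locally most stable (smallest rolling spread)
win = 20
def spread(i):
    seg = est[i:i + win]
    m = sum(seg) / win
    return sum((e - m) ** 2 for e in seg)

i_best = min(range(len(est) - win), key=spread)
k_star, gamma_hat = ks[i_best], est[i_best]
```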
Estimation of Parameters in Latent Class Models with Constraints on the Parameters.
Paulson, James A.
This paper reviews the application of the EM Algorithm to marginal maximum likelihood estimation of parameters in the latent class model and extends the algorithm to the case where there are monotone homogeneity constraints on the item parameters. It is shown that the EM algorithm can be used to obtain marginal maximum likelihood estimates of the…
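A minimal EM iteration for an unconstrained two-class latent class model with three binary items is sketched below; the monotone homogeneity constraints discussed in the paper are omitted, and the parameter values generating the data are invented.

```python
import math
import random

random.seed(7)

# simulate responses to 3 binary items from a 2-class latent class model
pi_true = [0.6, 0.4]
p_true = [[0.9, 0.8, 0.7], [0.2, 0.3, 0.1]]   # P(item j positive | class c)
data = []
for _ in range(500):
    c = 0 if random.random() < pi_true[0] else 1
    data.append([1 if random.random() < p_true[c][j] else 0 for j in range(3)])

pi = [0.5, 0.5]
p = [[0.6, 0.6, 0.6], [0.4, 0.4, 0.4]]
loglik = []
for _ in range(50):
    # E-step: posterior class membership (responsibility) for each response pattern
    resp, ll = [], 0.0
    for y in data:
        f = [pi[c] * math.prod(p[c][j] if y[j] else 1 - p[c][j] for j in range(3))
             for c in range(2)]
        s = f[0] + f[1]
        ll += math.log(s)
        resp.append([fc / s for fc in f])
    loglik.append(ll)
    # M-step: update class proportions and item parameters from the responsibilities
    for c in range(2):
        nc = sum(r[c] for r in resp)
        pi[c] = nc / len(data)
        for j in range(3):
            p[c][j] = sum(r[c] * y[j] for r, y in zip(resp, data)) / nc
```

Each EM sweep cannot decrease the marginal log-likelihood, which is the property the algorithm's convergence argument rests on.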
Parameter estimation in nonlinear models for pesticide degradation
International Nuclear Information System (INIS)
Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.
1991-01-01
A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems became feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data base for parameter estimation. Parameter estimation is a synonym for the statistical and numerical procedures used to derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, its wide use has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear, and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of capturing the dynamics of a process. Realistic models therefore involve the evolution in time (and space), which leads naturally to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
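The contrast drawn here, direct nonlinear estimation instead of linearization, can be illustrated with first-order pesticide decay. For a fixed rate constant the amplitude is a linear least-squares solve, so a one-dimensional scan over the rate gives the nonlinear fit; the data values below are simulated assumptions.

```python
import math
import random

random.seed(2)

# noisy decay data from C(t) = 100 * exp(-0.15 * t), t in days
ts = [0.0, 1.0, 2.0, 4.0, 7.0, 14.0, 21.0, 28.0]
y = [100.0 * math.exp(-0.15 * t) + random.gauss(0.0, 2.0) for t in ts]

def fit(k):
    """For fixed rate k, the best C0 in C(t) = C0*exp(-k*t) is a linear LS solve."""
    basis = [math.exp(-k * t) for t in ts]
    c0 = sum(b * yt for b, yt in zip(basis, y)) / sum(b * b for b in basis)
    err = sum((c0 * b - yt) ** 2 for b, yt in zip(basis, y))
    return err, c0

# direct nonlinear least squares over the rate constant (no log-linearization)
err, c0_hat, k_hat = min(fit(i / 1000) + (i / 1000,) for i in range(10, 400))
```

Fitting in the original (untransformed) scale avoids the distortion a log-linearized regression introduces when the measurement noise is additive.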
A method for model identification and parameter estimation
International Nuclear Information System (INIS)
Bambach, M; Heinkenschloss, M; Herty, M
2013-01-01
We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)
Model parameters estimation and sensitivity by genetic algorithms
International Nuclear Information System (INIS)
Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca
2003-01-01
In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithm (GA) optimization procedure for the estimation of such parameters. The Genetic Algorithm's search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. One possible use of this information is to create and update an archive of the best solutions found at each generation and then to analyze the evolution of the statistics of the archive over successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution, with the aim of extracting information on the importance of the control (decision) variables with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as in most optimization procedures, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and only later those which have little influence on the model outputs. In this sense, besides estimating the parameter values efficiently, the optimization approach also provides a qualitative ranking of their importance in contributing to the model output. The
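A compact sketch of the idea: run a simple GA with an archive of per-generation best solutions, then examine the spread of each decision variable over the archive. The model, operators, and statistics here are simplified assumptions, not the authors' lumped-reactor setup.

```python
import random

random.seed(4)

# target: recover (a, b) in y = a*x + b; the objective is far more sensitive to a
xs = [i / 10 for i in range(40)]
data = [3.0 * x + 0.5 for x in xs]

def fitness(ind):
    a, b = ind
    return -sum((a * x + b - y) ** 2 for x, y in zip(xs, data))

pop = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(40)]
archive = []
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    archive.append(pop[0])                  # archive the best individual per generation
    parents = pop[:20]                      # elitist selection: keep the top half
    children = []
    for _ in range(20):
        (a1, b1), (a2, b2) = random.sample(parents, 2)
        children.append(((a1 + a2) / 2 + random.gauss(0, 0.1),   # crossover + mutation
                         (b1 + b2) / 2 + random.gauss(0, 0.1)))
    pop = parents + children

# spread of each decision variable over the late archive: variables the objective
# is most sensitive to stabilize first (smaller spread)
tail = archive[20:]
def spread(i):
    m = sum(ind[i] for ind in tail) / len(tail)
    return (sum((ind[i] - m) ** 2 for ind in tail) / len(tail)) ** 0.5

spread_a, spread_b = spread(0), spread(1)
```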
Propagation channel characterization, parameter estimation, and modeling for wireless communications
Yin, Xuefeng
2016-01-01
Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...
Estimating model parameters in nonautonomous chaotic systems using synchronization
International Nuclear Information System (INIS)
Yang, Xiaoli; Xu, Wei; Sun, Zhongkui
2007-01-01
In this Letter, a technique is presented for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. The technique uses an adaptive strategy for tracking unknown parameters, in addition to a linear feedback coupling for synchronizing systems; general conditions, derived analytically by means of the periodic version of the LaSalle invariance principle for differential equations, ensure precise evaluation of the unknown parameters and identical synchronization between the experimental system and its corresponding receiver. Examples are presented employing a parametrically excited 4D new oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only quickly tracks the desired parameter values but also responds rapidly to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of the noise strength in simulations.
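The mechanism can be sketched for a scalar nonautonomous system: a response system with linear feedback coupling plus an adaptive law drives the parameter estimate toward the true value. The Lyapunov-based law below is a standard construction on a toy system; the oscillators in the Letter are higher-dimensional.

```python
import math

# drive system:    x' = -a*x + sin(t)         (a unknown to the observer)
# response system: y' = -a_hat*y + sin(t) + k*(x - y)
# adaptive law:    a_hat' = -gamma*(x - y)*y  (from a Lyapunov argument)
a_true, k, gamma = 2.0, 5.0, 5.0
dt, steps = 0.001, 200000
x, y, a_hat = 1.0, 0.0, 0.0
for i in range(steps):
    u = math.sin(i * dt)                 # nonautonomous forcing keeps excitation persistent
    e = x - y                            # synchronization error
    x_new = x + dt * (-a_true * x + u)   # explicit Euler steps, old values throughout
    y_new = y + dt * (-a_hat * y + u + k * e)
    a_hat = a_hat + dt * (-gamma * e * y)
    x, y = x_new, y_new
```

As the two systems synchronize (e -> 0), the adaptive estimate settles at the drive system's true parameter.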
Directory of Open Access Journals (Sweden)
Jonathan R Karr
2015-05-01
Full Text Available Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.
Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models
Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea
2014-05-01
Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represent a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.
Parameter and State Estimator for State Space Models
Directory of Open Access Journals (Sweden)
Ruifeng Ding
2014-01-01
Full Text Available This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
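The elimination idea can be sketched for a second-order SISO system: substituting out the states leaves a difference equation in inputs and outputs only, whose coefficients a least-squares solve recovers. The canonical form, noise-free data, and tiny solver below are simplifying assumptions, not the paper's algorithm.

```python
import random

random.seed(5)

# input-output form after eliminating the states of a canonical 2nd-order system:
# y(k) = -a1*y(k-1) - a2*y(k-2) + b1*u(k-1)
a1, a2, b1 = -1.5, 0.7, 1.0
u = [random.choice([-1.0, 1.0]) for _ in range(300)]   # persistently exciting input
y = [0.0, 0.0]
for k in range(2, 300):
    y.append(-a1 * y[k - 1] - a2 * y[k - 2] + b1 * u[k - 1])

# stack regressors phi(k) = [-y(k-1), -y(k-2), u(k-1)] so that phi(k).theta = y(k)
phi = [[-y[k - 1], -y[k - 2], u[k - 1]] for k in range(2, 300)]
target = y[2:]

def solve3(A, b):
    """Gauss-Jordan elimination for a 3x3 system (no pivot checks, illustrative only)."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        M[i] = [v / M[i][i] for v in M[i]]
        for j in range(3):
            if j != i:
                M[j] = [vj - M[j][i] * vi for vj, vi in zip(M[j], M[i])]
    return [M[i][3] for i in range(3)]

# normal equations of the least-squares problem
AtA = [[sum(p[i] * p[j] for p in phi) for j in range(3)] for i in range(3)]
Atb = [sum(p[i] * t for p, t in zip(phi, target)) for i in range(3)]
theta = solve3(AtA, Atb)   # -> estimates of (a1, a2, b1)
```

With the coefficients identified, the states can in turn be reconstructed from the estimated parameters and the input-output data, as the paper describes; that step is omitted here.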
Parameter and state estimator for state space models.
Ding, Ruifeng; Zhuang, Linfan
2014-01-01
This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
Dynamic systems models new methods of parameter and state estimation
2016-01-01
This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...
Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver
Kang, Ling; Zhou, Liwei
2018-02-01
The Muskingum model is an effective flood routing technique in hydrology and water resources engineering. With the development of optimization technology, many variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. In two real and frequently used case studies, the NVPNLMM obtained better values of the evaluation criteria used to compare the accuracy of flood routing across models, and its optimal estimated outflows were closer to the observed outflows than those of the other models.
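For background, the classical linear Muskingum scheme (constant K and x) and its calibration against an error criterion can be sketched as follows. The NVPNLMM itself is nonlinear with variable parameters, and the hydrograph values below are invented.

```python
def route(inflow, K, x, dt=1.0):
    """Linear Muskingum routing O2 = C0*I2 + C1*I1 + C2*O1 with constant K and x."""
    den = 2 * K * (1 - x) + dt
    c0 = (dt - 2 * K * x) / den
    c1 = (dt + 2 * K * x) / den
    c2 = (2 * K * (1 - x) - dt) / den      # note: c0 + c1 + c2 = 1
    out = [inflow[0]]
    for i in range(1, len(inflow)):
        out.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * out[-1])
    return out

inflow = [22, 23, 35, 71, 103, 111, 109, 100, 86, 71, 59, 47, 39, 32, 28, 24, 22]
observed = route(inflow, 2.3, 0.15)        # synthetic "observed" outflow

# brute-force calibration of (K, x) by sum of squared errors
def sse(K, x):
    return sum((o - m) ** 2 for o, m in zip(observed, route(inflow, K, x)))

best = min((sse(K / 10, x / 100), K / 10, x / 100)
           for K in range(10, 40) for x in range(5, 31))
K_hat, x_hat = best[1], best[2]
```

Variable-parameter formulations replace the constant (K, x) pair with stage- or flow-dependent values, which is what the model in the paper optimizes.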
Bayesian parameter estimation for stochastic models of biological cell migration
Dieterich, Peter; Preuss, Roland
2013-08-01
Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors and the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically, stochastic models are applied, and their parameters are extracted by fitting the models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure relying directly on the positions of the cell's trajectory and the covariance matrix of the positions. It is shown that the covariance is identical to the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data, showing reliable parameter estimation from single cell paths.
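The position-based idea can be sketched for Brownian motion with drift, where Var[x(t)] = sigma^2 * t: estimate the drift from the mean displacement and the diffusion parameter from the position variance across simulated paths. This is a toy version of the procedure, not the paper's full covariance-matrix treatment, and all values are illustrative.

```python
import random

random.seed(6)

v_true, sigma = 0.5, 1.0            # drift and noise intensity
dt, n_steps, n_paths = 0.1, 100, 400
paths = []
for _ in range(n_paths):
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        x += v_true * dt + sigma * dt ** 0.5 * random.gauss(0.0, 1.0)
        path.append(x)
    paths.append(path)

T = n_steps * dt
# drift from the mean end position; diffusion from the position variance,
# using Var[x(T)] = sigma^2 * T for Brownian motion with deterministic drift
v_hat = sum(p[-1] for p in paths) / n_paths / T
mean_T = v_hat * T
var_T = sum((p[-1] - mean_T) ** 2 for p in paths) / (n_paths - 1)
sigma2_hat = var_T / T
```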
Bayesian analysis of inflation: Parameter estimation for single field models
International Nuclear Information System (INIS)
Mortonson, Michael J.; Peiris, Hiranya V.; Easther, Richard
2011-01-01
Future astrophysical data sets promise to strengthen constraints on models of inflation, and extracting these constraints requires methods and tools commensurate with the quality of the data. In this paper we describe ModeCode, a new, publicly available code that computes the primordial scalar and tensor power spectra for single-field inflationary models. ModeCode solves the inflationary mode equations numerically, avoiding the slow roll approximation. It is interfaced with CAMB and CosmoMC to compute cosmic microwave background angular power spectra and perform likelihood analysis and parameter estimation. ModeCode is easily extendable to additional models of inflation, and future updates will include Bayesian model comparison. Errors from ModeCode contribute negligibly to the error budget for analyses of data from Planck or other next generation experiments. We constrain representative single-field models (φ n with n=2/3, 1, 2, and 4, natural inflation, and 'hilltop' inflation) using current data, and provide forecasts for Planck. From current data, we obtain weak but nontrivial limits on the post-inflationary physics, which is a significant source of uncertainty in the predictions of inflationary models, while we find that Planck will dramatically improve these constraints. In particular, Planck will link the inflationary dynamics with the post-inflationary growth of the horizon, and thus begin to probe the ''primordial dark ages'' between TeV and grand unified theory scale energies.
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
Weibull Parameters Estimation Based on Physics of Failure Model
DEFF Research Database (Denmark)
Kostandyan, Erik; Sørensen, John Dalsgaard
2012-01-01
Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and Miner's rule. A threshold model is used … for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull…
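The damage-accumulation step (Miner's rule over a rainflow-counted load spectrum with an S-N life curve) can be sketched as below. The S-N constants and the spectrum are illustrative assumptions, and the Rainflow counting and Weibull fitting themselves are omitted.

```python
# S-N curve N(s) = A * s^(-m): cycles to failure at stress range s (illustrative values)
A, m = 1e12, 3.0

def cycles_to_failure(s):
    return A * s ** (-m)

# rainflow-counted load spectrum per time unit: (stress range, cycle count)
spectrum = [(40.0, 200), (80.0, 50), (120.0, 5)]

# Miner's rule: damage per time unit is the sum of cycle fractions n_i / N_i
damage_rate = sum(n / cycles_to_failure(s) for s, n in spectrum)
life = 1.0 / damage_rate    # time units until the accumulated damage reaches unity
```

The deterministic time at which the linearly accumulated damage reaches one then serves as an estimate of the characteristic life used in the Weibull fit.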
Retrospective forecast of ETAS model with daily parameters estimate
Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang
2016-04-01
We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on daily updating of the free parameters during the background, learning and test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that actually occurred. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecast events, due to model parameters being kept fixed during the test. Moreover, the absence in the learning catalog of an event of magnitude comparable to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model with parameters held fixed during the test period.
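For reference, the ETAS conditional intensity and a forecast event count can be sketched as below. The parameter values and the event list are invented, and the daily maximum-likelihood re-estimation of the parameters, the step this paper is about, is not shown.

```python
import math

# ETAS conditional intensity:
# lambda(t) = mu + sum_i K * exp(alpha*(M_i - Mc)) * (t - t_i + c)^(-p)
mu, K, alpha, c, p, Mc = 0.2, 0.05, 1.2, 0.01, 1.1, 3.0

events = [(0.0, 5.5), (2.0, 4.1), (2.5, 3.6)]   # (time in days, magnitude)

def intensity(t):
    lam = mu                                     # background seismicity rate
    for ti, Mi in events:
        if ti < t:                               # each past event triggers aftershocks
            lam += K * math.exp(alpha * (Mi - Mc)) * (t - ti + c) ** (-p)
    return lam

def expected_count(t0, t1, n=1000):
    """Expected number of events in [t0, t1] by midpoint numerical integration."""
    h = (t1 - t0) / n
    return sum(intensity(t0 + (i + 0.5) * h) for i in range(n)) * h
```

Re-estimating (mu, K, alpha, c, p) every day, as the retrospective model does, lets the forecast track changes in seismicity that fixed parameters miss.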
Improved parameter estimation for hydrological models using weighted objective functions
Stein, A.; Zaadnoordijk, W.J.
1999-01-01
This paper discusses the sensitivity of calibration of hydrological model parameters to different objective functions. Several functions are defined with weights depending upon the hydrological background. These are compared with an objective function based upon kriging. Calibration is applied to
Neural Models: An Option to Estimate Seismic Parameters of Accelerograms
Alcántara, L.; García, S.; Ovando-Shelley, E.; Macías, M. A.
2014-12-01
Seismic instrumentation for recording strong earthquakes in Mexico dates back to the 1960s, owing to the activities carried out by the Institute of Engineering at Universidad Nacional Autónoma de México. However, it was after the big earthquake of September 19, 1985 (M=8.1) that the seismic instrumentation project assumed great importance. Currently, strong ground motion networks have been installed to monitor seismic activity mainly along the Mexican subduction zone and in Mexico City. Nevertheless, there are other major regions and cities that can be affected by strong earthquakes and have not yet begun a seismic instrumentation program, or whose programs are still in development. Because of this situation, some relevant earthquakes (e.g. Huajuapan de León, Oct 24, 1980, M=7.1; Tehuacán, Jun 15, 1999, M=7; and Puerto Escondido, Sep 30, 1999, M=7.5) were not properly recorded in cities such as Puebla and Oaxaca, which were damaged during those earthquakes. Fortunately, the good maintenance work carried out on the seismic network has permitted the recording of an important number of small events in those cities. In this research we present a methodology based on the use of neural networks to estimate significant duration and, in some cases, the response spectra for those seismic events. The neural model developed predicts significant duration in terms of magnitude, epicentral distance, focal depth and soil characterization; for the response spectra we used a vector of spectral accelerations. For training the model we selected a set of accelerogram records obtained from the small events recorded by the strong motion instruments installed in the cities of Puebla and Oaxaca. The final results show that neural networks with a multi-layer feed-forward architecture, used as a soft computing tool, provide good estimates of the target parameters and good predictive capacity for strong ground motion duration and response spectra.
Model-Based Material Parameter Estimation for Terahertz Reflection Spectroscopy
Kniffin, Gabriel Paul
Many materials such as drugs and explosives have characteristic spectral signatures in the terahertz (THz) band. These unique signatures imply great promise for spectral detection and classification using THz radiation. While such spectral features are most easily observed in transmission, real-life imaging systems will need to identify materials of interest from reflection measurements, often in non-ideal geometries. One important, yet commonly overlooked source of signal corruption is the etalon effect -- interference phenomena caused by multiple reflections from dielectric layers of packaging and clothing likely to be concealing materials of interest in real-life scenarios. This thesis focuses on the development and implementation of a model-based material parameter estimation technique, primarily for use in reflection spectroscopy, that takes the influence of the etalon effect into account. The technique is adapted from techniques developed for transmission spectroscopy of thin samples and is demonstrated using measured data taken at the Northwest Electromagnetic Research Laboratory (NEAR-Lab) at Portland State University. Further tests are conducted, demonstrating the technique's robustness against measurement noise and common sources of error.
Parameter estimation of electricity spot models from futures prices
Aihara, ShinIchi; Bagchi, Arunabha; Imreizeeq, E.S.N.; Walter, E.
We consider a slight perturbation of the Schwartz-Smith model for the electricity futures prices and the resulting modified spot model. Using the martingale property of the modified price under the risk neutral measure, we derive the arbitrage free model for the spot and futures prices. We estimate
Application of Parameter Estimation for Diffusions and Mixture Models
DEFF Research Database (Denmark)
Nolsøe, Kim
The first part of this thesis proposes a method to determine the preferred number of structures, their proportions and the corresponding geometrical shapes of an m-membered ring molecule. This is obtained by formulating a statistical model for the data and constructing an algorithm which samples from this model. The second part concerns measurement error models: an estimating function is constructed through projections of some chosen function of Yti+1 onto functions of the previous observations Yti, ..., Yt0. The process of interest Xti+1 is partially observed through a measurement equation Yti+1 = h(Xti+1) + noise, where h(.) is restricted to be a polynomial. Through a simulation study of the CIR process, we compare the obtained estimator with an estimator derived from the extended Kalman filter. The simulation study shows that the two estimation methods perform equally well.
Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials
DEFF Research Database (Denmark)
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2010-01-01
Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficult...
Comparison of parameter estimation algorithms in hydrological modelling
DEFF Research Database (Denmark)
Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan
2006-01-01
Local search methods have been applied successfully in the calibration of simple groundwater models, but might fail to locate the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well for these types of models, although at a higher computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and can be trapped in local regions of attraction. The global SCE procedure is, in general, more effective.
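The contrast the study draws between local and global calibration can be sketched on a toy response surface with two basins: plain gradient descent started in the wrong basin stalls at the poorer optimum, while a crude random multistart (standing in for SCE, not an implementation of it) recovers the global one. The objective below is invented for illustration.

```python
import random
random.seed(1)

# Toy calibration surface with two basins; the global optimum is near k = -1.
def loss(k):
    return (k * k - 1.0) ** 2 + 0.3 * k

def grad(k, eps=1e-6):
    # Central finite-difference gradient.
    return (loss(k + eps) - loss(k - eps)) / (2 * eps)

def local_descent(k, lr=0.01, steps=2000):
    for _ in range(steps):
        k -= lr * grad(k)
    return k

# A local search started in the wrong basin converges to the poorer optimum...
k_local = local_descent(2.0)

# ...while a multistart strategy escapes it.
starts = [random.uniform(-3.0, 3.0) for _ in range(20)]
k_global = min((local_descent(s) for s in starts), key=loss)

print(k_local, loss(k_local), k_global, loss(k_global))
```

SCE is considerably more structured than random multistart (it evolves complexes of points), but the failure mode it guards against is exactly the one shown here.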
House thermal model parameter estimation method for Model Predictive Control applications
van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria
In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results
Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P
2018-01-01
Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
Bayesian estimation of regularization parameters for deformable surface models
Energy Technology Data Exchange (ETDEWEB)
Cunningham, G.S.; Lehovich, A.; Hanson, K.M.
1999-02-20
In this article the authors build on their past attempts to reconstruct a 3D, time-varying bolus of radiotracer from first-pass data obtained by the dynamic SPECT imager, FASTSPECT, built by the University of Arizona. The object imaged is a CardioWest total artificial heart. The bolus is entirely contained in one ventricle and its associated inlet and outlet tubes. The model for the radiotracer distribution at a given time is a closed surface parameterized by 482 vertices that are connected to make 960 triangles, with nonuniform intensity variations of radiotracer allowed inside the surface on a voxel-to-voxel basis. The total curvature of the surface is minimized through the use of a weighted prior in the Bayesian framework, as is the weighted norm of the gradient of the voxellated grid. MAP estimates for the vertices, interior intensity voxels and background count level are produced. The strengths of the priors, or hyperparameters, are determined by maximizing the probability of the data given the hyperparameters, called the evidence. The evidence is calculated by first assuming that the posterior is approximately normal in the values of the vertices and voxels, and then by evaluating the integral of the multi-dimensional normal distribution. This integral (which requires evaluating the determinant of a covariance matrix) is computed by applying a recent algorithm from Bai et al. that calculates the needed determinant efficiently. They demonstrate that the radiotracer is highly inhomogeneous in early time frames, as suspected in earlier reconstruction attempts that assumed a uniform intensity of radiotracer within the closed surface, and that the optimal choice of hyperparameters is substantially different for different time frames.
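The evidence-maximization idea can be illustrated on a deliberately scalar analogue: one parameter observed n times with unit measurement noise and a zero-mean Gaussian prior of precision alpha, for which the log evidence is available in closed form via the matrix determinant lemma and Sherman-Morrison. The model and all numbers are illustrative, not the FASTSPECT reconstruction.

```python
import math, random
random.seed(4)

# Synthetic data: n noisy observations of a single parameter theta, unit
# measurement noise, zero-mean Gaussian prior with precision alpha.
theta_true, n = 2.0, 20
y = [theta_true + random.gauss(0.0, 1.0) for _ in range(n)]
s1, s2 = sum(y), sum(v * v for v in y)

def log_evidence(alpha):
    # Marginal likelihood with theta integrated out:
    # y ~ N(0, I + (1/alpha) * 11^T); determinant lemma gives the logdet,
    # Sherman-Morrison gives the quadratic form.
    logdet = math.log(1.0 + n / alpha)
    quad = s2 - s1 * s1 / (alpha + n)
    return -0.5 * (n * math.log(2 * math.pi) + logdet + quad)

alphas = [10 ** (0.1 * k - 2) for k in range(41)]   # grid from 0.01 to 100
alpha_best = max(alphas, key=log_evidence)
post_mean = s1 / (alpha_best + n)   # posterior mean under the chosen prior
print(alpha_best, post_mean)
```

As in the article, the hyperparameter is chosen by maximizing the evidence rather than by hand; here a weak prior wins because the data genuinely sit far from the prior mean.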
Parameter estimation of component reliability models in PSA model of Krsko NPP
International Nuclear Information System (INIS)
Jordan Cizelj, R.; Vrbanic, I.
2001-01-01
In the paper, the uncertainty analysis of component reliability models for independent failures is shown. The present approach to parameter estimation of component reliability models in NPP Krsko is presented. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with some predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. Bayesian estimation with numerical computation of the posterior, which can be approximated with an appropriate probability distribution (in this paper the lognormal distribution), proved to be the most appropriate uncertainty analysis. (author)
Dynamic ventilation scintigraphy: a comparison of parameter estimation gating models
International Nuclear Information System (INIS)
Hack, S.N.; Paoni, R.A.; Stratton, H.; Valvano, M.; Line, B.R.; Cooper, J.A.
1988-01-01
Two procedures for providing the synchronization of ventilation scintigraphic data to create dynamic displays of the pulmonary cycle are described and compared. These techniques are based on estimating instantaneous lung volume by pneumotachometry and by scintigraphy. Twenty-three patients were studied by these two techniques. The results indicate that the estimates of the times of end-inspiration and end-expiration are equivalent between the two techniques, but the morphologies of the two estimated time-volume waveforms are not. Ventilation cinescintigraphy based on time-division gating, but not on isovolume-division gating, can be equivalently generated from list-mode acquired data by employing either technique described.
Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model
Lamsal, Sunil
2015-01-01
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…
Model-based parameter estimation using cardiovascular response to orthostatic stress
Heldt, T.; Shim, E. B.; Kamm, R. D.; Mark, R. G.
2001-01-01
This paper presents a cardiovascular model that is capable of simulating the short-term response to gravitational stress, together with a gradient-based optimization method that allows for the automated estimation of model parameters from simulated or experimental data. We perform a sensitivity analysis of the transient heart rate response to determine which parameters of the model impact the heart rate dynamics significantly. We subsequently include only those parameters in the estimation routine that impact the transient heart rate dynamics substantially. We apply the estimation algorithm to both simulated and real data and show that restriction to the 20 most important parameters does not impair our ability to match the data.
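The sensitivity-screening step can be sketched with finite differences on a toy transient response; the model form, parameter values and time grid below are invented for illustration and are not the paper's cardiovascular model.

```python
import math

# Hypothetical transient response: baseline plus a decaying deviation.
# p[0] baseline level, p[1] deviation amplitude, p[2] time constant (s).
def model(p, ts):
    return [p[0] + p[1] * math.exp(-t / p[2]) for t in ts]

p_nom = [70.0, 20.0, 5.0]           # nominal parameter values (illustrative)
ts = [0.5 * i for i in range(40)]   # time grid

def sensitivity(i, rel=1e-4):
    # Relative (logarithmic) sensitivity via a forward finite difference,
    # accumulated over the whole time grid.
    p = list(p_nom)
    p[i] *= 1.0 + rel
    base, pert = model(p_nom, ts), model(p, ts)
    return sum(abs(b - q) for b, q in zip(base, pert)) / rel

# Rank parameters by influence; only the top of the ranking would be
# passed on to the estimation routine.
ranking = sorted(range(3), key=sensitivity, reverse=True)
print(ranking, [round(sensitivity(i), 1) for i in range(3)])
```

With these numbers the baseline dominates and would be estimated first; in the paper the same idea selects the 20 most influential of the model's many parameters.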
Revised models and genetic parameter estimates for production and ...
African Journals Online (AJOL)
The corresponding phenotypic, environmental and ewe permanent environmental correlations were all medium to high and estimated with a fair degree of accuracy, as indicated by low standard errors. The genetic relationship between weaning weight of the ewe and her lifetime reproduction (accumulated over four lambing ...
Development of simple kinetic models and parameter estimation for ...
African Journals Online (AJOL)
2016-09-28
In this study, unstructured models based on a growth kinetic equation, fed-batch mass balance and constancy of cell and protein yields were developed and constructed for the substrates glycerol and methanol. The growth model on glycerol has mostly been published, while the growth model ...
Kinetic models and parameters estimation study of biomass and ...
African Journals Online (AJOL)
The growth kinetics and modeling of ethanol production from inulin by Pichia caribbica (KC977491) were studied in a batch system. Unstructured models were proposed using the logistic equation for growth, the Luedeking-Piret equation for ethanol production and the modified Luedeking-Piret model for substrate consumption.
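A minimal sketch of this estimation problem: simulate biomass with a logistic curve, generate product via the Luedeking-Piret rate law dP/dt = a·dX/dt + b·X, and recover the growth- and non-growth-associated coefficients by linear least squares. All parameter values are illustrative, not fitted values from the study.

```python
import math

# Logistic biomass curve with assumed parameters (illustrative values).
mu, X0, Xmax = 0.4, 0.1, 5.0
def X(t):
    return Xmax / (1 + (Xmax / X0 - 1) * math.exp(-mu * t))

a_true, b_true = 2.0, 0.05   # growth- / non-growth-associated coefficients
dt = 0.1
ts = [i * dt for i in range(300)]

# Generate product data from dP/dt = a*dX/dt + b*X (Euler integration).
P = [0.0]
for t in ts[:-1]:
    dX = X(t + dt) - X(t)
    P.append(P[-1] + a_true * dX + b_true * X(t) * dt)

# Recover (a, b): the rate law is linear in the coefficients, so ordinary
# least squares on (dX/dt, X) -> dP/dt suffices (2x2 normal equations).
S11 = S12 = S22 = r1 = r2 = 0.0
for i in range(len(ts) - 1):
    u = (X(ts[i] + dt) - X(ts[i])) / dt   # dX/dt
    v = X(ts[i])                          # X
    dP = (P[i + 1] - P[i]) / dt
    S11 += u * u; S12 += u * v; S22 += v * v
    r1 += u * dP; r2 += v * dP
det = S11 * S22 - S12 * S12
a_hat = (r1 * S22 - r2 * S12) / det
b_hat = (S11 * r2 - S12 * r1) / det
print(a_hat, b_hat)
```

On noise-free data the coefficients are recovered essentially exactly; with real measurements the same regression would be applied to smoothed derivatives.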
Parameter estimation for ARMA models with infinite variance innovations
Mikosch, T.; Gadrich, T.; Kluppelberg, C.; Adler, R.J.
We consider a standard ARMA process of the form phi(B)X(t) = theta(B)Z(t), where the innovations Z(t) belong to the domain of attraction of a stable law, so that neither the Z(t) nor the X(t) have a finite variance. Our aim is to estimate the coefficients of phi and theta. Since maximum likelihood
DEFF Research Database (Denmark)
Ditlevsen, Susanne; Yip, Kay-Pong; Holstein-Rathlou, N.-H.
2005-01-01
A key parameter in the understanding of renal hemodynamics is the gain of the feedback function in the tubuloglomerular feedback mechanism. A dynamic model of autoregulation of renal blood flow and glomerular filtration rate has been extended to include a stochastic differential equations model. The data have been analyzed, and the parameters characterizing the gain and the delay have been estimated. There was good agreement between the estimated values and the values obtained for the same parameters in independent, previously published experiments.
Mathematical modelling in blood coagulation : simulation and parameter estimation
W.J.H. Stortelder (Walter); P.W. Hemker (Piet); H.C. Hemker
1997-01-01
This paper describes the mathematical modelling of a part of the blood coagulation mechanism. The model includes the activation of factor X by a purified enzyme from Russel's Viper Venom (RVV), factor V and prothrombin, and also comprises the inactivation of the products formed. In this
P. Pappas, George; A. Zohdy, Mohamed
2017-01-01
In this paper, accurate parameter estimation, higher-order state-space prediction methods and an Extended Kalman Filter (EKF) for modeling shadow power in wireless mobile communications are developed. Path-loss parameter estimation models are compared and evaluated. Shadow power estimation methods in wireless cellular communications are very important for power control of the mobile device and base station. The methods are validated and compared to existing methods, Kalman Filter (KF) with...
Parameter Estimation and Prediction of a Nonlinear Storage Model: an algebraic approach
Doeswijk, T.G.; Keesman, K.J.
2005-01-01
Generally, parameters that are nonlinear in system models are estimated by nonlinear least-squares optimization algorithms. In this paper, for a nonlinear discrete-time model with a polynomial quotient structure in input, output and parameters, a method is proposed to re-parameterize the model such
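The algebraic idea can be sketched on a hypothetical model of this class: multiplying out the polynomial quotient yields a form that is linear in the parameters, so ordinary least squares replaces nonlinear optimization. The model structure and coefficients below are invented for illustration.

```python
import random
random.seed(2)

# Hypothetical storage-like model, nonlinear in b:
#   y[k] = a*u[k-1] / (1 + b*y[k-1])
# Multiplying out gives  y[k] = a*u[k-1] - b*(y[k]*y[k-1]),
# which is linear in (a, b) and solvable by ordinary least squares.
a_true, b_true = 1.5, 0.3
y, u = [0.5], []
for k in range(200):
    u.append(random.uniform(0.5, 1.5))
    y.append(a_true * u[-1] / (1 + b_true * y[-1]))

# Build the regressors and solve the 2x2 normal equations.
S11 = S12 = S22 = r1 = r2 = 0.0
for k in range(1, len(y)):
    p, q = u[k - 1], -y[k] * y[k - 1]
    S11 += p * p; S12 += p * q; S22 += q * q
    r1 += p * y[k]; r2 += q * y[k]
det = S11 * S22 - S12 * S12
a_hat = (r1 * S22 - r2 * S12) / det
b_hat = (S11 * r2 - S12 * r1) / det
print(a_hat, b_hat)
```

Note the caveat that applies to the real method as well: with noisy outputs the regressor y[k]·y[k-1] contains noise, so the plain least-squares estimate becomes biased and an errors-in-variables correction is needed.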
Uncertainty of Modal Parameters Estimated by ARMA Models
DEFF Research Database (Denmark)
Jensen, Jakob Laigaard; Brincker, Rune; Rytter, Anders
The uncertainty of modal parameters estimated by ARMA models is investigated by a simulation study of a lightly damped single-degree-of-freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and the number of sampled points may play a significant role with respect to the statistical errors. Furthermore...
Development of simple kinetic models and parameter estimation for ...
African Journals Online (AJOL)
In order to describe and predict the growth and expression of recombinant proteins by using a genetically modified Pichia pastoris, we developed a number of unstructured models based on growth kinetic equation, fed-batch mass balance and the assumptions of constant cell and protein yields. The growth of P. pastoris on ...
Continuum model for masonry: Parameter estimation and validation
Lourenço, P.B.; Rots, J.G.; Blaauwendraad, J.
1998-01-01
A novel yield criterion that includes different strengths along each material axis is presented. The criterion includes two different fracture energies in tension and two different fracture energies in compression. The ability of the model to represent the inelastic behavior of orthotropic materials
Parameter estimation and uncertainty assessment in hydrological modelling
DEFF Research Database (Denmark)
Blasone, Roberta-Serena
Rational and efficient water resources management requires insight into and understanding of the hydrological processes, as well as precise assessments of the available water quantities in both surface water and groundwater reservoirs. For this purpose, hydrological models are an indispensable tool. Over the most recent 1...
Parameter Estimation for Dynamic Model of the Financial System
Directory of Open Access Journals (Sweden)
Veronika Novotná
2015-01-01
The economy can be considered a large, open system influenced by both internal and external fluctuations. Based on non-linear dynamics theory, dynamic models of a financial system try to provide a new perspective by explaining the complicated behaviour of the system not as a result of external influences or random behaviour, but as a result of the behaviour and trends of the system's internal structures. The present article analyses a chaotic financial system from the point of view of determining the time delay of the model variables: the interest rate, investment demand, and the price index. The theory is briefly explained in the first chapters of the paper and serves as a basis for formulating the relations. This article aims to determine the appropriate length of the time delay of the variables in a dynamic model of the financial system, in order to express the real economic situation and respect the effect of the history of the factors under consideration. The delay length is determined for time series representing the Euro area, and the methodology is illustrated by a concrete example.
Parameter Estimation of Structural Equation Modeling Using Bayesian Approach
Directory of Open Access Journals (Sweden)
Dewi Kurnia Sari
2016-05-01
Leadership is a process of influencing, directing or setting an example for employees in order to achieve the objectives of the organization, and is a key element in the effectiveness of the organization. In addition to leadership style, the success of an organization or company in achieving its objectives can also be influenced by organizational commitment, the commitment created by each individual for the betterment of the organization. The purpose of this research is to obtain a model of leadership style and organizational commitment with respect to job satisfaction and employee performance, and to determine the factors that influence job satisfaction and employee performance, using SEM with a Bayesian approach. This research was conducted on 15 Statistics FNI employees in Malang. The results showed that in the measurement model all indicators significantly measure their latent variables. In the structural model, Leadership Style and Organizational Commitment have significant direct effects on Job Satisfaction, and Job Satisfaction has a significant effect on Employee Performance, while the direct effects of Leadership Style and Organizational Commitment on Employee Performance are not significant.
Online Estimation of Model Parameters of Lithium-Ion Battery Using the Cubature Kalman Filter
Tian, Yong; Yan, Rusheng; Tian, Jindong; Zhou, Shijie; Hu, Chao
2017-11-01
Online estimation of state variables, including state-of-charge (SOC), state-of-energy (SOE) and state-of-health (SOH), is crucial for the operational safety of lithium-ion batteries. In order to improve the estimation accuracy of these state variables, a precise battery model needs to be established. As the lithium-ion battery is a nonlinear time-varying system, the model parameters vary significantly with many factors, such as ambient temperature, discharge rate and depth of discharge. This paper presents an online estimation method of model parameters for lithium-ion batteries based on the cubature Kalman filter. The commonly used first-order resistor-capacitor equivalent circuit model is selected as the battery model, based on which the model parameters are estimated online. Experimental results show that the presented method can accurately track the parameter variations under different scenarios.
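A minimal offline sketch of the underlying identification problem, using a pulse test rather than the paper's cubature Kalman filter: simulate the first-order RC equivalent circuit, then recover R0 from the instantaneous voltage step at current switch-off and R1, C1 from the exponential relaxation. All parameter values are illustrative.

```python
import math

# Assumed first-order RC equivalent-circuit parameters (illustrative only).
OCV, R0, R1, C1 = 3.7, 0.05, 0.02, 2000.0   # V, ohm, ohm, F
dt, I = 1.0, 2.0                             # s, A
a = math.exp(-dt / (R1 * C1))                # discrete RC decay factor

# Simulate a constant-current discharge pulse followed by relaxation.
v1, volts = 0.0, []
for k in range(600):
    i_k = I if k < 300 else 0.0
    v1 = a * v1 + R1 * (1.0 - a) * i_k
    volts.append(OCV - R0 * i_k - v1)

# R0 from the instantaneous step when the current switches off (the small
# contribution of the relaxing RC branch is neglected here).
R0_hat = (volts[300] - volts[299]) / I

# Time constant and R1 from the exponential relaxation of the RC branch.
w = [OCV - v for v in volts[300:]]    # equals v1 during relaxation
a_hat = w[1] / w[0]
tau_hat = -dt / math.log(a_hat)       # estimate of R1*C1
R1_hat = w[0] / (a_hat * I)           # v1 just before switch-off, over I
C1_hat = tau_hat / R1_hat
print(R0_hat, R1_hat, C1_hat)
```

The cubature Kalman filter in the paper does the same job recursively and online, which is what lets it track the temperature- and SOC-dependent drift of these parameters.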
Parameter Estimation and Model Validation of Nonlinear Dynamical Networks
Energy Technology Data Exchange (ETDEWEB)
Abarbanel, Henry [Univ. of California, San Diego, CA (United States); Gill, Philip [Univ. of California, San Diego, CA (United States)
2015-03-31
In the performance period of this work under a DOE contract, the co-PIs, Philip Gill and Henry Abarbanel, developed new methods for statistical data assimilation for problems of DOE interest, including geophysical and biological problems. This included numerical optimization algorithms for variational principles, and new parallel processing Monte Carlo routines for performing the path integrals of statistical data assimilation. These results have been summarized in the monograph "Predicting the Future: Completing Models of Observed Complex Systems" by Henry Abarbanel, published by Springer-Verlag in June 2013. Additional results and details have appeared in the peer reviewed literature.
PARAMETER ESTIMATION AND MODEL SELECTION FOR INDOOR ENVIRONMENTS BASED ON SPARSE OBSERVATIONS
Directory of Open Access Journals (Sweden)
Y. Dehbi
2017-09-01
This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
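The AIC/BIC selection step can be sketched on a toy regression: fit competing polynomial models by least squares and score them with AIC and BIC computed from the residual sum of squares. The data-generating model and all numbers below are invented for illustration, not the floorplan models of the paper.

```python
import math, random
random.seed(3)

def fit_poly(xs, ys, deg):
    # Least-squares polynomial fit via normal equations plus Gaussian
    # elimination with partial pivoting.
    m = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                      # forward elimination
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for i in reversed(range(m)):              # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, m))) / A[i][i]
    return coef

def rss(xs, ys, coef):
    return sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
               for x, y in zip(xs, ys))

# Synthetic data from a quadratic trend with noise.
xs = [0.1 * i for i in range(50)]
ys = [1.0 + 0.5 * x * x + random.gauss(0.0, 0.2) for x in xs]

n = len(xs)
scores = {}
for deg in (1, 2):
    k = deg + 1
    r = rss(xs, ys, fit_poly(xs, ys, deg))
    scores[deg] = (n * math.log(r / n) + 2 * k,            # AIC
                   n * math.log(r / n) + k * math.log(n))  # BIC
best_aic = min(scores, key=lambda d: scores[d][0])
best_bic = min(scores, key=lambda d: scores[d][1])
print(best_aic, best_bic)
```

Both criteria reject the underfit linear model here; they differ only in how strongly the parameter count k is penalized (2k versus k·ln n).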
Zayane, Chadia
2014-06-01
In this paper, we address a special case of state and parameter estimation, where the system can be put in a cascade form that allows the state components and the set of unknown parameters to be estimated separately. Inspired by the nonlinear balloon hemodynamic model for the functional Magnetic Resonance Imaging problem, we propose a hierarchical approach. The system is divided into two subsystems in cascade. The state and input are first estimated from a noisy measured signal using an adaptive observer. The obtained input is then used to estimate the parameters of a linear system using the modulating functions method. Some numerical results are presented to illustrate the efficiency of the proposed method.
Le, Vu H.; Buscaglia, Robert; Chaires, Jonathan B.; Lewis, Edwin A.
2013-01-01
Isothermal Titration Calorimetry (ITC) is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g. Keq (or ΔG), ΔH, ΔS, and n) for a ligand binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combining equilibrium constant, mass balance, and charge balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example one binding site, two binding sites, sequential sites, and n independent binding sites. More complex models, for example three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding, need to be developed on a case-by-case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the non-linear regression analysis of a multiple binding site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g. up to nine parameters in the three binding site model) yields thermodynamic parameters with acceptable accuracy. PMID:23262283
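A minimal sketch of fitting the simplest such model, a one-site (1:1) binding equilibrium, to synthetic injection heats: the bound-complex concentration follows from the mass-action quadratic, and a coarse grid search stands in for the paper's non-linear regression. Dilution heats and displacement effects are ignored, and all concentrations and thermodynamic values are illustrative.

```python
import math

def ml_conc(mt, lt, K):
    # Bound complex [ML] for 1:1 binding, from the mass-action quadratic.
    s = mt + lt + 1.0 / K
    return (s - math.sqrt(s * s - 4.0 * mt * lt)) / 2.0

V, mt = 1.4e-3, 1e-5              # cell volume (L), macromolecule conc (M)
K_true, dH_true = 1e6, -10000.0   # 1/M and cal/mol (illustrative values)

# Total titrant concentration in the cell after each injection.
lts = [i * 2e-6 for i in range(1, 16)]

def heats(K, dH):
    # Injection heats are differences of the cumulative binding heat.
    Q = [V * dH * ml_conc(mt, lt, K) for lt in lts]
    return [Q[0]] + [Q[i] - Q[i - 1] for i in range(1, len(Q))]

data = heats(K_true, dH_true)     # noise-free synthetic thermogram

# Grid search over (log10 K, dH); real analyses use non-linear regression
# (e.g. Levenberg-Marquardt) instead of a grid.
best = min(((lk, dh) for lk in [5.0 + 0.25 * i for i in range(9)]
                     for dh in [-12000.0 + 500.0 * j for j in range(9)]),
           key=lambda p: sum((a - b) ** 2
                             for a, b in zip(data, heats(10 ** p[0], p[1]))))
print(best)
```

The multi-site models of the paper extend exactly this structure: more equilibrium and mass-balance equations feeding the same residual-minimization loop.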
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.
Directory of Open Access Journals (Sweden)
Andrew White
2016-12-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model.
Estimation of the Malthusian parameter in a stochastic epidemic model using martingale methods.
Lindenstrand, David; Svensson, Åke
2013-12-01
Data on the number of infected individuals gathered from a large epidemic outbreak can be used to estimate parameters related to the strength and speed of the spread. The Malthusian parameter, which determines the initial growth rate of the epidemic, is often of crucial interest. Using a simple SEIR epidemic model with known generation time distribution, we define and analyze an estimate based on martingale methods. We derive asymptotic properties of the estimate and compare them to the results from simulations of the epidemic. The estimate uses all the information contained in the epidemic curve, in contrast to estimates which only use data from the start of the outbreak.
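For contrast, the simple start-of-outbreak estimate mentioned at the end of the abstract (not the paper's martingale estimator) can be sketched directly: during the initial exponential phase, the Malthusian parameter is the slope of log case counts versus time. The counts below are synthetic and noise-free.

```python
import math

# Synthetic daily case counts growing exponentially (alpha = 0.2 per day).
alpha_true, i0 = 0.2, 5.0
counts = [i0 * math.exp(alpha_true * t) for t in range(15)]

# Log-linear least squares: the slope of log(counts) against time is the
# Malthusian parameter.
n = len(counts)
xs = list(range(n))
ys = [math.log(c) for c in counts]
xbar, ybar = sum(xs) / n, sum(ys) / n
alpha_hat = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
print(alpha_hat)
```

This estimator discards everything after the exponential phase, which is precisely the inefficiency the martingale approach of the paper is designed to avoid.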
Bayesian parameter estimation in dynamic population model via particle Markov chain Monte Carlo
Directory of Open Access Journals (Sweden)
Meng Gao
2012-12-01
In nature, population dynamics are subject to multiple sources of stochasticity. State-space models (SSMs) provide an ideal framework for incorporating both environmental noise and measurement errors into dynamic population models. In this paper, we present a recently developed method, Particle Markov Chain Monte Carlo (Particle MCMC), for parameter estimation in nonlinear SSMs. We use one effective algorithm of Particle MCMC, the Particle Gibbs sampling algorithm, to estimate the parameters of a state-space model of population dynamics. The posterior distributions of the parameters are derived given conjugate prior distributions. Numerical simulations showed that the model parameters can be accurately estimated whether the deterministic model is stable, periodic or chaotic. Moreover, we fit the model to 16 representative time series from the Global Population Dynamics Database (GPDD). It is verified that the results of parameter and state estimation using the Particle Gibbs sampling algorithm are satisfactory for the majority of time series. For the other time series, the quality of parameter estimation can also be improved if the priors are further constrained by available knowledge. In conclusion, the Particle Gibbs sampling algorithm provides a new Bayesian parameter inference method for studying population dynamics.
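The building block inside Particle MCMC is a particle filter; a bootstrap filter for a toy logistic state-space model can be sketched as below. The dynamics, noise levels and particle count are illustrative, and the outer Particle Gibbs parameter-sampling loop of the paper is not shown.

```python
import math, random
random.seed(5)

# Logistic state-space model (illustrative parameters):
#   x[t+1] = r * x[t] * (1 - x[t]) + process noise
#   y[t]   = x[t] + observation noise
r, q_sd, o_sd, T, N = 3.0, 0.01, 0.1, 100, 1000

x, xs, ys = 0.4, [], []
for _ in range(T):
    x = min(max(r * x * (1 - x) + random.gauss(0, q_sd), 0.001), 0.999)
    xs.append(x)
    ys.append(x + random.gauss(0, o_sd))

# Bootstrap particle filter: propagate particles through the dynamics,
# weight by the Gaussian observation density, then resample.
parts = [random.uniform(0.2, 0.8) for _ in range(N)]
est = []
for y in ys:
    parts = [min(max(r * p * (1 - p) + random.gauss(0, q_sd), 0.001), 0.999)
             for p in parts]
    ws = [math.exp(-0.5 * ((y - p) / o_sd) ** 2) for p in parts]
    tot = sum(ws)
    est.append(sum(w * p for w, p in zip(ws, parts)) / tot)
    parts = random.choices(parts, weights=ws, k=N)   # multinomial resampling

rmse_f = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, xs)) / T)
rmse_y = math.sqrt(sum((y - t) ** 2 for y, t in zip(ys, xs)) / T)
print(rmse_f, rmse_y)
```

Because the dynamics are informative, the filtered state track is markedly closer to the truth than the raw observations; Particle Gibbs wraps this filter in an MCMC sweep over the model parameters.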
A new method to estimate parameters of linear compartmental models using artificial neural networks
International Nuclear Information System (INIS)
Gambhir, Sanjiv S.; Keppenne, Christian L.; Phelps, Michael E.; Banerjee, Pranab K.
1998-01-01
At present, the preferred tool for parameter estimation in compartmental analysis is an iterative procedure: weighted nonlinear regression. For a large number of applications, observed data can be fitted to sums of exponentials whose parameters are directly related to the rate constants/coefficients of the compartmental models. Since weighted nonlinear regression often has to be repeated for many different data sets, the process of fitting data from compartmental systems can be very time consuming. Furthermore, the minimization routine often converges to a local (as opposed to global) minimum. In this paper, we examine the possibility of using artificial neural networks instead of weighted nonlinear regression in order to estimate model parameters. We train simple feed-forward neural networks to produce as outputs the parameter values of a given model when kinetic data are fed to the networks' input layer. The artificial neural networks produce unbiased estimates and are orders of magnitude faster than regression algorithms. At noise levels typical of many real applications, the neural networks are found to produce lower variance estimates than weighted nonlinear regression in the estimation of parameters from mono- and biexponential models. These results are primarily due to the inability of weighted nonlinear regression to converge. These results establish that artificial neural networks are powerful tools for estimating parameters for simple compartmental models. (author)
Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan
2012-01-01
Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
A framework for scalable parameter estimation of gene circuit models using structural information
Kuwahara, Hiroyuki
2013-06-21
Motivation: Systematic and scalable parameter estimation is a key to construct complex gene regulatory models and to ultimately facilitate an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Results: Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that more tailored approaches that use domain-specific information may be key to reverse engineering complex biological systems. © The Author 2013.
Parameter estimation and sensitivity analysis for a mathematical model with time delays of leukemia
Cândea, Doina; Halanay, Andrei; Rǎdulescu, Rodica; Tǎlmaci, Rodica
2017-01-01
We consider a system of nonlinear delay differential equations that describes the interaction between three competing cell populations: healthy, leukemic, and anti-leukemia T cells involved in Chronic Myeloid Leukemia (CML) under treatment with Imatinib. The aim of this work is to establish, using a sensitivity analysis of the model parameters, which parameters are the most important in the success or failure of leukemia remission under treatment. For the parameters that most affect the evolution of CML during Imatinib treatment, we estimate realistic values from experimental data. For these parameters, steady states are calculated and their stability is analyzed and interpreted biologically.
A robust methodology for kinetic model parameter estimation for biocatalytic reactions
DEFF Research Database (Denmark)
Al-Haque, Naweed; Andrade Santacoloma, Paloma de Gracia; Lima Afonso Neto, Watson
2012-01-01
Effective estimation of parameters in biocatalytic reaction kinetic expressions is very important when building process models to enable evaluation of process technology options and alternative biocatalysts. The kinetic models used to describe enzyme-catalyzed reactions generally include several...... parameters, which are strongly correlated with each other. State-of-the-art methodologies such as nonlinear regression (using progress curves) or graphical analysis (using initial rate data, for example, the Lineweaver-Burk plot, Hanes plot or Dixon plot) often incorporate errors in the estimates and rarely...... lead to globally optimized parameter values. In this article, a robust methodology to estimate parameters for biocatalytic reaction kinetic expressions is proposed. The methodology determines the parameters in a systematic manner by exploiting the best features of several of the current approaches......
Bayesian parameter estimation and interpretation for an intermediate model of tree-ring width
Directory of Open Access Journals (Sweden)
S. E. Tolwinski-Ward
2013-07-01
We present a Bayesian model for estimating the parameters of the VS-Lite forward model of tree-ring width for a particular chronology and its local climatology. The scheme also provides information about the uncertainty of the parameter estimates, as well as the model error in representing the observed proxy time series. By inferring VS-Lite's parameters independently for synthetically generated ring-width series at several hundred sites across the United States, we show that the algorithm is skillful. We also infer optimal parameter values for modeling observed ring-width data at the same network of sites. The estimated parameter values covary in physical space, and their locations in multidimensional parameter space provide insight into the dominant climatic controls on modeled tree-ring growth at each site as well as the stability of those controls. The estimation procedure is useful for forward and inverse modeling studies using VS-Lite to quantify the full range of model uncertainty stemming from its parameterization.
The performance of simulated annealing in parameter estimation for vapor-liquid equilibrium modeling
Directory of Open Access Journals (Sweden)
A. Bonilla-Petriciolet
2007-03-01
In this paper we report the application and evaluation of the simulated annealing (SA) optimization method for parameter estimation in vapor-liquid equilibrium (VLE) modeling. We tested this optimization method using the classical least-squares and error-in-variables approaches. The reliability and efficiency of the data-fitting procedure are also considered using different values for the algorithm parameters of the SA method. Our results indicate that this method, when properly implemented, is a robust procedure for nonlinear parameter estimation in thermodynamic models. However, in difficult problems it can still converge to local optima of the objective function.
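A minimal version of the SA procedure can be shown on a two-parameter least-squares fit; the model y = a·exp(b·x), the synthetic data, and the cooling schedule are illustrative assumptions rather than an actual VLE system.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic two-parameter fitting problem (illustrative, not a real VLE system)
x = np.linspace(0.1, 1.0, 15)
y = 2.0 * np.exp(-1.5 * x) + 0.01 * rng.standard_normal(x.size)

def sse(p):
    a, b = p
    return np.sum((y - a * np.exp(b * x)) ** 2)

def anneal(f, p0, T0=1.0, cooling=0.995, steps=4000, step=0.1):
    """Basic simulated annealing with a geometric cooling schedule."""
    p = np.asarray(p0, dtype=float)
    fp = f(p)
    best, fbest = p.copy(), fp
    T = T0
    for _ in range(steps):
        cand = p + step * rng.standard_normal(p.size)
        fc = f(cand)
        # Metropolis rule: always accept improvements, accept uphill
        # moves with probability exp(-(fc - fp) / T)
        if fc < fp or rng.random() < np.exp(-(fc - fp) / T):
            p, fp = cand, fc
            if fp < fbest:
                best, fbest = p.copy(), fp
        T *= cooling
    return best, fbest

p_best, f_best = anneal(sse, [1.0, 0.0])
print(p_best)  # should land near the generating values (2.0, -1.5)
```

The cooling rate and step size play the role of the "algorithm parameters" whose settings the abstract reports as critical for reliability.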
Physical parameter estimation in spatial heat transport models with an application to food storage
Mourik, van S.; Vries, D.; Ploegaert, J.P.M.; Zwart, H.; Keesman, K.J.
2012-01-01
Parameter estimation plays an important role in physical modelling, but can be problematic due to the complexity of spatiotemporal models that are used for analysis, control and design in industry. In this paper we aim to circumvent these problems by using a methodology that approximates a model, or
Parameter estimation and analysis of an automotive heavy-duty SCR catalyst model
DEFF Research Database (Denmark)
Åberg, Andreas; Widd, Anders; Abildskov, Jens
2017-01-01
A single channel model for a heavy-duty SCR catalyst was derived based on first principles. The model considered heat and mass transfer between the channel gas phase and the wash coat phase. The parameters of the kinetic model were estimated using bench-scale monolith isothermal data. Validation ...
See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.
2018-04-01
This research aims to estimate the parameters of the Monod model of growth of the microalga Botryococcus braunii sp. by the least-squares method. The Monod equation is a nonlinear equation that can be transformed into linear form and solved by least-squares linear regression. Alternatively, the Gauss-Newton method solves the nonlinear least-squares problem directly, obtaining the Monod parameter values by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for Botryococcus braunii sp. can be estimated by either approach. However, the parameter values obtained by the nonlinear least-squares method are more accurate than those from the linear least-squares method, since the nonlinear fit attains a lower SSE.
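Both estimation routes from the abstract can be compared on synthetic data: a linearized fit via the Lineweaver-Burk transform 1/μ = (Ks/μmax)(1/S) + 1/μmax, followed by Gauss-Newton refinement of the nonlinear SSE. The substrate levels and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

S = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0])   # substrate concentrations
mu_max_true, Ks_true = 1.2, 1.5
mu = mu_max_true * S / (Ks_true + S) + 0.02 * rng.standard_normal(S.size)

def sse(p):
    mu_max, Ks = p
    return np.sum((mu - mu_max * S / (Ks + S)) ** 2)

# (1) linearized fit: 1/mu = (Ks/mu_max)(1/S) + 1/mu_max
slope, intercept = np.polyfit(1 / S, 1 / mu, 1)
mu_max_lin = 1 / intercept
Ks_lin = slope * mu_max_lin

# (2) Gauss-Newton iterations on the original nonlinear residuals,
# started from the linearized estimates
p = np.array([mu_max_lin, Ks_lin])
for _ in range(20):
    mu_max, Ks = p
    r = mu - mu_max * S / (Ks + S)                     # residuals
    J = np.column_stack([-S / (Ks + S),                # d r / d mu_max
                         mu_max * S / (Ks + S) ** 2])  # d r / d Ks
    p = p - np.linalg.solve(J.T @ J, J.T @ r)

print(sse(p), sse([mu_max_lin, Ks_lin]))  # nonlinear fit attains a lower SSE
```

The linearized fit distorts the error structure (noise in 1/μ is amplified at low substrate levels), which is why the nonlinear fit is preferred when accuracy matters.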
Estimating Parameter Uncertainty in Binding-Energy Models by the Frequency-Domain Bootstrap
Bertsch, G. F.; Bingham, Derek
2017-12-01
We propose using the frequency-domain bootstrap (FDB) to estimate errors of modeling parameters when the modeling error is itself a major source of uncertainty. Unlike the usual bootstrap or the simple χ2 analysis, the FDB can take into account correlations between errors. It is also very fast compared to the Gaussian-process Bayesian estimate as often implemented for computer model calibration. The method is illustrated with a simple example, the liquid drop model of nuclear binding energies. We find that the FDB gives a more conservative estimate of the uncertainty in liquid drop parameters than the χ2 method, and is in fair accord with more empirical estimates. For the nuclear physics application, there are no apparent obstacles to applying the method to the more accurate and detailed models based on density-functional theory.
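The FDB itself resamples in the frequency domain to preserve error correlations; as a point of reference, the ordinary residual bootstrap it improves upon can be sketched on a toy two-term liquid-drop fit. The coefficients are illustrative, and the independent-error assumption made here is precisely what the FDB relaxes.

```python
import numpy as np

rng = np.random.default_rng(4)

# toy two-term "liquid drop": E_B = a_v*A - a_s*A^(2/3), linear in (a_v, a_s)
A = np.arange(20.0, 240.0, 10.0)
X = np.column_stack([A, -A ** (2.0 / 3.0)])
theta_true = np.array([15.6, 17.2])        # illustrative coefficients (MeV)
y = X @ theta_true + 2.0 * rng.standard_normal(A.size)  # independent toy errors

theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ theta_hat

# ordinary residual bootstrap: resample residuals, refit, collect parameters
boot = np.empty((1000, 2))
for i in range(1000):
    y_star = X @ theta_hat + rng.choice(resid, resid.size, replace=True)
    boot[i], *_ = np.linalg.lstsq(X, y_star, rcond=None)

se = boot.std(axis=0)   # bootstrap standard errors for (a_v, a_s)
print(se)
```

With correlated modeling errors, this resampling understates the uncertainty; replacing the draw of `resid` with resampling of its Fourier coefficients is the modification the FDB makes.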
DEFF Research Database (Denmark)
Ottosen, T. B.; Ketzel, Matthias; Skov, H.
2016-01-01
Mathematical models are increasingly used in environmental science thus increasing the importance of uncertainty and sensitivity analyses. In the present study, an iterative parameter estimation and identifiability analysis methodology is applied to an atmospheric model – the Operational Street...... of the identifiability analysis, showed that some model parameters were significantly more sensitive than others. The application of the determined optimal parameter values was shown to successfully equilibrate the model biases among the individual streets and species. It was as well shown that the frequentist approach...
Model calibration and parameter estimation for environmental and water resource systems
Sun, Ne-Zheng
2015-01-01
This three-part book provides a comprehensive and systematic introduction to the development of useful models for complex systems. Part 1 covers the classical inverse problem for parameter estimation in both deterministic and statistical frameworks, Part 2 is dedicated to system identification, hyperparameter estimation, and model dimension reduction, and Part 3 considers how to collect data and construct reliable models for prediction and decision-making. For the first time, topics such as multiscale inversion, stochastic field parameterization, level set method, machine learning, global sensitivity analysis, data assimilation, model uncertainty quantification, robust design, and goal-oriented modeling, are systematically described and summarized in a single book from the perspective of model inversion, and elucidated with numerical examples from environmental and water resources modeling. Readers of this book will not only learn basic concepts and methods for simple parameter estimation, but also get famili...
Directory of Open Access Journals (Sweden)
Teresa eLehnert
2015-06-01
Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the bloodstream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment.
Energy Technology Data Exchange (ETDEWEB)
Mukhopadhyay, S.; Tsang, Y.; Finsterle, S.
2009-01-15
A simple conceptual model has been recently developed for analyzing pressure and temperature data from flowing fluid temperature logging (FFTL) in unsaturated fractured rock. Using this conceptual model, we developed an analytical solution for FFTL pressure response, and a semianalytical solution for FFTL temperature response. We also proposed a method for estimating fracture permeability from FFTL temperature data. The conceptual model was based on some simplifying assumptions, particularly that a single-phase airflow model was used. In this paper, we develop a more comprehensive numerical model of multiphase flow and heat transfer associated with FFTL. Using this numerical model, we perform a number of forward simulations to determine the parameters that have the strongest influence on the pressure and temperature response from FFTL. We then use the iTOUGH2 optimization code to estimate these most sensitive parameters through inverse modeling and to quantify the uncertainties associated with these estimated parameters. We conclude that FFTL can be utilized to determine permeability, porosity, and thermal conductivity of the fractured rock. Two other parameters, which are not properties of the fractured rock, have strong influence on FFTL response. These are pressure and temperature in the borehole that were at equilibrium with the fractured rock formation at the beginning of FFTL. We illustrate how these parameters can also be estimated from FFTL data.
Plumb, John M.; Moffitt, Christine M.
2015-01-01
Researchers have cautioned against the borrowing of consumption and growth parameters from other species and life stages in bioenergetics growth models. In particular, the function that dictates temperature dependence in maximum consumption (Cmax) within the Wisconsin bioenergetics model for Chinook Salmon Oncorhynchus tshawytscha produces estimates that are lower than those measured in published laboratory feeding trials. We used published and unpublished data from laboratory feeding trials with subyearling Chinook Salmon from three stocks (Snake, Nechako, and Big Qualicum rivers) to estimate and adjust the model parameters for temperature dependence in Cmax. The data included growth measures in fish ranging from 1.5 to 7.2 g that were held at temperatures from 14°C to 26°C. Parameters for temperature dependence in Cmax were estimated based on relative differences in food consumption, and bootstrapping techniques were then used to estimate the error about the parameters. We found that at temperatures between 17°C and 25°C, the current parameter values did not match the observed data, indicating that Cmax should be shifted by about 4°C relative to the current implementation under the bioenergetics model. We conclude that the adjusted parameters for Cmax should produce more accurate predictions from the bioenergetics model for subyearling Chinook Salmon.
Directory of Open Access Journals (Sweden)
Shengyu eJiang
2016-02-01
Likert-type rating scales, in which a respondent chooses a response from an ordered set of response options, are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexiMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as bias and root-mean-square error. Results indicated that for the vast majority of cases studied, a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items, for which 1,000 examinees were necessary. Increasing sample size beyond N = 1,000 did not increase the accuracy of MGRM parameter estimates.
Estimating Parameters for the PVsyst Version 6 Photovoltaic Module Performance Model
Energy Technology Data Exchange (ETDEWEB)
Hansen, Clifford [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-10-01
We present an algorithm to determine parameters for the photovoltaic module performance model encoded in the software package PVsyst(TM) version 6. Our method operates on current-voltage (I-V) data measured over a range of irradiance and temperature conditions. We describe the method and illustrate its steps using data for a 36-cell crystalline silicon module. We qualitatively compare our method with one other technique for estimating parameters for the PVsyst(TM) version 6 model.
A Consistent Methodology Based Parameter Estimation for a Lactic Acid Bacteria Fermentation Model
DEFF Research Database (Denmark)
Spann, Robert; Roca, Christophe; Kold, David
2017-01-01
Lactic acid bacteria are used in many industrial applications, e.g. as starter cultures in the dairy industry or as probiotics, and research on their cell production is much needed. A first-principles kinetic model was developed to describe and understand the biological, physical, and chemical...... mechanisms in a lactic acid bacteria fermentation. We present here a consistent, methodology-based approach to parameter estimation for a lactic acid fermentation. In the beginning, just an initial knowledge-based guess of parameters was available and an initial parameter estimation of the complete set......
Estimating DSGE model parameters in a small open economy: Do real-time data matter?
Directory of Open Access Journals (Sweden)
Capek Jan
2015-03-01
This paper investigates the differences between parameters estimated using real-time data and those estimated with revised data. The models used are New Keynesian DSGE models of the Czech, Polish, Hungarian, Swiss, and Swedish small open economies in interaction with the euro area. The paper also offers an analysis of data revisions of GDP growth and inflation and trend revisions of interest rates.
Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model
DEFF Research Database (Denmark)
Kirkegaard, Poul Henning
1993-01-01
Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single degree of- freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...
Utilising temperature differences as constraints for estimating parameters in a simple climate model
International Nuclear Information System (INIS)
Bodman, Roger W; Karoly, David J; Enting, Ian G
2010-01-01
Simple climate models can be used to estimate the global temperature response to increasing greenhouse gases. Changes in the energy balance of the global climate system are represented by equations that necessitate the use of uncertain parameters. The values of these parameters can be estimated from historical observations, model testing, and tuning to more complex models. Efforts have been made at estimating the possible ranges for these parameters. This study continues this process, but demonstrates two new constraints. Previous studies have shown that land-ocean temperature differences are only weakly correlated with global mean temperature for natural internal climate variations. Hence, these temperature differences provide additional information that can be used to help constrain model parameters. In addition, an ocean heat content ratio can also provide a further constraint. A pulse response technique was used to identify relative parameter sensitivity which confirmed the importance of climate sensitivity and ocean vertical diffusivity, but the land-ocean warming ratio and the land-ocean heat exchange coefficient were also found to be important. Experiments demonstrate the utility of the land-ocean temperature difference and ocean heat content ratio for setting parameter values. This work is based on investigations with MAGICC (Model for the Assessment of Greenhouse-gas Induced Climate Change) as the simple climate model.
Estimating Parameters in Physical Models through Bayesian Inversion: A Complete Example
Allmaras, Moritz
2013-02-07
All mathematical models of real-world phenomena contain parameters that need to be estimated from measurements, either for realistic predictions or simply to understand the characteristics of the model. Bayesian statistics provides a framework for parameter estimation in which uncertainties about models and measurements are translated into uncertainties in estimates of parameters. This paper provides a simple, step-by-step example, starting from a physical experiment and going through all of the mathematics, to explain the use of Bayesian techniques for estimating the coefficients of gravity and air friction in the equations describing a falling body. In the experiment we dropped an object from a known height and recorded the free fall using a video camera. The video recording was analyzed frame by frame to obtain the distance the body had fallen as a function of time, including measures of uncertainty in our data that we describe as probability densities. We explain the decisions behind the various choices of probability distributions and relate them to observed phenomena. Our measured data are then combined with a mathematical model of a falling body to obtain probability densities on the space of parameters we seek to estimate. We interpret these results and discuss sources of errors in our estimation procedure. © 2013 Society for Industrial and Applied Mathematics.
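A compressed version of this workflow can be written with a linear-drag model d(t) = (g/c)(t − (1 − e^(−ct))/c), flat priors, and random-walk Metropolis sampling; the synthetic data, noise level, and proposal scale are illustrative assumptions, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(5)

def fall_distance(t, g, c):
    """Distance fallen under gravity g with linear drag coefficient c."""
    return (g / c) * (t - (1 - np.exp(-c * t)) / c)

# synthetic "video frame" measurements
t = np.linspace(0.1, 1.5, 15)
g_true, c_true, sigma = 9.8, 0.5, 0.02
d_obs = fall_distance(t, g_true, c_true) + sigma * rng.standard_normal(t.size)

def log_post(g, c):
    # flat priors on plausible ranges; Gaussian measurement noise
    if not (1.0 < g < 20.0 and 0.01 < c < 5.0):
        return -np.inf
    r = d_obs - fall_distance(t, g, c)
    return -0.5 * np.sum((r / sigma) ** 2)

# random-walk Metropolis over (g, c)
samples, (g, c) = [], (9.0, 1.0)
lp = log_post(g, c)
for _ in range(20000):
    g_new = g + 0.05 * rng.standard_normal()
    c_new = c + 0.05 * rng.standard_normal()
    lp_new = log_post(g_new, c_new)
    if np.log(rng.random()) < lp_new - lp:
        g, c, lp = g_new, c_new, lp_new
    samples.append((g, c))
samples = np.array(samples[5000:])   # discard burn-in

print(samples.mean(axis=0))  # posterior means for (g, c)
```

The marginal posterior widths reflect the g-c correlation induced by short fall times, one of the error sources the paper discusses.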
An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This
Estimator of a non-Gaussian parameter in multiplicative log-normal models
Kiyono, Ken; Struzik, Zbigniew R.; Yamamoto, Yoshiharu
2007-10-01
We study non-Gaussian probability density functions (PDFs) of multiplicative log-normal models in which the multiplication of Gaussian and log-normally distributed random variables is considered. To describe the PDF of the velocity difference between two points in fully developed turbulent flows, the non-Gaussian PDF model was originally introduced by Castaing [Physica D 46, 177 (1990)]. In practical applications, an experimental PDF is approximated with Castaing's model by tuning a single non-Gaussian parameter, which corresponds to the logarithmic variance of the log-normally distributed variable in the model. In this paper, we propose an estimator of the non-Gaussian parameter based on the qth-order absolute moments. To test the estimator, we introduce two types of stochastic processes within the framework of the multiplicative log-normal model. One is a sequence of independent and identically distributed random variables. The other is a log-normal cascade-type multiplicative process. By analyzing the numerically generated time series, we demonstrate that the estimator can reliably determine the theoretical value of the non-Gaussian parameter. Scale dependence of the non-Gaussian parameter in multiplicative log-normal models is also studied, both analytically and numerically. As an application of the estimator, we demonstrate that non-Gaussian PDFs observed in S&P 500 index fluctuations are well described by the multiplicative log-normal model.
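For the multiplicative model x = ξ·e^ω with ξ ~ N(0,1) and ω ~ N(0, λ²) independent, the absolute moments satisfy E|x|^q = E|ξ|^q · exp(q²λ²/2), which inverts directly to a moment estimator of λ². The sketch below checks this on simulated i.i.d. data; it is a simplified stand-in for the estimator studied in the paper, not its exact form.

```python
import numpy as np
from math import gamma, sqrt, pi, log

rng = np.random.default_rng(6)

def abs_moment_gauss(q):
    """E|xi|^q for xi ~ N(0,1): 2^(q/2) * Gamma((q+1)/2) / sqrt(pi)."""
    return 2 ** (q / 2) * gamma((q + 1) / 2) / sqrt(pi)

def estimate_lambda2(x, q=2.0):
    """Invert E|x|^q = E|xi|^q * exp(q^2 lambda^2 / 2) for lambda^2."""
    mq = np.mean(np.abs(x) ** q)
    return 2 * (log(mq) - log(abs_moment_gauss(q))) / q ** 2

# simulate the multiplicative log-normal model and recover lambda^2
lam2_true = 0.25
n = 200000
x = rng.standard_normal(n) * np.exp(sqrt(lam2_true) * rng.standard_normal(n))
lam2_hat = estimate_lambda2(x, q=2.0)
print(lam2_hat)  # close to lam2_true = 0.25
```

Lower moment orders q reduce sensitivity to the heavy tails of the distribution, which matters for short or strongly non-Gaussian series.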
Lee, Eunyoung; Cumberbatch, Jewel; Wang, Meng; Zhang, Qiong
2017-03-01
Anaerobic co-digestion has a potential to improve biogas production, but limited kinetic information is available for co-digestion. This study introduced regression-based models to estimate the kinetic parameters for the co-digestion of microalgae and Waste Activated Sludge (WAS). The models were developed using the ratios of co-substrates and the kinetic parameters for the single substrate as indicators. The models were applied to the modified first-order kinetics and Monod model to determine the rate of hydrolysis and methanogenesis for the co-digestion. The results showed that the model using a hyperbola function was better for the estimation of the first-order kinetic coefficients, while the model using inverse tangent function closely estimated the Monod kinetic parameters. The models can be used for estimating kinetic parameters for not only microalgae-WAS co-digestion but also other substrates' co-digestion such as microalgae-swine manure and WAS-aquatic plants. Copyright © 2016 Elsevier Ltd. All rights reserved.
Kahl, Gunnar M; Sidorenko, Yury; Gottesbüren, Bernhard
2015-04-01
As an option for higher-tier leaching assessment of pesticides in Europe according to FOCUS, pesticide properties can be estimated from lysimeter studies by inversely fitting parameter values (substance half-life DT50 and sorption coefficient to organic matter, kom). The aim of the study was to identify adequate methods for inverse modelling. Model parameters for the PEARL (Pesticide Emission Assessment at Regional and Local scales) model were estimated with different inverse optimisation algorithms - the Levenberg-Marquardt (LM), PD_MS2 (PEST Driver Multiple Starting Points 2) and SCEM (Shuffled Complex Evolution Metropolis) algorithms. Optimisation of crop factors and hydraulic properties was found to be an ill-posed problem, and all algorithms failed to identify reliable global minima for the hydrological parameters. All algorithms performed equally well in estimating pesticide sorption and degradation parameters. SCEM was in most cases the only algorithm that reliably calculated uncertainties. The most reliable approach for finding the best parameter set in the stepwise approach of optimising evapotranspiration, soil hydrology and pesticide parameters was to run only SCEM or a combined approach with more than one algorithm using the best fit of each step for further processing. PD_MS2 was well suited to a quick parameter search. The linear parameter uncertainty intervals estimated by LM and PD_MS2 were usually larger than by the non-linear method used by SCEM. With the suggested methods, parameter optimisation, together with reliable estimation of uncertainties, is possible also for relatively complex systems. © 2014 Society of Chemical Industry.
Schoups, Gerrit; Vrugt, Jasper A.
2010-05-01
Estimation of parameter and predictive uncertainty of hydrologic models usually relies on the assumption of additive residual errors that are independent and identically distributed according to a normal distribution with a mean of zero and a constant variance. Here, we investigate to what extent estimates of parameter and predictive uncertainty are affected when these assumptions are relaxed. Parameter and predictive uncertainty are estimated by Markov chain Monte Carlo sampling from a generalized likelihood function that accounts for correlation, heteroscedasticity, and non-normality of residual errors. Application to rainfall-runoff modeling using daily data from a humid basin reveals that: (i) residual errors are much better described by a heteroscedastic, first-order auto-correlated error model with a Laplacian density characterized by heavier tails than a Gaussian density, and (ii) proper representation of the statistical distribution of residual errors yields tighter predictive uncertainty bands and more physically realistic parameter estimates that are less sensitive to the particular time period used for inference. The latter is especially useful for regionalization and extrapolation of parameter values to ungauged basins. Application to daily rainfall-runoff data from a semi-arid basin shows that allowing skew in the error distribution yields improved estimates of predictive uncertainty when flows are close to zero.
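The core of a generalized likelihood of this kind can be sketched in a few lines. The version below keeps only two of the relaxations (Laplace density, flow-dependent scale) and uses invented flows and scale coefficients; the paper's full formulation also handles autocorrelation and skew:

```python
import numpy as np

rng = np.random.default_rng(0)

def gl_loglik(e, y_sim, sigma0, sigma1):
    # Laplace log-density with heteroscedastic scale b_t = sigma0 + sigma1 * y_sim_t
    b = sigma0 + sigma1 * y_sim
    return np.sum(-np.log(2.0 * b) - np.abs(e) / b)

# Hypothetical simulated flows and residuals drawn from the assumed structure
y_sim = rng.uniform(1.0, 10.0, size=200)
e = rng.laplace(scale=0.1 + 0.05 * y_sim)

ll_true = gl_loglik(e, y_sim, 0.1, 0.05)   # correct error model
ll_bad = gl_loglik(e, y_sim, 1.0, 1.0)     # grossly inflated scales
print(ll_true > ll_bad)  # True
```

In an MCMC scheme this log-likelihood would be evaluated at each proposed parameter set, with `sigma0` and `sigma1` inferred jointly with the hydrologic parameters.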
Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.
2011-12-01
A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.
Behmanesh, Iman; Moaveni, Babak
2016-07-01
This paper presents a hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is taken as the updating structural parameter, with its mean and variance modeled as functions of temperature and excitation amplitude. The identified modal parameters over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies with those identified from measured data after a deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and that accounting for only the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.
Applied parameter estimation for chemical engineers
Englezos, Peter
2000-01-01
Contents: formulation of the parameter estimation problem; computation of parameters in linear models (linear regression); the Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; the Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; the Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynamic models.
Directory of Open Access Journals (Sweden)
Riionheimo Janne
2003-01-01
We describe a technique for estimating control parameters for a plucked string synthesis model using a genetic algorithm. The model has been used intensively for sound synthesis of various string instruments, but fine-tuning of the parameters has been carried out with a semiautomatic method that requires hand adjustment and human listening. An automated method for extracting the parameters from recorded tones is described in this paper. The calculation of the fitness function utilizes knowledge of the properties of human hearing.
An evolutionary computing approach for parameter estimation investigation of a model for cholera.
Akman, Olcay; Schaefer, Elsa
2015-01-01
We consider the problem of using time-series data to inform a corresponding deterministic model and introduce the concept of genetic algorithms (GA) as a tool for parameter estimation, providing instructions for an implementation of the method that does not require access to special toolboxes or software. We give as an example a model for cholera, a disease for which there is much mechanistic uncertainty in the literature. We use GA to find parameter sets using available time-series data from the introduction of cholera in Haiti and we discuss the value of comparing multiple parameter sets with similar performances in describing the data.
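In the spirit of the toolbox-free GA implementation the authors advocate, a minimal genetic algorithm can be written directly on top of an ODE solver. The model below is a deliberately simple logistic-growth stand-in (parameters r and K are invented), not the cholera model from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)

# Toy stand-in for an epidemic model: dI/dt = r*I*(1 - I/K),
# with hypothetical "true" parameters r=0.4, K=500.
def simulate(r, K, t_eval):
    sol = solve_ivp(lambda t, I: r * I * (1 - I / K), (0, t_eval[-1]),
                    [5.0], t_eval=t_eval)
    return sol.y[0]

t = np.linspace(0, 30, 16)
data = simulate(0.4, 500.0, t) + rng.normal(0, 5.0, t.size)

def fitness(p):
    return -np.sum((simulate(p[0], p[1], t) - data) ** 2)

# Minimal GA: tournament selection, blend crossover, Gaussian mutation, elitism
pop = np.column_stack([rng.uniform(0.05, 1.0, 40), rng.uniform(100, 1000, 40)])
for gen in range(60):
    scores = np.array([fitness(p) for p in pop])
    idx = rng.integers(0, 40, (40, 2))                    # tournament pairs
    parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]],
                           idx[:, 0], idx[:, 1])]
    alpha = rng.uniform(size=(40, 1))                     # blend crossover
    children = alpha * parents + (1 - alpha) * parents[::-1]
    children += rng.normal(0, [0.02, 10.0], children.shape)  # mutation
    children = np.clip(children, [0.01, 50.0], [2.0, 2000.0])
    children[0] = pop[np.argmax(scores)]                  # elitism
    pop = children

best = pop[np.argmax([fitness(p) for p in pop])]
print(f"r ~ {best[0]:.2f}, K ~ {best[1]:.0f}")
```

As the abstract suggests, running the GA several times and comparing parameter sets with similar fitness is a cheap way to expose practical non-identifiability.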
Estimating Model Parameters of Adaptive Software Systems in Real-Time
Kumar, Dinesh; Tantawi, Asser; Zhang, Li
Adaptive software systems have the ability to adapt to changes in workload and execution environment. In order to perform resource management through model-based control in such systems, an accurate mechanism for estimating the software system's model parameters is required. This paper deals with real-time estimation of a performance model for adaptive software systems that process multiple classes of transactional workload. First, insights into the static performance model estimation problem are provided. Then an Extended Kalman Filter (EKF) design is combined with an open queueing network model to dynamically estimate the model parameters in real time. Specific problems that are encountered in the case of multiple classes of workload are analyzed. These problems arise mainly due to the underdetermined nature of the estimation problem. This motivates us to propose a modified design of the filter. Insights for choosing the tuning parameters of the modified design, i.e., the number of constraints and the sampling intervals, are provided. The modified filter design is shown to effectively tackle problems with multiple classes of workload through experiments.
DEFF Research Database (Denmark)
Chon, K H; Hoyer, D; Armoundas, A A
1999-01-01
In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate model predictions. Furthermore, we compare the performance of the new approach to that of the deterministic recurrent neural network approach. Using this simple two-step procedure, we obtain more robust model predictions than with the deterministic recurrent neural network approach despite the presence of significant amounts of either dynamic or measurement noise in the output signal. The comparison between the deterministic and stochastic recurrent neural network approaches is furthered by applying both approaches to experimentally obtained renal blood pressure and flow signals.
Parameter estimation techniques and uncertainty in ground water flow model predictions
International Nuclear Information System (INIS)
Zimmerman, D.A.; Davis, P.A.
1990-01-01
Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs
Directory of Open Access Journals (Sweden)
Tashkova Katerina
2011-10-01
Background: We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results: We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., the differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717), to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions: Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence.
Adaptive Model Predictive Vibration Control of a Cantilever Beam with Real-Time Parameter Estimation
Directory of Open Access Journals (Sweden)
Gergely Takács
2014-01-01
This paper presents an adaptive-predictive vibration control system using extended Kalman filtering for the joint estimation of system states and model parameters. A fixed-free cantilever beam equipped with piezoceramic actuators serves as a test platform to validate the proposed control strategy. Deflection readings taken at the end of the beam have been used to reconstruct the position and velocity information for a second-order state-space model. In addition to the states, the dynamic system has been augmented by the unknown model parameters: stiffness, damping constant, and a voltage/force conversion constant, characterizing the actuating effect of the piezoceramic transducers. The states and parameters of this augmented system have been estimated in real time, using the hybrid extended Kalman filter. The estimated model parameters have been applied to define the continuous state-space model of the vibrating system, which in turn is discretized for the predictive controller. The model predictive control algorithm generates state predictions and dual-mode quadratic cost prediction matrices based on the updated discrete state-space models. The resulting cost function is then minimized using quadratic programming to find the sequence of optimal but constrained control inputs. The proposed active vibration control system is implemented and evaluated experimentally to investigate the viability of the control method.
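The joint state/parameter estimation idea can be sketched with a discrete-time EKF on an augmented state. The system below is a hypothetical single-degree-of-freedom oscillator with made-up stiffness, damping and noise levels, not the paper's beam model: stiffness k is appended to the state vector and estimated alongside position and velocity.

```python
import numpy as np

# Hypothetical 1-DOF oscillator: x'' = -k*x - c*v, unit mass, Euler-discretized
dt, c, k_true = 0.01, 0.4, 25.0
rng = np.random.default_rng(2)

# Simulate the "true" system with noisy position measurements
x = np.array([1.0, 0.0])
meas = []
for _ in range(2000):
    x = x + dt * np.array([x[1], -k_true * x[0] - c * x[1]])
    meas.append(x[0] + rng.normal(0, 0.01))

# EKF over the augmented state z = [pos, vel, k]
z = np.array([1.0, 0.0, 10.0])        # deliberately wrong initial stiffness
P = np.diag([0.1, 0.1, 100.0])
Q = np.diag([1e-8, 1e-8, 1e-4])       # small process noise keeps k adaptive
R = 0.01 ** 2
H = np.array([[1.0, 0.0, 0.0]])       # only position is measured
for y in meas:
    p, v, k = z
    z = np.array([p + dt * v, v + dt * (-k * p - c * v), k])  # predict
    F = np.array([[1.0, dt, 0.0],
                  [-dt * k, 1.0 - dt * c, -dt * p],
                  [0.0, 0.0, 1.0]])                           # Jacobian
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T / S                                           # Kalman gain
    z = z + (K * (y - z[0])).ravel()                          # update
    P = (np.eye(3) - K @ H) @ P

print(f"estimated stiffness k ~ {z[2]:.1f}")
```

The same augmentation pattern extends to the damping and actuator-gain parameters used in the paper; each added parameter gets a row in the state, a unit diagonal entry in F, and a small process-noise term in Q.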
Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel
2016-01-01
Ordinary differential equation models have become a widespread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization.
Parameter Estimation for Traffic Noise Models Using a Harmony Search Algorithm
Directory of Open Access Journals (Sweden)
Deok-Soon An
2013-01-01
A technique has been developed for predicting road traffic noise for environmental assessment, taking into account traffic volume as well as road surface conditions. The ASJ model (ASJ Prediction Model for Road Traffic Noise, 1999), which is based on the sound power level of the noise emitted by the interaction between the road surface and tires, employs regression models for two road surface types: dense-graded asphalt (DGA) and permeable asphalt (PA). However, these models are not applicable to other types of road surfaces. Accordingly, this paper introduces a parameter estimation procedure for ASJ-based noise prediction models, utilizing a harmony search (HS) algorithm. Traffic noise measurement data for four different vehicle types were used in the algorithm to determine the regression parameters for several road surface types. The parameters of the traffic noise prediction models were evaluated using another measurement set, and good agreement was observed between the predicted and measured sound power levels.
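A bare-bones harmony search can be sketched against a regression of the general sound-power form L = a + b*log10(v). The speeds, noise levels and HS settings below are invented for illustration and are not the ASJ coefficients:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sound power levels (dB) vs. vehicle speed v (km/h),
# generated from L = 46 + 30*log10(v) plus noise (illustrative only).
v = np.array([40., 50., 60., 70., 80., 90., 100.])
L_obs = 46.0 + 30.0 * np.log10(v) + rng.normal(0, 0.3, v.size)

def cost(p):
    return np.sum((p[0] + p[1] * np.log10(v) - L_obs) ** 2)

# Basic HS: harmony memory, memory considering rate (HMCR),
# pitch adjusting rate (PAR), per-parameter bandwidth bw.
lo, hi = np.array([0.0, 0.0]), np.array([100.0, 60.0])
hm = rng.uniform(lo, hi, (20, 2))               # harmony memory
costs = np.array([cost(h) for h in hm])
hmcr, par, bw = 0.9, 0.3, np.array([1.0, 0.5])
for _ in range(3000):
    new = np.where(rng.random(2) < hmcr,
                   hm[rng.integers(0, 20, 2), [0, 1]],   # draw from memory
                   rng.uniform(lo, hi))                  # random consideration
    adjust = rng.random(2) < par
    new = np.clip(new + adjust * rng.uniform(-bw, bw), lo, hi)
    worst = np.argmax(costs)
    if cost(new) < costs[worst]:                         # replace worst harmony
        hm[worst], costs[worst] = new, cost(new)

best = hm[np.argmin(costs)]
print(f"a ~ {best[0]:.1f}, b ~ {best[1]:.1f}")
```

Since the memory only ever replaces its worst member with a better candidate, the best cost found decreases monotonically, which is what makes this simple scheme usable for regression calibration.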
Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C
2017-03-01
Many cancers are understood to be the product of multiple somatic mutations or other rate-limiting events. Multistage clonal expansion (MSCE) models are a class of continuous-time Markov chain models that capture the multi-hit initiation-promotion-malignant-conversion hypothesis of carcinogenesis. These models have been used broadly to investigate the epidemiology of many cancers, assess the impact of carcinogen exposures on cancer risk, and evaluate the potential impact of cancer prevention and control strategies on cancer rates. Structural identifiability (the analysis of the maximum parametric information available for a model given perfectly measured data) of certain MSCE models has been previously investigated. However, structural identifiability is a theoretical property and does not address the limitations of real data. In this study, we use pancreatic cancer as a case study to examine the practical identifiability of the two-, three-, and four-stage clonal expansion models given age-specific cancer incidence data using a numerical profile-likelihood approach. We demonstrate that, in the case of the three- and four-stage models, several parameters that are theoretically structurally identifiable, are, in practice, unidentifiable. This result means that key parameters such as the intermediate cell mutation rates are not individually identifiable from the data and that estimation of those parameters, even if structurally identifiable, will not be stable. We also show that products of these practically unidentifiable parameters are practically identifiable, and, based on this, we propose new reparameterizations of the model hazards that resolve the parameter estimation problems. Our results highlight the importance of identifiability to the interpretation of model parameter estimates.
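The profile-likelihood diagnostic the authors use can be illustrated on a deliberately tiny model in which only a product of parameters enters the observable (a hypothetical stand-in for the mutation-rate products, not the MSCE hazard itself): the profile over one factor comes out flat, flagging practical unidentifiability.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)

# Toy observable y = mu1 * mu2 * t: only the product mu1*mu2 is identifiable.
t = np.linspace(1, 10, 20)
y = 0.06 * t + rng.normal(0, 0.01, t.size)   # "true" product = 0.06

def nll(mu1, mu2):
    # least-squares misfit standing in for a negative log-likelihood
    return np.sum((mu1 * mu2 * t - y) ** 2)

def profile(mu1_grid):
    # for each fixed mu1, re-optimize the remaining parameter mu2
    return np.array([minimize_scalar(lambda m2: nll(m1, m2),
                                     bounds=(1e-6, 10.0),
                                     method="bounded").fun
                     for m1 in mu1_grid])

grid = np.linspace(0.01, 1.0, 30)
prof = profile(grid)
# A flat profile means mu1 is practically unidentifiable on its own,
# while the product mu1*mu2 is pinned down by the data.
print(f"profile spread: {prof.max() - prof.min():.2e}")
```

Reparameterizing in terms of the identifiable product, as the paper proposes for the MSCE hazards, turns this flat profile into a well-curved one.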
Modelling the flyway of arctic breeding shorebirds; parameter estimation and sensitivity analysis
Ens, B.J.; Schekkerman, H.; Tulp, I.Y.M.; Bauer, S.; Klaassen, M.
2006-01-01
This report describes the derivation of parameter estimates for the model DYNAMIG for an arctic breeding shorebird, the Knot. DYNAMIG predicts the optimal spring migration of birds, like shorebirds and geese, that depend on a chain of discrete sites to travel between their breeding grounds and
An MLE method for finding LKB NTCP model parameters using Monte Carlo uncertainty estimates
Carolan, Martin; Oborn, Brad; Foo, Kerwyn; Haworth, Annette; Gulliford, Sarah; Ebert, Martin
2014-03-01
The aims of this work were to establish a program to fit NTCP models to clinical data with multiple toxicity endpoints, to test the method using a realistic test dataset, to compare three methods for estimating confidence intervals for the fitted parameters and to characterise the speed and performance of the program.
A review of parameter estimation used in solar photovoltaic system for a single diode model
Sabudin, Siti Nurashiken Md; Jamil, Norazaliza Mohd; Rosli, Norhayati
2017-09-01
With the increased demand for solar energy, the mathematical modelling of the solar photovoltaic (PV) system has gained importance. Numerous mathematical models have been developed for different purposes. In this paper, we briefly review the progress made in the mathematical modelling of solar photovoltaic (PV) systems over the last twenty years. First, a general classification of these models is made. Then, the basic characteristics of the models, along with the objectives and different parameters considered in modelling, are discussed. The assumptions and approximations made, as well as the parameter estimation methods used in solving the models, are summarized. This may help mathematicians gain a better understanding of the modelling strategies and develop suitable models relevant to the present scenario.
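As a concrete sketch of single-diode parameter estimation, the simplified diode law with series and shunt resistances neglected, I = Iph - I0*(exp(V/(n*VT)) - 1), can be fitted by nonlinear least squares. The I-V data and true parameter values below are invented, not taken from a datasheet:

```python
import numpy as np
from scipy.optimize import curve_fit

VT = 0.02585  # thermal voltage at ~25 C (V), an assumed operating condition

def ideal_single_diode(V, Iph, I0, n):
    # Simplified single-diode law (Rs and Rsh neglected):
    # I = Iph - I0 * (exp(V / (n * VT)) - 1)
    return Iph - I0 * (np.exp(V / (n * VT)) - 1.0)

# Hypothetical I-V curve of one cell: Iph=5 A, I0=1e-9 A, n=1.0, plus noise
rng = np.random.default_rng(5)
V = np.linspace(0.0, 0.58, 30)
I_meas = ideal_single_diode(V, 5.0, 1e-9, 1.0) + rng.normal(0, 0.01, V.size)

popt, _ = curve_fit(ideal_single_diode, V, I_meas,
                    p0=[4.0, 1e-8, 1.5],
                    bounds=([0, 1e-12, 0.8], [10, 1e-6, 2.5]))
Iph, I0, n = popt
print(f"Iph = {Iph:.2f} A, I0 = {I0:.1e} A, n = {n:.2f}")
```

The full five-parameter model adds Rs and Rsh, which makes the diode equation implicit in I; it is then usually solved with the Lambert W function or a root-finder inside the residual.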
Widyaningsih, Purnami; Retno Sari Saputro, Dewi; Nugrahani Putri, Aulia
2017-06-01
The GWOLR model combines the geographically weighted regression (GWR) and ordinal logistic regression (OLR) models. Its parameters are estimated by maximum likelihood estimation. Such estimation, however, yields a difficult-to-solve system of nonlinear equations, so a numerical approximation approach is required. The iterative approximation approach generally uses the Newton-Raphson (NR) method. The NR method has a disadvantage: its Hessian matrix must be recomputed from second derivatives at every iteration, and it does not always produce converging results. To address this, the NR method is modified by replacing its Hessian matrix with the Fisher information matrix, a variant termed Fisher scoring (FS). The present research seeks to determine the GWOLR model parameter estimates using the Fisher scoring method and to apply the estimation to data on the level of vulnerability to Dengue Hemorrhagic Fever (DHF) in Semarang. The research concludes that health facilities give the greatest contribution to the probability of the number of DHF sufferers in both villages. Based on the number of sufferers, the IR category of DHF in both villages can be determined.
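Fisher scoring is easiest to see in the plain binary logistic case, which is a simplified stand-in for the ordinal GWOLR setting (the data below are simulated with invented coefficients). For the logistic link, the Fisher information is X'WX with W = diag(p(1-p)), and the update is beta <- beta + (X'WX)^{-1} X'(y - p):

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated design matrix and binary responses (hypothetical coefficients)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([-0.5, 1.2, -0.8])
p = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = rng.binomial(1, p)

# Fisher scoring iterations
beta = np.zeros(3)
for _ in range(25):
    p_hat = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p_hat * (1.0 - p_hat)
    score = X.T @ (y - p_hat)            # gradient of the log-likelihood
    info = X.T @ (X * W[:, None])        # Fisher information matrix X'WX
    step = np.linalg.solve(info, score)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:     # converged
        break

print(beta.round(2))
```

For the canonical logit link the expected and observed information coincide, so Fisher scoring reduces to Newton-Raphson here; the two differ, and FS tends to be the more stable choice, for non-canonical links and for the ordinal models used in GWOLR.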
Reimer, Joscha; Piwonski, Jaroslaw; Slawig, Thomas
2016-04-01
The statistical significance of any model-data comparison strongly depends on the quality of the data used and the criterion used to measure the model-to-data misfit. The statistical properties (such as mean values, variances and covariances) of the data should be taken into account by choosing a criterion such as ordinary, weighted or generalized least squares. Moreover, the criterion can be restricted to regions or model quantities which are of special interest. This choice influences the quality of the model output (also for quantities that are not measured) and the results of a parameter estimation or optimization process. We have estimated the parameters of a three-dimensional and time-dependent marine biogeochemical model describing the phosphorus cycle in the ocean. For this purpose, we have developed a statistical model for measurements of phosphate and dissolved organic phosphorus. This statistical model includes variances and correlations varying with the time and location of the measurements. We compared the obtained estimates of model output and parameters for different criteria. A further question is whether (and which) additional measurements would increase the model's quality at all. Using experimental design criteria, the information content of measurements can be quantified. This may refer to the uncertainty in unknown model parameters as well as the uncertainty regarding which model is closer to reality. By (another) optimization, optimal measurement properties such as locations, time instants and quantities to be measured can be identified. We have optimized such properties for additional measurements for the parameter estimation of the marine biogeochemical model. For this purpose, we have quantified the uncertainty in the optimal model parameters and the model output itself regarding the uncertainty in the measurement data using the (Fisher) information matrix. Furthermore, we have calculated the uncertainty reduction by additional measurements depending on time
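The Fisher-information machinery behind this kind of design study can be sketched on a small model. With residual Jacobian J and measurement variance sigma^2, the information matrix is J'J / sigma^2 and its inverse approximates the parameter covariance; adding a candidate measurement adds a rank-one term, and the determinant gain is a simple design criterion. The decay model and numbers below are hypothetical, not the biogeochemical model:

```python
import numpy as np

def model(theta, t):
    return theta[0] * np.exp(-theta[1] * t)   # hypothetical decay model

def jacobian(theta, t):
    a, lam = theta
    return np.column_stack([np.exp(-lam * t), -a * t * np.exp(-lam * t)])

t = np.linspace(0, 5, 20)
theta_hat = np.array([2.0, 0.7])              # assumed fitted parameters
sigma = 0.05                                  # assumed measurement std. dev.

J = jacobian(theta_hat, t)
fim = J.T @ J / sigma ** 2                    # Fisher information matrix
cov = np.linalg.inv(fim)                      # linearized parameter covariance
se = np.sqrt(np.diag(cov))
print(f"standard errors: a +/- {se[0]:.3f}, lambda +/- {se[1]:.3f}")

# Information gain from one extra measurement at a candidate time t*
t_new = 1.0 / theta_hat[1]
J_new = jacobian(theta_hat, np.array([t_new]))
fim_aug = fim + J_new.T @ J_new / sigma ** 2
print(f"det gain: {np.linalg.det(fim_aug) / np.linalg.det(fim):.3f}")
```

Maximizing this determinant gain over candidate locations, times and measured quantities is the D-optimal design idea; other criteria (trace, largest eigenvalue) weight the uncertainty reduction differently.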
Directory of Open Access Journals (Sweden)
He Wang
2018-01-01
An effective method is proposed to estimate the parameters of a dynamic grain flow model (DGFM). To this end, an improved artificial bee colony (IABC) algorithm is used to estimate unknown parameters of the DGFM by minimizing a given objective function. A comparative study of the performance of the IABC algorithm and other ABC variants on several benchmark functions is carried out, and the results show a significant improvement in performance over the other ABC variants. The practical application performance of the IABC is compared to that of nonlinear least squares (NLS), particle swarm optimization (PSO), and a genetic algorithm (GA). The compared results demonstrate that the IABC algorithm is more accurate and effective for the parameter estimation of the DGFM than the other algorithms.
Estimation of k-ε parameters using surrogate models and jet-in-crossflow data
Energy Technology Data Exchange (ETDEWEB)
Lefantzi, Sophia [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Arunajatesan, Srinivasan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Dechant, Lawrence [Sandia National Lab. (SNL-CA), Livermore, CA (United States)
2014-11-01
We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds-Averaged Navier-Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDF), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently a quick-running surrogate is used in place of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameters being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (C_μ, C_ε2, C_ε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. Thus the primary reason for poor predictive skill of RANS, when using nominal
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
Parameter estimation of internal thermal mass of building dynamic models using genetic algorithm
International Nuclear Information System (INIS)
Wang Shengwei; Xu Xinhua
2006-01-01
Building thermal transfer models are essential to predict transient cooling or heating requirements for performance monitoring, diagnosis, and control strategy analysis. Detailed physical models are time consuming and often not cost effective. Black-box models require a significant amount of training data and may not always reflect the physical behaviors. In this study, a building is described using a simplified thermal network model. For the building envelope, the model parameters can be determined from easily available physical details. For building internal mass with thermal capacitance, including components such as furniture, partitions, etc., it is very difficult to obtain detailed physical properties. To overcome this problem, this paper proposes to represent the building internal mass with a thermal network structure of lumped thermal masses and to estimate the lumped parameters using operation data. A genetic algorithm estimator is developed to estimate the lumped internal thermal parameters of the building thermal network model using operation data collected from site monitoring. The simplified dynamic model of building internal mass is validated under different weather conditions.
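A minimal sketch of this idea: a toy 1R1C thermal network (one lumped resistance R and capacitance C, both hypothetical values) is fitted to synthetic "operation data" with a simple real-coded genetic algorithm. The model structure, parameter bounds, and GA settings are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(R, C, T_out, Q, T0=20.0, dt=300.0):
    """Forward-Euler simulation of a 1R1C lumped internal-mass node."""
    T = np.empty(len(T_out))
    T[0] = T0
    for k in range(len(T_out) - 1):
        T[k + 1] = T[k] + dt * ((T_out[k] - T[k]) / (R * C) + Q[k] / C)
    return T

# Synthetic "operation data" from known parameters (R in K/W, C in J/K).
t = np.arange(0, 24 * 3600, 300.0)
T_out = 25 + 5 * np.sin(2 * np.pi * t / 86400)
Q = 500.0 * (np.sin(2 * np.pi * t / 43200) > 0)
R_true, C_true = 0.005, 2e6
T_obs = simulate(R_true, C_true, T_out, Q) + rng.normal(0, 0.05, len(t))

lo, hi = np.array([2e-3, 5e5]), np.array([1e-2, 5e6])  # search box

def fitness(p):
    mse = np.mean((simulate(p[0], p[1], T_out, Q) - T_obs) ** 2)
    return -mse if np.isfinite(mse) else -np.inf

# Minimal real-coded GA: elitism, tournament selection, blend crossover.
pop = rng.uniform(lo, hi, size=(40, 2))
for gen in range(60):
    f = np.array([fitness(p) for p in pop])
    new = [pop[np.argmax(f)]]                          # keep the elite
    while len(new) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if f[i] > f[j] else pop[j]          # tournament pick 1
        i, j = rng.integers(0, len(pop), 2)
        b = pop[i] if f[i] > f[j] else pop[j]          # tournament pick 2
        w = rng.uniform(-0.25, 1.25, 2)                # blend (BLX) crossover
        child = w * a + (1 - w) * b
        if rng.random() < 0.2:                         # multiplicative mutation
            child = child * rng.normal(1.0, 0.05, 2)
        new.append(np.clip(child, lo, hi))
    pop = np.array(new)

best = pop[np.argmax([fitness(p) for p in pop])]
print(best)  # best (R, C) estimate from the synthetic data
```

The same loop applies unchanged to a richer thermal network; only `simulate` and the parameter bounds change.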
Parameter Estimation of a Delay Time Model of Wearing Parts Based on Objective Data
Directory of Open Access Journals (Sweden)
Y. Tang
2015-01-01
Full Text Available The wearing parts of a system have a very high failure frequency, making it necessary to carry out continual functional inspections and maintenance to protect the system from unscheduled downtime. This allows for the collection of a large amount of maintenance data. Taking the unique characteristics of the wearing parts into consideration, we establish their respective delay time models in ideal inspection cases and nonideal inspection cases. The model parameters are estimated entirely using the collected maintenance data. Then, a likelihood function of all renewal events is derived based on their probability functions of occurrence, and the model parameters are calculated with the maximum likelihood method, which is solved by the CRM. Finally, using two wearing parts from the oil and gas drilling industry as examples—the filter element and the blowout preventer rubber core—the parameters of the distribution function of the initial failure time and the delay time for each example are estimated, and their distribution functions are obtained. Such parameter estimation based on objective data will contribute to the optimization of a reasonable function inspection interval and will also provide theoretical models to support the integrity management of equipment or systems.
Estimation of Key Parameters of the Coupled Energy and Water Model by Assimilating Land Surface Data
Abdolghafoorian, A.; Farhadi, L.
2017-12-01
Accurate estimation of land surface heat and moisture fluxes, as well as root zone soil moisture, is crucial in various hydrological, meteorological, and agricultural applications. Field measurements of these fluxes are costly and cannot be readily scaled to the large areas relevant to weather and climate studies. Therefore, there is a need for techniques to make quantitative estimates of heat and moisture fluxes using land surface state observations that are widely available from remote sensing across a range of scales. In this work, we apply the variational data assimilation approach to estimate land surface fluxes and the soil moisture profile from the implicit information contained in Land Surface Temperature (LST) and Soil Moisture (SM) (hereafter the VDA model). The VDA model is focused on the estimation of three key parameters: (1) the neutral bulk heat transfer coefficient (CHN), (2) the evaporative fraction from soil and canopy (EF), and (3) the saturated hydraulic conductivity (Ksat). CHN and EF regulate the partitioning of available energy between sensible and latent heat fluxes. Ksat is one of the main parameters used in determining infiltration, runoff, and groundwater recharge, and in simulating hydrological processes. In this study, a system of coupled parsimonious energy and water models constrains the estimation of the three unknown parameters in the VDA model. The profile of SM (LST) at multiple depths is estimated using the moisture diffusion (heat diffusion) equation. The uncertainties of the retrieved unknown parameters and fluxes are estimated from the inverse of the Hessian matrix of the cost function, which is computed using the Lagrangian methodology. Analysis of uncertainty provides valuable information about the accuracy of the estimated parameters and their correlation, and guides the formulation of a well-posed estimation problem. The results of the proposed algorithm are validated with a series of experiments using a synthetic data set generated by the simultaneous heat and
Parameter estimation in food science.
Dolan, Kirk D; Mishra, Dharmendra K
2013-01-01
Modeling includes two distinct parts, the forward problem and the inverse problem. The forward problem (computing y(t) given known parameters) has received much attention, especially with the explosion of commercial simulation software. What is rarely made clear is that the forward results can be no better than the accuracy of the parameters. Therefore, the inverse problem (estimation of parameters given measured y(t)) is at least as important as the forward problem. However, in the food science literature there has been little attention paid to the accuracy of parameters. The purpose of this article is to summarize the state of the art of parameter estimation in food science, to review some of the common food science models used for parameter estimation (for microbial inactivation and growth, thermal properties, and kinetics), and to suggest a generic method to standardize parameter estimation, thereby making research results more useful. Scaled sensitivity coefficients are introduced and shown to be important in parameter identifiability. Sequential estimation and optimal experimental design are also reviewed as powerful parameter estimation methods that are beginning to be used in the food science literature.
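Scaled sensitivity coefficients can be sketched with a few lines of NumPy. Here they are computed by central finite differences for an assumed first-order Arrhenius degradation model (parameter values invented), and they turn out near-proportional, the classic signal that the parameter pair is poorly identifiable from a single-temperature experiment.

```python
import numpy as np

R_GAS = 8.314  # J/(mol·K)

def model(t, k_ref, Ea, T=358.15, T_ref=353.15, C0=1.0):
    """Assumed first-order degradation with Arrhenius rate (illustrative)."""
    k = k_ref * np.exp(-Ea / R_GAS * (1.0 / T - 1.0 / T_ref))
    return C0 * np.exp(-k * t)

def scaled_sensitivity(t, theta, i, rel_step=1e-4):
    """X'_i(t) = theta_i * dy/dtheta_i, by central finite differences."""
    up, dn = list(theta), list(theta)
    h = theta[i] * rel_step
    up[i] += h
    dn[i] -= h
    return theta[i] * (model(t, *up) - model(t, *dn)) / (2.0 * h)

t = np.linspace(0.0, 60.0, 121)   # minutes
theta = (0.05, 80e3)              # k_ref [1/min], Ea [J/mol] (invented values)
Xk = scaled_sensitivity(t, theta, 0)
XE = scaled_sensitivity(t, theta, 1)

# Near-proportional scaled sensitivities mean the parameters cannot be
# separated from data at one temperature, which is why sequential
# estimation and optimal (multi-temperature) designs are recommended.
corr = np.corrcoef(Xk, XE)[0, 1]
print(round(corr, 3))  # → 1.0
```

Repeating the calculation with data pooled across several temperatures breaks the proportionality and makes both parameters identifiable.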
Directory of Open Access Journals (Sweden)
Khaled MAMMAR
2013-11-01
Full Text Available In this paper, a new approach based on Design of Experiments methodology (DoE) is used to estimate the optimal values of the unknown model parameters of a proton exchange membrane fuel cell (PEMFC). The proposed approach combines the central composite face-centered (CCF) design and a numerical PEMFC electrochemical model. Simulation results obtained using the electrochemical model help to predict the cell voltage in terms of inlet partial pressures of hydrogen and oxygen, stack temperature, and operating current. The previous model and the CCF design methodology are used for parametric analysis of the electrochemical model. Thus it is possible to evaluate the relative importance of each parameter to the simulation accuracy. Moreover, this methodology is able to define the exact values of the parameters from the manufacturer data. It was tested on the BCS 500-W stack PEM generator, a stack rated at 500 W, manufactured by the American company BCS Technologies FC.
Boskova, Veronika; Stadler, Tanja; Magnus, Carsten
2018-01-01
Each new virus introduced into the human population could potentially spread and cause a worldwide epidemic. Thus, early quantification of epidemic spread is crucial. Real-time sequencing followed by Bayesian phylodynamic analysis has proven to be extremely informative in this respect. Bayesian phylodynamic analyses require a model to be chosen and prior distributions on model parameters to be specified. We study here how choices regarding the tree prior influence quantification of epidemic spread in an emerging epidemic by focusing on estimates of the parameters clock rate, tree height, and reproductive number in the currently ongoing Zika virus epidemic in the Americas. While parameter estimates are quite robust to reasonable variations in the model settings when studying the complete data set, it is impossible to obtain unequivocal estimates when reducing the data to local Zika epidemics in Brazil and Florida, USA. Beyond the empirical insights, this study highlights the conceptual differences between the so-called birth-death and coalescent tree priors: while sequence sampling times alone can strongly inform the tree height and reproductive number under a birth-death model, the coalescent tree height prior is typically only slightly influenced by this information. Such conceptual differences, together with non-trivial interactions of different priors, complicate proper interpretation of empirical results. Overall, our findings indicate that phylodynamic analyses of early viral spread data must be carried out with care, as data sets may not necessarily be informative enough yet to provide estimates robust to prior settings. It is necessary to do a robustness check of these data sets by scanning several models and prior distributions. Only if the posterior distributions are robust to reasonable changes of the prior distribution can the parameter estimates be trusted. Such robustness tests will help make real-time phylodynamic analyses of spreading epidemics more
Hydrological model performance and parameter estimation in the wavelet-domain
Directory of Open Access Journals (Sweden)
B. Schaefli
2009-10-01
Full Text Available This paper proposes a method for rainfall-runoff model calibration and performance analysis in the wavelet-domain by fitting the estimated wavelet-power spectrum (a representation of the time-varying frequency content of a time series) of a simulated discharge series to that of the corresponding observed time series. As discussed in this paper, calibrating hydrological models so as to reproduce the time-varying frequency content of the observed signal can lead to different results than parameter estimation in the time-domain. Therefore, wavelet-domain parameter estimation has the potential to give new insights into model performance and to reveal model structural deficiencies. We apply the proposed method to synthetic case studies and a real-world discharge modeling case study and discuss how model diagnosis can benefit from an analysis in the wavelet-domain. The results show that for the real-world case study of precipitation-runoff modeling for a high alpine catchment, the calibrated discharge simulation captures the dynamics of the observed time series better than the results obtained through calibration in the time-domain. In addition, the wavelet-domain performance assessment of this case study highlights the frequencies that are not well reproduced by the model, which gives specific indications about how to improve the model structure.
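A rough illustration of the idea, using a hand-rolled Morlet transform rather than the paper's method: because the calibration objective compares wavelet power spectra, a simulated series with the right frequency content but a phase shift scores far better than one with the wrong periodicity.

```python
import numpy as np

def morlet_power(x, scales, w0=6.0):
    """Wavelet power |CWT|^2 via FFT with a Morlet wavelet (sketch)."""
    n = len(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n)       # angular frequencies
    X = np.fft.fft(x)
    power = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (s * omega - w0) ** 2)
        psi_hat = psi_hat * (omega > 0) * np.sqrt(2.0 * np.pi * s)
        power[i] = np.abs(np.fft.ifft(X * psi_hat)) ** 2
    return power

def wavelet_objective(q_sim, q_obs, scales):
    """Calibration objective: misfit between wavelet power spectra."""
    return float(np.mean((morlet_power(q_sim, scales) -
                          morlet_power(q_obs, scales)) ** 2))

# Toy discharge series: same mean, different dominant periodicity.
t = np.arange(512)
q_obs = 5.0 + np.sin(2 * np.pi * t / 32)
q_good = 5.0 + np.sin(2 * np.pi * t / 32 + 0.3)   # right frequency, phase shift
q_bad = 5.0 + np.sin(2 * np.pi * t / 90)          # wrong frequency content
scales = np.geomspace(4, 128, 20)

print(wavelet_objective(q_good, q_obs, scales) <
      wavelet_objective(q_bad, q_obs, scales))     # → True
```

A time-domain least-squares objective would penalize the phase-shifted series heavily; the wavelet-domain objective instead rewards reproducing the time-varying frequency content, which is the point made above.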
An iterative stochastic ensemble method for parameter estimation of subsurface flow models
International Nuclear Information System (INIS)
Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim
2013-01-01
Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied on several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates.
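The core of the approach can be sketched as follows (toy two-parameter "simulator", ensemble size and step lengths invented): directional derivatives from a small random ensemble replace adjoint gradients, and the Gauss-Newton step is solved with truncated SVD.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 25)

def blackbox(m):
    """Stand-in for the numerical simulator (no adjoint code available)."""
    return m[0] * np.exp(-m[1] * t)

d_obs = blackbox(np.array([2.0, 0.7])) + rng.normal(0, 0.01, len(t))

def tsvd_solve(J, r, k=2):
    """Truncated-SVD regularized least-squares solve of J dm = r."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    k = min(k, int(np.sum(s > 1e-10 * s[0])))
    return Vt[:k].T @ ((U[:, :k].T @ r) / s[:k])

m = np.array([1.0, 0.2])                             # initial guess
for _ in range(15):
    r = d_obs - blackbox(m)
    # Ensemble of directional derivatives instead of adjoint gradients.
    dM = 0.01 * rng.standard_normal((8, 2))          # 8 random directions
    dD = np.array([blackbox(m + dm) - blackbox(m) for dm in dM])
    Jt, *_ = np.linalg.lstsq(dM, dD, rcond=None)     # dD ≈ dM @ J.T
    m = m + tsvd_solve(Jt.T, r)                      # Gauss-Newton update

print(np.round(m, 2))  # estimate close to the true (2.0, 0.7)
```

Each iteration costs only a handful of forward runs, which is what makes the small-ensemble behavior reported above attractive when the simulator is expensive.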
Parameter and state estimation in a Neisseria meningitidis model: A study case of Niger
Bowong, S.; Mountaga, L.; Bah, A.; Tewa, J. J.; Kurths, J.
2016-12-01
Neisseria meningitidis (Nm) is a major cause of bacterial meningitis outbreaks in Africa and the Middle East. The availability of yearly reported meningitis cases in the African meningitis belt offers the opportunity to analyze the transmission dynamics and the impact of control strategies. In this paper, we propose a method for the estimation of state variables that are not accessible to measurements and of an unknown parameter in a Nm model. We suppose that the yearly number of Nm-induced deaths and the total population are known inputs, which can be obtained from data, and that the yearly number of new Nm cases is the model output. We also suppose that the Nm transmission rate is an unknown parameter. We first show how the recruitment rate into the population can be estimated using real data of the total population and Nm-induced mortality. Then, we use an auxiliary system called an observer, whose solutions converge exponentially to those of the original model. This observer does not use the unknown infection transmission rate but only the known inputs and the model output. This allows us to estimate unmeasured state variables such as the number of carriers, who play an important role in the transmission of the infection, and the total number of infected individuals within a human community. Finally, we also provide a simple method to estimate the unknown Nm transmission rate. In order to validate the estimation results, numerical simulations are conducted using real data of Niger.
A Ramp Cosine Cepstrum Model for the Parameter Estimation of Autoregressive Systems at Low SNR
Directory of Open Access Journals (Sweden)
Shaikh Anowarul Fattah
2010-01-01
Full Text Available A new cosine cepstrum model-based scheme is presented for the parameter estimation of a minimum-phase autoregressive (AR) system under low levels of signal-to-noise ratio (SNR). A ramp cosine cepstrum (RCC) model for the one-sided autocorrelation function (OSACF) of an AR signal is first proposed by considering both white noise and periodic impulse-train excitations. Using the RCC model, a residue-based least-squares optimization technique that guarantees the stability of the system is then presented in order to estimate the AR parameters from noisy output observations. For the purpose of implementation, the discrete cosine transform, which can efficiently handle the phase unwrapping problem and offer computational advantages as compared to the discrete Fourier transform, is employed. From extensive experimentation on AR systems of different orders, it is shown that the proposed method is capable of estimating parameters accurately and consistently in comparison to some of the existing methods for SNR levels as low as −5 dB. As a practical application of the proposed technique, simulation results are also provided for the identification of a human vocal tract system using noise-corrupted natural speech signals, demonstrating superior estimation performance in terms of the power spectral density of the synthesized speech signals.
Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models
DEFF Research Database (Denmark)
Rasmussen, Klaus Bolding
1994-01-01
The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method...... is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model...
The limiting behavior of the estimated parameters in a misspecified random field regression model
DEFF Research Database (Denmark)
Dahl, Christian Møller; Qin, Yu
This paper examines the limiting properties of the estimated parameters in the random field regression model recently proposed by Hamilton (Econometrica, 2001). Though the model is parametric, it enjoys the flexibility of the nonparametric approach since it can approximate a large collection...... convenient new uniform convergence results that we propose. This theory may have applications beyond those presented here. Our results indicate that classical statistical inference techniques, in general, work very well for random field regression models in finite samples and that these models successfully...
Bayesian parameter estimation for the Wnt pathway: an infinite mixture models approach.
Koutroumpas, Konstantinos; Ballarini, Paolo; Votsi, Irene; Cournède, Paul-Henry
2016-09-01
Likelihood-free methods, like Approximate Bayesian Computation (ABC), have been extensively used in model-based statistical inference with intractable likelihood functions. When combined with Sequential Monte Carlo (SMC) algorithms they constitute a powerful approach for parameter estimation and model selection of mathematical models of complex biological systems. A crucial step in the ABC-SMC algorithms, significantly affecting their performance, is the propagation of a set of parameter vectors through a sequence of intermediate distributions using Markov kernels. In this article, we employ Dirichlet process mixtures (DPMs) to design optimal transition kernels and we present an ABC-SMC algorithm with DPM kernels. We illustrate the use of the proposed methodology using real data for the canonical Wnt signaling pathway. A multi-compartment model of the pathway is developed and compared to an existing model. The results indicate that DPMs are more efficient in the exploration of the parameter space and can significantly improve ABC-SMC performance. In comparison to alternative sampling schemes that are commonly used, the proposed approach can bring potential benefits in the estimation of complex multimodal distributions. The method is used to estimate the parameters and the initial state of two models of the Wnt pathway, and it is shown that the multi-compartment model fits the experimental data better. Python scripts for the Dirichlet Process Gaussian Mixture model and the Gibbs sampler are available at https://sites.google.com/site/kkoutroumpas/software konstantinos.koutroumpas@ecp.fr. © The Author 2016. Published by Oxford University Press. All rights reserved.
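For orientation, here is a bare-bones ABC-SMC loop with a Gaussian perturbation kernel; the paper's contribution is precisely to replace this kernel with a Dirichlet-process-mixture kernel adapted to the current particle population. The one-parameter toy "simulator" and tolerance schedule below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, n=50):
    """Toy one-parameter stochastic model (stand-in for a pathway simulator)."""
    return rng.normal(theta, 1.0, n)

obs = np.mean(simulate(3.0))  # observed summary statistic

def abc_smc(eps_schedule, n_particles=200):
    """ABC-SMC with a Gaussian perturbation kernel; a DPM kernel would
    replace the perturbation step to handle multimodal populations."""
    particles = rng.uniform(-10, 10, n_particles)          # prior samples
    weights = np.full(n_particles, 1.0 / n_particles)
    for eps in eps_schedule:                               # shrinking tolerances
        sigma = 2.0 * np.sqrt(np.cov(particles, aweights=weights))
        new = np.empty(n_particles)
        for i in range(n_particles):
            while True:
                idx = rng.choice(n_particles, p=weights)
                cand = particles[idx] + rng.normal(0, sigma)   # kernel move
                if not -10 <= cand <= 10:                      # prior support
                    continue
                if abs(np.mean(simulate(cand)) - obs) < eps:   # ABC acceptance
                    new[i] = cand
                    break
        # Uniform prior => importance weight proportional to 1 / kernel density.
        dens = np.array([np.sum(weights *
                                np.exp(-0.5 * ((x - particles) / sigma) ** 2))
                         for x in new])
        particles, weights = new, (1.0 / dens) / np.sum(1.0 / dens)
    return particles, weights

particles, weights = abc_smc([2.0, 1.0, 0.5])
print(round(float(np.sum(weights * particles)), 1))  # posterior mean, near 3.0
```

The perturbation step is the performance bottleneck discussed above: a poorly matched kernel wastes simulator calls on rejected candidates, which is what the DPM kernels are designed to avoid.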
Stochastic models and reliability parameter estimation applicable to nuclear power plant safety
International Nuclear Information System (INIS)
Mitra, S.P.
1979-01-01
A set of stochastic models and related estimation schemes for reliability parameters are developed. The models are applicable for evaluating reliability of nuclear power plant systems. Reliability information is extracted from model parameters which are estimated from the type and nature of failure data that is generally available or could be compiled in nuclear power plants. Principally, two aspects of nuclear power plant reliability have been investigated: (1) the statistical treatment of in-plant component and system failure data; (2) the analysis and evaluation of common mode failures. The model inputs are failure data which have been classified as either the time type of failure data or the demand type of failure data. Failures of components and systems in nuclear power plants are, in general, rare events. This gives rise to sparse failure data. Estimation schemes for treating sparse data, whenever necessary, have been considered. The following five problems have been studied: 1) Distribution of sparse failure rate component data. 2) Failure rate inference and reliability prediction from time type of failure data. 3) Analyses of demand type of failure data. 4) Common mode failure model applicable to time type of failure data. 5) Estimation of common mode failures from 'near-miss' demand type of failure data
Joint Parameter Estimation for the Two-Wave with Diffuse Power Fading Model
Directory of Open Access Journals (Sweden)
Jesus Lopez-Fernandez
2016-06-01
Full Text Available Wireless sensor networks deployed within metallic cavities are known to suffer from a very severe fading, even in strong line-of-sight propagation conditions. This behavior is well-captured by the Two-Wave with Diffuse Power (TWDP fading distribution, which shows great fit to field measurements in such scenarios. In this paper, we address the joint estimation of the parameters K and Δ that characterize the TWDP fading model, based on the observation of the received signal envelope. We use a moment-based approach to derive closed-form expressions for the estimators of K and Δ, as well as closed-form expressions for their asymptotic variance. Results show that the estimation error is close to the Cramer-Rao lower bound for a wide range of values of the parameters K and Δ. The performance degradation due to a finite number of observations is also analyzed.
A multi-task learning approach for compartmental model parameter estimation in DCE-CT sequences.
Romain, Blandine; Letort, Véronique; Lucidarme, Olivier; Rouet, Laurence; Dalché-Buc, Florence
2013-01-01
Today's follow-up of patients presenting abdominal tumors is generally performed through acquisition of dynamic sequences of contrast-enhanced CT. Estimating parameters of appropriate models of contrast intake diffusion through tissues should help characterizing the tumor physiology, but is impeded by the high level of noise inherent to the acquisition conditions. To improve the quality of estimation, we consider parameter estimation in voxels as a multi-task learning problem (one task per voxel) that takes advantage from the similarity between two tasks. We introduce a temporal similarity between tasks based on a robust distance between observed contrast-intake profiles of intensity. Using synthetic images, we compare multi-task learning using this temporal similarity, a spatial similarity and a single-task learning. The similarities based on temporal profiles are shown to bring significant improvements compared to the spatial one. Results on real CT sequences also confirm the relevance of the approach.
Amalia, Junita; Purhadi, Otok, Bambang Widjanarko
2017-11-01
Poisson distribution is a discrete distribution for count data; it has one parameter that defines both the mean and the variance. Poisson regression assumes that mean and variance are equal (equidispersion). Nonetheless, some count data do not satisfy this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion causes underestimated standard errors and, in turn, incorrect decisions in statistical tests. Paired count data have a correlation and follow a bivariate Poisson distribution. If there is over-dispersion, simple bivariate Poisson regression is not sufficient for modeling paired count data. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling paired count data with over-dispersion. The BPIGR model produces a global model for all locations. On the other hand, each location has different geographic, social, cultural, and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function of each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimation of the GWBPIGR model is obtained by the Maximum Likelihood Estimation (MLE) method, while hypothesis testing of the GWBPIGR model is carried out by the Maximum Likelihood Ratio Test (MLRT) method.
A cooperative strategy for parameter estimation in large scale systems biology models.
Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R
2012-06-22
Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel in different processors. Each thread implements a state of the art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and allows to speed up performance. Two parameter estimation problems involving models related with the central carbon metabolism of E. coli which include different regulatory levels (metabolic and transcriptional) are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and
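The cooperation idea can be sketched with Python threads: each thread runs its own simple local search (a (1+1) evolution strategy here, standing in for the paper's enhanced Scatter Search) and periodically exchanges its incumbent with a shared best solution. The benchmark cost, thread count, and sharing interval are invented for illustration.

```python
import threading
import numpy as np

def cost(x):
    """Multimodal Rastrigin benchmark, standing in for a calibration cost."""
    return 10 * len(x) + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

DIM, ITERS, SHARE_EVERY = 5, 3000, 200
best_shared = {"x": None, "f": float("inf")}
lock = threading.Lock()

def island(seed):
    """One thread of the cooperative scheme: a (1+1) evolution strategy
    that periodically exchanges its incumbent with the shared best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, DIM)
    fx = cost(x)
    for it in range(ITERS):
        cand = x + rng.normal(0, 0.3, DIM)
        fc = cost(cand)
        if fc < fx:                                   # keep improvements only
            x, fx = cand, fc
        if it % SHARE_EVERY == 0:
            with lock:                                # cooperation step
                if fx < best_shared["f"]:
                    best_shared["x"], best_shared["f"] = x.copy(), fx
                elif best_shared["f"] < fx:
                    x, fx = best_shared["x"].copy(), best_shared["f"]

threads = [threading.Thread(target=island, args=(s,)) for s in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(round(best_shared["f"], 2))  # best cost found by the cooperating threads
```

The information-sharing step is what changes the systemic behavior described above: a thread stuck in a poor basin adopts the shared incumbent instead of wasting its remaining budget.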
Genetic Algorithms for a Parameter Estimation of a Fermentation Process Model: A Comparison
Directory of Open Access Journals (Sweden)
Olympia Roeva
2005-12-01
Full Text Available In this paper the problem of parameter estimation using genetic algorithms is examined. A case study considering the estimation of 6 parameters of a nonlinear dynamic model of E. coli fermentation is presented as a test problem. The parameter estimation problem is stated as a nonlinear programming problem subject to nonlinear differential-algebraic constraints. This problem is known to be frequently ill-conditioned and multimodal. Thus, traditional (gradient-based) local optimization methods fail to arrive at satisfactory solutions. To overcome their limitations, the use of different genetic algorithms as stochastic global optimization methods is explored. These algorithms have proved to be very suitable for the optimization of highly non-linear problems with many variables. Genetic algorithms can guarantee global optimality and robustness. These facts make them advantageous for parameter identification of fermentation models. A comparison between simple, modified and multi-population genetic algorithms is presented. The best result is obtained using the modified genetic algorithm. The considered algorithms converged to very similar cost values, but the modified algorithm is several times faster than the other two.
Directory of Open Access Journals (Sweden)
THANH TUNG KHUAT
2017-05-01
Full Text Available Artificial Bee Colony, inspired by the foraging behaviour of honey bees, is a novel meta-heuristic optimization algorithm in the community of swarm intelligence algorithms. Nevertheless, it is still insufficient in the speed of convergence and the quality of solutions. This paper proposes an approach to tackle these downsides by combining the positive aspects of Teaching-Learning-Based Optimization and Artificial Bee Colony. The performance of the proposed method is assessed on the software effort estimation problem, which is a complex and important issue in project management. Software developers often carry out effort estimation in the early stages of the software development life cycle to derive the required cost and schedule for a project. There are a large number of methods for effort estimation, of which COCOMO II is one of the most widely used models. However, this model has some restrictions because its parameters have not been optimized. In this work, therefore, we present an approach to overcome this limitation of the COCOMO II model. The experiments were conducted on a NASA software project dataset, and the obtained results indicated that the improved parameters provided better estimation capabilities compared to the original COCOMO II model.
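For reference, the COCOMO II post-architecture effort equation that such approaches re-calibrate can be written directly. The constants A = 2.94 and B = 0.91 and the nominal scale-factor ratings are the published COCOMO II.2000 values; the example project size is invented.

```python
def cocomo_ii_effort(ksloc, scale_factors, effort_multipliers, A=2.94, B=0.91):
    """COCOMO II post-architecture effort (person-months):
    PM = A * Size^E * prod(EM), with E = B + 0.01 * sum(SF)."""
    E = B + 0.01 * sum(scale_factors)   # scale exponent
    em = 1.0
    for m in effort_multipliers:        # product of the 17 effort multipliers
        em *= m
    return A * ksloc ** E * em

# All five scale factors and all 17 effort multipliers at nominal ratings.
nominal_sf = [3.72, 3.04, 4.24, 3.29, 4.68]   # PREC, FLEX, RESL, TEAM, PMAT
pm = cocomo_ii_effort(100, nominal_sf, [1.0] * 17)
print(round(pm, 1))  # nominal 100-KSLOC project, roughly 465 person-months
```

Optimizing the model for a local dataset, as proposed above, amounts to re-fitting A, B, and the rating scales so that predicted person-months match the organization's historical projects.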
A new model to estimate insulin resistance via clinical parameters in adults with type 1 diabetes.
Zheng, Xueying; Huang, Bin; Luo, Sihui; Yang, Daizhi; Bao, Wei; Li, Jin; Yao, Bin; Weng, Jianping; Yan, Jinhua
2017-05-01
Insulin resistance (IR) is a risk factor for the development of micro- and macrovascular complications in type 1 diabetes (T1D). However, diabetes management in adults with T1D is hampered by the lack of simple and reliable methods to estimate insulin resistance. The aim of this study was to develop a new model to estimate IR from clinical parameters in adults with T1D. A total of 36 adults with adulthood-onset T1D (n = 20) or childhood-onset T1D (n = 16) were recruited by quota sampling. After an overnight insulin infusion to stabilize blood glucose at 5.6 to 7.8 mmol/L, they underwent a 180-minute euglycemic-hyperinsulinemic clamp. Glucose disposal rate (GDR, mg·kg⁻¹·min⁻¹) was calculated from data collected during the last 30 minutes of the test. Demographic factors (age, sex, and diabetes duration) and metabolic parameters (blood pressure, glycated hemoglobin A1c [HbA1c], waist-to-hip ratio [WHR], and lipids) were collected to evaluate insulin resistance. Age at diabetes onset and the clinical parameters were then used to model lnGDR by stepwise linear regression. The stepwise process yielded a best model for estimating insulin resistance comprising HbA1c, diastolic blood pressure, and WHR; age at diabetes onset did not enter any of the models. We propose the following new model to estimate IR as lnGDR for adults with T1D: lnGDR = 4.964 − 0.121 × HbA1c (%) − 0.012 × diastolic blood pressure (mmHg) − 1.409 × WHR (adjusted R² = 0.616). Insulin resistance in adults living with T1D can thus be estimated from routinely collected clinical parameters. This simple model provides a potential tool for estimating IR in large-scale epidemiological studies of adults with T1D, regardless of age at onset. Copyright © 2016 John Wiley & Sons, Ltd.
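The fitted regression reported in the abstract translates directly into code. A sketch (units as stated: HbA1c in %, diastolic blood pressure in mmHg, WHR as a fraction; the function name is my own):

```python
import math

def estimate_gdr(hba1c_pct, diastolic_bp_mmhg, waist_hip_ratio):
    """Estimated glucose disposal rate (mg·kg⁻¹·min⁻¹) from the fitted model
    lnGDR = 4.964 - 0.121*HbA1c - 0.012*DBP - 1.409*WHR; a lower GDR means
    greater insulin resistance."""
    ln_gdr = (4.964
              - 0.121 * hba1c_pct
              - 0.012 * diastolic_bp_mmhg
              - 1.409 * waist_hip_ratio)
    return math.exp(ln_gdr)
```

For example, an adult with HbA1c 8%, diastolic pressure 80 mmHg and WHR 0.9 gets an estimated GDR of about 5.9 mg·kg⁻¹·min⁻¹; raising HbA1c lowers the estimate, as the negative coefficient implies.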
Trap configuration and spacing influences parameter estimates in spatial capture-recapture models.
Sun, Catherine C; Fuller, Angela K; Royle, J Andrew
2014-01-01
An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
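The spatial scale parameter sigma discussed here enters through the detection function. A sketch of the standard half-normal form used in most SCR models (baseline detection probability p0 is an assumption of the illustration, not a value from the study):

```python
import math

def detection_prob(trap_xy, center_xy, p0, sigma):
    """Half-normal SCR detection model: the probability that an individual
    with activity centre `center_xy` is detected at a trap declines with
    squared distance, at a rate set by the spatial scale parameter sigma."""
    d2 = (trap_xy[0] - center_xy[0]) ** 2 + (trap_xy[1] - center_xy[1]) ** 2
    return p0 * math.exp(-d2 / (2.0 * sigma ** 2))
```

At a distance of 2·sigma the detection probability has already dropped to about 14% of p0, which is one way to see the simulation finding that traps should be spaced no more than twice sigma apart: wider spacing leaves too little information to estimate sigma itself.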
Directory of Open Access Journals (Sweden)
Nelson Peter
2006-11-01
Aim: To estimate the key transmission parameters associated with an outbreak of pandemic influenza in an institutional setting (New Zealand, 1918). Methods: Historical morbidity and mortality data were obtained from the report of the medical officer for a large military camp. A susceptible-exposed-infectious-recovered (SEIR) epidemiological model was solved numerically to find a range of best-fit estimates for the key epidemic parameters and an incidence curve. Mortality data were subsequently modelled by convolving the incidence distribution with a best-fit incidence-mortality lag distribution. Results: Basic reproduction number (R0) values for three possible scenarios ranged between 1.3 and 3.1, with corresponding average latent period estimates of 0.7 to 1.3 days and infectious period estimates of 0.2 to 0.3 days. The mean and median best-estimate incidence-mortality lag periods were 6.9 and 6.6 days respectively. This delay is consistent with secondary bacterial pneumonia being a relatively important cause of death in this predominantly young male population. Conclusion: These R0 estimates are broadly consistent with others made for the 1918 influenza pandemic and are not particularly large relative to some other infectious diseases. This finding suggests that if a novel influenza strain of similar virulence emerged, it could potentially be controlled through the prompt use of major public health measures.
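A minimal sketch of the SEIR model the study fits, integrated with forward Euler; the camp size, initial conditions and rates below are placeholders, not the paper's fitted values. In this formulation R0 = beta/gamma, 1/sigma is the mean latent period and 1/gamma the mean infectious period.

```python
def seir_epidemic(beta, sigma, gamma, n=1000.0, i0=1.0, days=120, dt=0.05):
    """Deterministic SEIR model; returns the daily incidence curve
    (new symptom onsets per day, i.e. the E -> I flux)."""
    s, e, i, r = n - i0, 0.0, i0, 0.0
    daily_incidence = []
    steps_per_day = int(round(1.0 / dt))
    for _day in range(days):
        new_cases = 0.0
        for _ in range(steps_per_day):
            infections = beta * s * i / n * dt   # S -> E
            onsets = sigma * e * dt              # E -> I
            recoveries = gamma * i * dt          # I -> R
            s -= infections
            e += infections - onsets
            i += onsets - recoveries
            r += recoveries
            new_cases += onsets
        daily_incidence.append(new_cases)
    return daily_incidence
```

With R0 = 2 the classical final-size relation predicts that roughly 80% of the population is eventually infected, which the simulated incidence curve reproduces; fitting beta, sigma and gamma to an observed curve is the estimation step the paper performs.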
DEFF Research Database (Denmark)
Ferrari, A.; Gutierrez, S.; Sin, Gürkan
2016-01-01
A steady state model for a production scale milk drying process was built to help process understanding and optimization studies. It involves a spray chamber and also internal/external fluid beds. The model was subjected to a comprehensive statistical analysis for quality assurance using … sensitivity analysis of inputs/parameters, and uncertainty analysis to estimate confidence intervals on parameters and model predictions (error propagation). Variance based sensitivity analysis (Sobol's method) was used to quantify the influence of inputs on the final powder moisture as the model output … at chamber inlet air (variation > 100%). The sensitivity analysis results suggest exploring improvements in the current control (Proportional Integral Derivative) for moisture content at concentrate chamber feed in order to reduce the output variance. It is also confirmed that humidity control at chamber …
Estimation of anisotropy parameters in organic-rich shale: Rock physics forward modeling approach
Energy Technology Data Exchange (ETDEWEB)
Herawati, Ida, E-mail: ida.herawati@students.itb.ac.id; Winardhi, Sonny; Priyono, Awali [Mining and Petroleum Engineering Faculty, Institut Teknologi Bandung, Bandung, 40132 (Indonesia)
2015-09-30
Anisotropy analysis is an important step in the processing and interpretation of seismic data, and a key part of it is anisotropy parameter estimation, which can draw on well data, core data or seismic data. In seismic data, anisotropy parameter calculation is generally based on velocity moveout analysis; its accuracy, however, depends on data quality, available offset, and velocity moveout picking. Estimation from seismic data is needed to obtain wide coverage of the anisotropy of a particular layer. In an anisotropic reservoir, analysis of anisotropy parameters also helps us better understand reservoir characteristics: anisotropy parameters, especially ε, are related to rock properties and lithology determination. The current research aims to estimate anisotropy parameters from seismic data and integrate well data, with a case study in a potential shale gas reservoir. Owing to the complexity of organic-rich shale reservoirs, an extensive multidisciplinary study is needed to understand them; shale itself has intrinsic anisotropy caused by the lamination of its constituent minerals. To link rock physics with seismic response, it is necessary to build a forward model of organic-rich shale. This paper focuses on the relationships between reservoir properties, such as clay content, porosity and total organic content, and anisotropy. Organic content, which defines the prospectivity of shale gas, can be treated as solid background, as solid inclusion, or as both. The forward modeling results show that the presence of organic matter increases anisotropy in shale. Relationships between total organic content and other seismic properties, such as acoustic impedance and Vp/Vs, are also presented.
PESTO: Parameter EStimation TOolbox.
Stapor, Paul; Weindl, Daniel; Ballnus, Benjamin; Hug, Sabine; Loos, Carolin; Fiedler, Anna; Krause, Sabrina; Hroß, Sabrina; Fröhlich, Fabian; Hasenauer, Jan; Wren, Jonathan
2018-02-15
PESTO is a widely applicable and highly customizable toolbox for parameter estimation in MathWorks MATLAB. It offers scalable algorithms for optimization, uncertainty and identifiability analysis, which work in a very generic manner, treating the objective function as a black box. Hence, PESTO can be used for any parameter estimation problem, for which the user can provide a deterministic objective function in MATLAB. PESTO is a MATLAB toolbox, freely available under the BSD license. The source code, along with extensive documentation and example code, can be downloaded from https://github.com/ICB-DCM/PESTO/. jan.hasenauer@helmholtz-muenchen.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
Ben Slama, Amine; Mouelhi, Aymen; Sahli, Hanene; Manoubi, Sondes; Mbarek, Chiraz; Trabelsi, Hedi; Fnaiech, Farhat; Sayadi, Mounir
2017-07-01
The diagnosis of vestibular neuritis (VN) presents many difficulties for traditional assessment methods. This paper describes a fully automatic VN diagnostic system based on nystagmus parameter estimation using a pupil detection algorithm. A geodesic active contour model is implemented to find an accurate segmentation of the pupil region. The novelty of the proposed algorithm is that it speeds up the standard segmentation by using a specific mask located on the region of interest, which allows a drastic reduction in computing time together with high performance and accuracy. After this fast segmentation step, the estimated parameters are represented in the time and frequency domains. A principal component analysis (PCA) selection procedure is then applied to obtain a reduced set of estimated parameters, which are used to train a multi neural network (MNN). Experimental results on 90 eye movement videos show the effectiveness and accuracy of the proposed estimation algorithm compared with previous work. Copyright © 2017 Elsevier B.V. All rights reserved.
The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maxi...
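The truncated record above refers to the standard Hill concentration-response curve. A sketch of that function (parameter names are the conventional ones, not taken from the record):

```python
def hill_response(conc, e0, emax, ec50, n):
    """Hill concentration-response model. Efficacy (emax) and potency (ec50)
    appear directly as parameters, which is why their point estimates are
    biologically meaningful metrics of toxicity; n is the Hill slope."""
    return e0 + (emax - e0) * conc ** n / (ec50 ** n + conc ** n)
```

By construction the response equals e0 at zero concentration, the midpoint (e0 + emax)/2 at conc = ec50, and approaches emax at high concentration; least squares or maximum likelihood fits recover (e0, emax, ec50, n) from assay data.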
Directory of Open Access Journals (Sweden)
Michala Jakubcová
2015-01-01
The paper analyses selected versions of the particle swarm optimization (PSO) algorithm. The tested PSO versions were combined with a shuffling mechanism, which splits the model population into complexes and performs distributed PSO optimization. One of them, APartW, is a newly proposed PSO modification that enhances global exploration and local exploitation in the parameter space through a new updating mechanism applied to the PSO inertia weight. The performance of the four selected PSO methods was tested on 11 benchmark optimization problems prepared for the CEC 2005 special session on single-objective real-parameter optimization. The results confirm that the new APartW variant is comparable with the existing distributed PSO versions AdaptW and LinTimeVarW. The distributed PSO versions were developed for solving inverse problems related to the estimation of parameters of the hydrological model Bilan. The results of a case study on a set of 30 catchments from the MOPEX database show that the tested distributed PSO versions provide suitable estimates of the Bilan model parameters and can thus be used for solving related inverse problems during calibration of the studied water balance hydrological model.
Directory of Open Access Journals (Sweden)
Aijia Ouyang
2015-01-01
Nonlinear Muskingum models are important tools in hydrological forecasting. In this paper, we propose a class of new discretization schemes, parameterized by θ, that approximate the nonlinear Muskingum model on the basis of general trapezoid formulas. The accuracy of these schemes is second order when θ ≠ 1/3, but, interestingly, when θ = 1/3 it improves to third order. The schemes are then recast as an unconstrained optimization problem, which is solved with a hybrid invasive weed optimization (HIWO) algorithm. Finally, a numerical example illustrates the effectiveness of the methods. The numerical results substantiate that the presented methods estimate the parameters of nonlinear Muskingum models with better precision.
Lee, T. S.; Yoon, S.; Jeong, C.
2012-12-01
The primary purpose of frequency analysis in hydrology is to estimate the magnitude of an event with a given frequency of occurrence. The precision of frequency analysis depends on the selection of an appropriate probability distribution model (PDM) and parameter estimation techniques. A number of PDMs have been developed to describe the probability distribution of the hydrological variables. For each of the developed PDMs, estimated parameters are provided based on alternative estimation techniques, such as the method of moments (MOM), probability weighted moments (PWM), linear function of ranked observations (L-moments), and maximum likelihood (ML). Generally, the results using ML are more reliable than the other methods. However, the ML technique is more laborious than the other methods because an iterative numerical solution, such as the Newton-Raphson method, must be used for the parameter estimation of PDMs. In the meantime, meta-heuristic approaches have been developed to solve various engineering optimization problems (e.g., linear and stochastic, dynamic, nonlinear). These approaches include genetic algorithms, ant colony optimization, simulated annealing, tabu searches, and evolutionary computation methods. Meta-heuristic approaches use a stochastic random search instead of a gradient search so that intricate derivative information is unnecessary. Therefore, the meta-heuristic approaches have been shown to be a useful strategy to solve optimization problems in hydrology. A number of studies focus on using meta-heuristic approaches for estimation of hydrological variables with parameter estimation of PDMs. Applied meta-heuristic approaches offer reliable solutions but use more computation time than derivative-based methods. Therefore, the purpose of this study is to enhance the meta-heuristic approach for the parameter estimation of PDMs by using a recently developed algorithm known as a harmony search (HS). The performance of the HS is compared to the
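The harmony search the study advocates can be sketched minimally as follows. The control parameters (harmony memory size, HMCR, PAR, bandwidth) follow the standard algorithm; the objective passed in would be, for instance, the negative log-likelihood of a probability distribution model, though the sketch works for any function.

```python
import random

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iterations=2000, seed=0):
    """Minimal harmony search: each new 'harmony' recalls values from memory
    with probability hmcr (pitch-adjusted by up to +-bw of the range with
    probability par) and draws random values otherwise; the worst memory
    member is replaced whenever the new harmony improves on it."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                v = rng.choice(memory)[j]           # memory consideration
                if rng.random() < par:
                    v += rng.uniform(-bw, bw) * (hi - lo)  # pitch adjustment
            else:
                v = rng.uniform(lo, hi)             # random selection
            new.append(min(hi, max(lo, v)))
        s = objective(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]
```

No gradient information appears anywhere, which is the point made in the abstract: the intricate derivatives required by Newton-Raphson for maximum likelihood fitting are unnecessary.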
Directory of Open Access Journals (Sweden)
Y. Miyazawa
2013-04-01
Using ocean-atmosphere simulation models together with field observation data, we evaluate the parameters for the total caesium-137 amounts of the direct release into the ocean and of the atmospheric deposition over the western North Pacific caused by the accident at the Fukushima Daiichi nuclear power plant (FNPP) in March 2011. A Green's function approach is adopted to estimate the two parameters determining the total emission amounts for the period from 12 March to 6 May 2011. The validity of the estimation is confirmed to depend on the simulation skill near FNPP. The total amount of the direct release is estimated as 5.5-5.9 × 10¹⁵ Bq, while that of the atmospheric deposition is estimated as 5.5-9.7 × 10¹⁵ Bq; the latter range is broader than that of the direct release, owing to the uncertainty of the dispersion spread widely over the western North Pacific.
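For two sources, the Green's function approach reduces to linear least squares: the model is run once per unit source, and the two amplitudes scaling those unit responses are chosen to best fit the observations. A self-contained sketch of that 2-parameter solve (the data in the usage note are synthetic, not the caesium observations):

```python
def estimate_source_amplitudes(g1, g2, observed):
    """Ordinary least squares for observed ~ a1*g1 + a2*g2, where g1 and g2
    are the simulated responses to unit releases from each source; the
    returned (a1, a2) are the total-emission scaling parameters."""
    s11 = sum(a * a for a in g1)
    s22 = sum(b * b for b in g2)
    s12 = sum(a * b for a, b in zip(g1, g2))
    y1 = sum(a * y for a, y in zip(g1, observed))
    y2 = sum(b * y for b, y in zip(g2, observed))
    det = s11 * s22 - s12 * s12          # normal-equation determinant
    a1 = (s22 * y1 - s12 * y2) / det
    a2 = (s11 * y2 - s12 * y1) / det
    return a1, a2
```

The estimate is only as good as the simulated unit responses near the measurement sites, which is the abstract's point that validity depends on simulation skill near FNPP.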
Parameter estimation of a pulp digester model with derivative-free optimization strategies
Seiça, João C.; Romanenko, Andrey; Fernandes, Florbela P.; Santos, Lino O.; Fernandes, Natércia C. P.
2017-07-01
This work concerns parameter estimation in the context of mechanistic modelling of a pulp digester. The problem is cast as a box-bounded nonlinear global optimization problem that minimizes the mismatch between the model outputs and the experimental data observed at a real pulp and paper plant. The MCSFilter and Simulated Annealing global optimization methods were used to solve the problem. While the former took longer to converge to the global minimum, the latter terminated faster at a significantly higher value of the objective function and thus failed to find the global solution.
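A minimal sketch of box-bounded simulated annealing of the kind compared here; the neighbourhood scale, cooling schedule and objective are illustrative choices, not the study's settings.

```python
import math
import random

def simulated_annealing(objective, bounds, t0=1.0, cooling=0.995,
                        iterations=5000, seed=0):
    """Box-bounded simulated annealing: better moves are always accepted,
    worse moves with Boltzmann probability exp(-dE/T), so the search can
    escape local minima while the temperature is high."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    fx = objective(x)
    best, fbest, t = list(x), fx, t0
    for _ in range(iterations):
        cand = [min(hi, max(lo, v + rng.gauss(0, 0.1 * (hi - lo))))
                for v, (lo, hi) in zip(x, bounds)]
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(min(0.0, (fx - fc) / t)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest
```

Premature cooling is one way such a method can terminate quickly at a poor objective value, which matches the outcome reported for the digester problem.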
Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.
2014-12-01
This paper evaluated the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model, using Bayesian inference techniques via Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin in southeastern Brazil, was selected for the study. We used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov chain Monte Carlo simulation method DREAM (VRUGT, 2008a). Two probabilistic models for the residuals were analyzed: (i) the classic Normal likelihood, r ~ N(0, σ²); and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which the differences between observed and simulated flows are assumed to be correlated, non-stationary, and distributed as a skew exponential power density. The assumptions made for both models were checked to ensure that the estimation of parameter uncertainties was not biased. The results showed that the Bayesian approach is adequate for the proposed objectives and reinforce the importance of assessing the uncertainties associated with hydrological modeling.
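DREAM itself is an adaptive multi-chain sampler, but the core posterior-sampling idea can be shown with a plain random-walk Metropolis sketch; the log-posterior would combine the Normal (or generalized) residual likelihood with the parameter priors.

```python
import math
import random

def metropolis(log_post, init, step, n_samples=5000, seed=0):
    """Random-walk Metropolis sampler: proposes Gaussian perturbations of the
    parameter vector and accepts with probability min(1, exp(dlogp)), so the
    retained samples approximate the posterior distribution."""
    rng = random.Random(seed)
    theta = list(init)
    lp = log_post(theta)
    samples = []
    for _ in range(n_samples):
        prop = [v + rng.gauss(0, s) for v, s in zip(theta, step)]
        lp_prop = log_post(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        samples.append(list(theta))
    return samples
```

The spread of the retained samples, after discarding a burn-in period, is the parameter uncertainty the paper quantifies; posterior predictive checks on the residuals then verify the likelihood assumptions.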
Data2Dynamics: a modeling environment tailored to parameter estimation in dynamical systems.
Raue, A; Steiert, B; Schelker, M; Kreutz, C; Maiwald, T; Hass, H; Vanlier, J; Tönsing, C; Adlung, L; Engesser, R; Mader, W; Heinemann, T; Hasenauer, J; Schilling, M; Höfer, T; Klipp, E; Theis, F; Klingmüller, U; Schöberl, B; Timmer, J
2015-11-01
Modeling of dynamical systems using ordinary differential equations is a popular approach in the field of systems biology. Two of the most critical steps in this approach are to construct dynamical models of biochemical reaction networks for large datasets and complex experimental conditions and to perform efficient and reliable parameter estimation for model fitting. We present a modeling environment for MATLAB that addresses these challenges. The numerically expensive parts of the calculations, such as the solving of the differential equations and of the associated sensitivity system, are parallelized and automatically compiled into efficient C code. A variety of parameter estimation algorithms as well as frequentist and Bayesian methods for uncertainty analysis have been implemented and used on a range of applications that led to publications. The Data2Dynamics modeling environment is MATLAB based, open source and freely available at http://www.data2dynamics.org. andreas.raue@fdm.uni-freiburg.de Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
HMM filtering and parameter estimation of an electricity spot price model
International Nuclear Information System (INIS)
Erlwein, Christina; Benth, Fred Espen; Mamon, Rogemar
2010-01-01
In this paper we develop a model for electricity spot price dynamics. The spot price is assumed to follow an exponential Ornstein-Uhlenbeck (OU) process with an added compound Poisson process, so the model allows for mean reversion and possible jumps. All parameters are modulated by a hidden Markov chain in discrete time and are able to switch between different economic regimes representing the interaction of various factors. Through the application of the reference probability technique, adaptive filters are derived which, in turn, provide optimal estimates of the state of the Markov chain and related quantities of the observation process. The EM algorithm is applied to find optimal estimates of the model parameters in terms of the recursive filters. We implement this self-calibrating model on a deseasonalised series of daily spot electricity prices from the Nordic exchange Nord Pool. On the basis of one-step-ahead forecasts, we find that the model is able to capture the empirical characteristics of Nord Pool spot prices. (author)
α-Decomposition for estimating parameters in common cause failure modeling based on causal inference
International Nuclear Information System (INIS)
Zheng, Xiaoyu; Yamaguchi, Akira; Takata, Takashi
2013-01-01
The traditional α-factor model has focused on the occurrence frequencies of common cause failure (CCF) events; global α-factors in the model are defined as fractions of failure probability for particular groups of components. However, CCF parameter estimation carries unknown uncertainties because of the scarcity of available failure data. Joint distributions of CCF parameters are in fact determined by a set of possible causes, characterized by their CCF-triggering abilities and occurrence frequencies. In the present paper, the process of α-decomposition (the Kelly-CCF method) is developed to learn about the sources of uncertainty in CCF parameter estimation and to evaluate the CCF risk significance of different causes, expressed as decomposed α-factors. First, a hybrid Bayesian network is adopted to reveal the relationship between potential causes and failures. Second, because the potential causes differ in occurrence frequency and in their ability to trigger dependent or independent failures, a regression model is provided and proved by conditional probability: global α-factors are expressed through explanatory variables (the causes' occurrence frequencies) and parameters (the decomposed α-factors). Finally, an example illustrates the hierarchical Bayesian inference for the α-decomposition process. This study shows that the α-decomposition method can integrate failure information from the cause, component and system levels, can parameterize the CCF risk significance of possible causes, and can update the probability distributions of global α-factors. It also provides a reliable way to evaluate uncertainty sources and reduce uncertainty in probabilistic risk assessment. Building databases that include CCF parameters and the corresponding occurrence frequencies of causes for each targeted system is recommended.
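The global α-factors that the decomposition starts from have a simple empirical definition, sketched below: α_k is the fraction of CCF events in which exactly k of the m components in a common cause group failed (the event data in the usage note are invented for illustration).

```python
from collections import Counter

def alpha_factors(event_sizes, group_size):
    """Empirical global alpha-factors for a common cause group of m
    components: event_sizes lists, per failure event, how many components
    failed together; alpha_k is the fraction of events of multiplicity k."""
    counts = Counter(event_sizes)
    total = sum(counts.values())
    return [counts.get(k, 0) / total for k in range(1, group_size + 1)]
```

The α-decomposition then splits each of these fractions across the underlying causes according to their occurrence frequencies and triggering abilities.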
Aswath Damodaran
1999-01-01
Over the last three decades, the capital asset pricing model has occupied a central and often controversial place in most corporate finance analysts’ tool chests. The model requires three inputs to compute expected returns – a riskfree rate, a beta for an asset and an expected risk premium for the market portfolio (over and above the riskfree rate). Betas are estimated, by most practitioners, by regressing returns on an asset against a stock index, with the slope of the regression being the b...
Wahid, Abdul; Khan, Dost Muhammad; Hussain, Ijaz
2017-01-01
High dimensional data are commonly encountered in various scientific fields and pose great challenges to modern statistical analysis. To address this issue, different penalized regression procedures have been introduced in the literature, but these methods cannot cope with outliers and leverage points in heavy-tailed high dimensional data. For this purpose, a new Robust Adaptive Lasso (RAL) method is proposed, based on a Pearson-residual weighting scheme. The weight function determines the compatibility of each observation with the assumed model and downweights observations that are inconsistent with it. The RAL estimator can correctly select the covariates with non-zero coefficients and simultaneously estimate the parameters, not only in the presence of influential observations but also under high multicollinearity. We also discuss the model selection oracle property and the asymptotic normality of the RAL. Simulation findings and real data examples demonstrate the good performance of the proposed penalized regression approach.
Caruso, Geoffrey; Cavailhès, Jean; Peeters, Dominique; Thomas, Isabelle; Frankhauser, Pierre; Vuidel, Gilles
2015-12-01
This paper describes a dataset of 6284 land transactions prices and plot surfaces in 3 medium-sized cities in France (Besançon, Dijon and Brest). The dataset includes road accessibility as obtained from a minimization algorithm, and the amount of green space available to households in the neighborhood of the transactions, as evaluated from a land cover dataset. Further to the data presentation, the paper describes how these variables can be used to estimate the non-observable parameters of a residential choice function explicitly derived from a microeconomic model. The estimates are used by Caruso et al. (2015) to run a calibrated microeconomic urban growth simulation model where households are assumed to trade-off accessibility and local green space amenities.
International Nuclear Information System (INIS)
Shultis, J.K.; Buranapan, W.; Eckhoff, N.D.
1981-12-01
Of considerable importance in the safety analysis of nuclear power plants are methods for estimating the probability of failure on demand, p, of a plant component that is normally inactive and that may fail when activated or stressed. Properties of five methods for estimating, from failure-on-demand data, the parameters of the beta prior distribution in a compound beta-binomial probability model are examined. Simulated failure data generated from a known beta-binomial marginal distribution are used to estimate the beta parameters by (1) matching the moments of the prior distribution to those of the data, (2) maximum likelihood based on the prior distribution, (3) a weighted marginal matching-moments method, (4) an unweighted marginal matching-moments method, and (5) maximum likelihood based on the marginal distribution. For small sample sizes (N ≤ 10) with data typical of low-failure-probability components, the simple prior matching-moments method is often superior (e.g. smallest bias and mean squared error), while for larger sample sizes the marginal maximum likelihood estimators appear to be best.
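Method (1), the prior matching-moments estimator, can be sketched as follows: the empirical mean and variance of the per-component failure fractions k_i/n_i are equated with the mean and variance of a Beta(α, β) prior and solved for α and β. (This is one common reading of the method; the paper's exact weighting may differ.)

```python
def beta_prior_matching_moments(failures, demands):
    """Matching-moments estimate of a Beta(alpha, beta) failure-on-demand
    prior from per-component failure counts k_i out of n_i demands.
    Uses mean m and sample variance v of the fractions p_i = k_i/n_i:
    alpha = m*c, beta = (1-m)*c with c = m*(1-m)/v - 1."""
    p = [k / n for k, n in zip(failures, demands)]
    m = sum(p) / len(p)
    v = sum((x - m) ** 2 for x in p) / (len(p) - 1)
    c = m * (1.0 - m) / v - 1.0
    return m * c, (1.0 - m) * c
```

By construction the fitted prior's mean α/(α+β) equals the empirical mean failure fraction, which makes the method attractive for the small samples where the study found it performed best.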
Thai, Hoai-Thu; Mentré, France; Holford, Nicholas H G; Veyrat-Follet, Christine; Comets, Emmanuelle
2013-01-01
A version of the nonparametric bootstrap, which resamples entire subjects from the original data, called the case bootstrap, has been increasingly used for estimating uncertainty of parameters in mixed-effects models. It is usually applied to obtain more robust estimates of the parameters and more realistic confidence intervals (CIs). Alternative bootstrap methods, such as the residual bootstrap and the parametric bootstrap, which resample both random effects and residuals, have been proposed to better take into account the hierarchical structure of multi-level and longitudinal data. However, few studies have compared these approaches. In this study, we used simulation to evaluate bootstrap methods proposed for linear mixed-effects models. We also compared the results obtained by maximum likelihood (ML) and restricted maximum likelihood (REML). Our simulation studies demonstrated the good performance of the case bootstrap as well as the bootstraps of both random effects and residuals. On the other hand, the bootstrap methods that resample only the residuals and the bootstraps combining case and residuals performed poorly. REML and ML provided similar bootstrap estimates of uncertainty, but there was slightly more bias and poorer coverage for variance parameters with ML in the sparse design. We applied the proposed methods to a real dataset from a study investigating the natural evolution of Parkinson's disease and were able to confirm that the methods provide plausible estimates of uncertainty. Given that most real-life datasets tend to exhibit heterogeneity in sampling schedules, the residual bootstraps would be expected to perform better than the case bootstrap. Copyright © 2013 John Wiley & Sons, Ltd.
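A minimal sketch of the case bootstrap on toy longitudinal data: the data, seed, and grand-mean "fixed effect" below are illustrative stand-ins; the paper's estimators are full mixed-effects fits. The key point is that whole subjects are resampled so the hierarchy is preserved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy longitudinal data: 30 subjects x 5 visits, random intercepts
n_subj, n_obs = 30, 5
b = rng.normal(0.0, 1.0, n_subj)                     # subject random effects
y = 10.0 + b[:, None] + rng.normal(0.0, 0.5, (n_subj, n_obs))

# Case bootstrap: resample whole subjects (rows) with replacement so
# each subject's observations stay together, respecting the hierarchy
boots = []
for _ in range(1000):
    idx = rng.integers(0, n_subj, n_subj)
    boots.append(y[idx].mean())                      # refit on resampled cases
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"bootstrap 95% CI for the intercept: [{lo:.2f}, {hi:.2f}]")
```

Replacing `y[idx].mean()` with a proper mixed-model fit gives bootstrap CIs for any parameter of the model.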
Directory of Open Access Journals (Sweden)
W. Castaings
2009-04-01
Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (a response function to be analysed or a cost function to be optimised) with respect to model inputs.
In this contribution, the potential of variational methods for distributed catchment-scale hydrology is demonstrated. A distributed flash flood model, coupling kinematic wave overland flow and Green-Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.
It is shown that forward and adjoint sensitivity analysis provide local but extensive insight into the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest computational effort (~6 times the computing time of a single model run), and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.
For the estimation of model parameters, adjoint-based derivatives were found to be exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently of the optimization's initial condition when the very common dimension-reduction strategy (i.e. scalar multipliers) is adopted.
Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with only a few orthogonal directions. A parametrization based on the SVD's leading singular vectors was found to be very promising, but it should be combined with another regularization strategy in order to prevent overfitting.
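The SVD-based reduction idea can be illustrated generically: for a sensitivity (Jacobian) matrix with a few dominant directions, the leading right singular vectors span a low-dimensional orthogonal parametrization. The matrix below is a toy rank-3 construction, not the flash-flood model's actual Jacobian:

```python
import numpy as np

rng = np.random.default_rng(4)
J = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 400))  # effective rank 3

U, s, Vt = np.linalg.svd(J, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
k = int(np.searchsorted(energy, 0.99)) + 1   # directions for 99% sensitivity
print(k, "orthogonal directions capture 99% of the sensitivity")
# Vt[:k] spans the reduced parametrization of the 400-dim parameter space
```

Optimizing only the `k` coefficients along `Vt[:k]` instead of all 400 parameters is the dimension-reduction step the abstract alludes to.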
Estimations of parameters in Pareto reliability model in the presence of masked data
International Nuclear Information System (INIS)
Sarhan, Ammar M.
2003-01-01
Estimation of the parameters of the individual lifetime distributions of the components of a series system is considered in this paper, based on masked system life-test data. We consider a series system of two independent components, each having a Pareto-distributed lifetime. The maximum likelihood and Bayes estimators for the parameters, and for the reliability of the system's components at a specific time, are obtained. Symmetrical triangular prior distributions are assumed for the unknown parameters in obtaining their Bayes estimators. Large simulation studies are performed in order to: (i) explain how one can utilize the theoretical results obtained; (ii) compare the maximum likelihood and Bayes estimates of the underlying parameters; and (iii) study the influence of the masking level and the sample size on the accuracy of the estimates obtained.
Liu, Y.; Pau, G. S. H.; Finsterle, S.
2015-12-01
Parameter inversion involves inferring model parameter values from sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources restrict the complexity of the hydrological model that can be used in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that need to be run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with only approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), whose coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace the hydrological model by a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure.
Predictive models for estimating visceral fat: The contribution from anthropometric parameters.
Pinho, Claudia Porto Sabino; Diniz, Alcides da Silva; de Arruda, Ilma Kruze Grande; Leite, Ana Paula Dornelas Leão; Petribú, Marina de Moraes Vasconcelos; Rodrigues, Isa Galvão
2017-01-01
Excessive adipose visceral tissue (AVT) represents an independent risk factor for cardiometabolic alterations. The search continues for a highly valid marker for estimating visceral adiposity that is a simple, low-cost tool able to screen individuals who are at high risk of being viscerally obese. The aim of this study was to develop a predictive model for estimating AVT volume using anthropometric parameters. A cross-sectional study involving overweight individuals whose AVT was evaluated (using computed tomography, CT), along with the following anthropometric parameters: body mass index (BMI), abdominal circumference (AC), waist-to-hip ratio (WHpR), waist-to-height ratio (WHtR), sagittal diameter (SD), conicity index (CI), neck circumference (NC), neck-to-thigh ratio (NTR), waist-to-thigh ratio (WTR), and body adiposity index (BAI). 109 individuals with an average age of 50.3±12.2 years were evaluated. The predictive equation developed to estimate AVT in men was AVT = -1647.75 + 2.43(AC) + 594.74(WHpR) + 883.40(CI) (adjusted R2: 64.1%). For women, the model chosen was: AVT = -634.73 + 1.49(Age) + 8.34(SD) + 291.51(CI) + 6.92(NC) (adjusted R2: 40.4%). The predictive ability of the equations developed, relative to the AVT volume determined by CT, was 66.9% for males and 46.2% for females (p<0.001). A quick and precise AVT estimate, especially for men, can be obtained using only AC, WHpR, and CI for men, and age, SD, CI, and NC for women. These equations can be used as a clinical and epidemiological tool for overweight individuals.
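The fitted equations reported above can be implemented directly. The coefficients come from the abstract; the units are assumptions (centimeters for circumferences and diameters, years for age), as are the example input values:

```python
def avt_men(ac, whpr, ci):
    """Men: AVT = -1647.75 + 2.43*AC + 594.74*WHpR + 883.40*CI
    (AC assumed in cm; WHpR and CI are dimensionless ratios)."""
    return -1647.75 + 2.43 * ac + 594.74 * whpr + 883.40 * ci

def avt_women(age, sd, ci, nc):
    """Women: AVT = -634.73 + 1.49*Age + 8.34*SD + 291.51*CI + 6.92*NC
    (age in years; SD and NC assumed in cm)."""
    return -634.73 + 1.49 * age + 8.34 * sd + 291.51 * ci + 6.92 * nc

# Hypothetical inputs for illustration only
print(round(avt_men(100, 0.95, 1.30), 2))    # ≈ 308.67
print(round(avt_women(50, 22, 1.25, 38), 2))  # ≈ 250.6
```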
Modeling and Parameter Estimation of Spacecraft Fuel Slosh with Diaphragms Using Pendulum Analogs
Chatman, Yadira; Gangadharan, Sathya; Schlee, Keith; Ristow, James; Suderman, James; Walker, Charles; Hubert, Carl
2007-01-01
Prediction and control of liquid slosh in moving containers is an important consideration in the design of spacecraft and launch vehicle control systems. Even with modern computing systems, CFD-type simulations are not fast enough to allow for large-scale Monte Carlo analyses of spacecraft and launch vehicle dynamic behavior with slosh included. It is still desirable to use some type of simplified mechanical analog for the slosh to shorten computation time. Analytic determination of the slosh analog parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices such as elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks, these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the hand-derived equations of motion for the mechanical analog are evaluated and their results compared with the experimental results. This paper will describe efforts by the university component of a team comprising NASA's Launch Services Program, Embry-Riddle Aeronautical University, Southwest Research Institute and Hubert Astronautics to improve the accuracy and efficiency of modeling techniques used to predict these types of motions. Of particular interest is the effect of diaphragms and bladders on the slosh dynamics and how best to model these devices. The previous research was an effort to automate the process of slosh model parameter identification using a MATLAB/SimMechanics-based computer simulation. These results are the first step in applying the same computer estimation to a full-size tank and vehicle propulsion system. The introduction of diaphragms to this experimental set-up will aid in a better and more complete prediction of fuel slosh characteristics and behavior. Automating the
Parameter estimation in plasmonic QED
Jahromi, H. Rangani
2018-03-01
We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease of the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), and therefore the vanishing of the QFI, which measures the precision of the estimation, is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. The one-qubit estimation is also analysed in detail. In particular, we show that using a two-qubit probe, at any arbitrary time, considerably enhances the precision of estimation in comparison with one-qubit estimation.
Channel Parameter Estimation for Scatter Cluster Model Using Modified MUSIC Algorithm
Directory of Open Access Journals (Sweden)
Jinsheng Yang
2012-01-01
Recently, scatter cluster models that precisely evaluate the performance of wireless communication systems have been proposed in the literature. However, the conventional SAGE algorithm does not work for these scatter cluster-based models because it performs poorly when the transmit signals are highly correlated. In this paper, we estimate the time of arrival (TOA), the direction of arrival (DOA), and the Doppler frequency for a scatter cluster model using a modified multiple signal classification (MUSIC) algorithm. Using the space-time characteristics of the multiray channel, the proposed algorithm combines temporal filtering techniques and spatial smoothing techniques to isolate and estimate the incoming rays. The simulation results indicate that the proposed algorithm has lower complexity and is less time-consuming in dense multipath environments than the SAGE algorithm. Furthermore, the estimation performance improves with the number of receive-array elements and the sample length. Thus, the channel parameter estimation problem for the scatter cluster model can be effectively addressed with the proposed modified MUSIC algorithm.
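For readers unfamiliar with MUSIC, a textbook single-source DOA sketch (uniform linear array, half-wavelength spacing, all values synthetic) conveys the subspace idea; this is not the paper's modified algorithm with temporal filtering and spatial smoothing:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, true_deg = 8, 200, 20.0     # array elements, snapshots, true DOA

def steering(theta_deg, m=M):
    """ULA steering vector at half-wavelength element spacing."""
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(m))

s = rng.normal(size=N) + 1j * rng.normal(size=N)          # source signal
x = np.outer(steering(true_deg), s)                       # array snapshots
x += 0.1 * (rng.normal(size=x.shape) + 1j * rng.normal(size=x.shape))

R = x @ x.conj().T / N                                    # sample covariance
_, vecs = np.linalg.eigh(R)                               # ascending eigenvalues
En = vecs[:, :-1]                                         # noise subspace (1 source)

grid = np.arange(-90.0, 90.5, 0.5)
spec = [1 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
est = grid[int(np.argmax(spec))]
print("estimated DOA:", est, "deg")
```

The MUSIC pseudospectrum peaks where the steering vector is orthogonal to the noise subspace, which is what makes the estimator sharp even with a modest array.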
Parameter estimation in a simple stochastic differential equation for phytoplankton modelling
DEFF Research Database (Denmark)
Møller, Jan Kloppenborg; Madsen, Henrik; Carstensen, Jacob
2011-01-01
The use of stochastic differential equations (SDEs) for simulation of aquatic ecosystems has attracted increasing attention in recent years. The SDE setting also provides the opportunity for statistical estimation of ecosystem parameters. We present an estimation procedure, based on Kalman filtering and likelihood estimation, which has proven useful in other fields of application. The estimation procedure is presented and the development from ordinary differential equations (ODEs) to SDEs is discussed with emphasis on autocorrelated residuals, commonly encountered with ODEs. The estimation...
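The Kalman-filtering likelihood idea can be sketched for the simplest linear SDE, a scalar Ornstein-Uhlenbeck process observed with Gaussian noise. The data and parameter values are toy assumptions, not the phytoplankton model:

```python
import numpy as np

# Exact discrete-time Kalman-filter log-likelihood for the scalar SDE
# dX = -theta*X dt + sigma dW, observed with Gaussian noise
def ou_loglik(y, dt, theta, sigma, obs_var):
    a = np.exp(-theta * dt)                      # exact transition coeff.
    q = sigma**2 * (1 - a**2) / (2 * theta)      # exact process noise var.
    m, P, ll = 0.0, sigma**2 / (2 * theta), 0.0  # stationary initialization
    for yt in y:
        m, P = a * m, a * a * P + q              # predict
        S = P + obs_var                          # innovation variance
        ll -= 0.5 * (np.log(2 * np.pi * S) + (yt - m)**2 / S)
        K = P / S                                # Kalman gain
        m, P = m + K * (yt - m), (1 - K) * P     # update
    return ll

# Simulate with theta = 1 and compare the likelihood at true vs wrong theta
rng = np.random.default_rng(2)
dt, a, q = 0.1, np.exp(-0.1), (1 - np.exp(-0.2)) / 2
x = np.zeros(2000)
for i in range(1, 2000):
    x[i] = a * x[i - 1] + np.sqrt(q) * rng.normal()
y = x + 0.1 * rng.normal(size=2000)
print(ou_loglik(y, dt, 1.0, 1.0, 0.01) > ou_loglik(y, dt, 20.0, 1.0, 0.01))
```

Maximizing `ou_loglik` over the parameters is exactly the filtering-based likelihood estimation the abstract describes, here in its simplest scalar form.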
Nandan, Shyam; Ouillon, Guy; Wiemer, Stefan; Sornette, Didier
2017-07-01
The Epidemic Type Aftershock Sequence (ETAS) model is widely employed to model the spatiotemporal distribution of earthquakes, generally using spatially invariant parameters. We propose an efficient method for the estimation of spatially varying parameters, using the expectation maximization (EM) algorithm and spatial Voronoi tessellation ensembles. We use the Bayesian information criterion (BIC) to rank inverted models given their likelihood and complexity and select the best models to finally compute an ensemble model at any location. Using a synthetic catalog, we also check that the proposed method correctly inverts the known parameters. We apply the proposed method to earthquakes included in the Advanced National Seismic System catalog that occurred within the time period 1981-2015 in a spatial polygon around California. The results indicate significant spatial variation of the ETAS parameters. We find that the efficiency of earthquakes to trigger future ones (quantified by the branching ratio) positively correlates with surface heat flow. In contrast, the rate of earthquakes triggered by far-field tectonic loading or background seismicity rate shows no such correlation, suggesting the relevance of triggering possibly through fluid-induced activation. Furthermore, the branching ratio and background seismicity rate are found to be uncorrelated with hypocentral depths, indicating that the seismic coupling remains invariant of hypocentral depths in the study region. Additionally, triggering seems to be mostly dominated by small earthquakes. Consequently, the static stress change studies should not only focus on the Coulomb stress changes caused by specific moderate to large earthquakes but also account for the secondary static stress changes caused by smaller earthquakes.
Estimation of temporal gait parameters using Bayesian models on acceleration signals.
López-Nava, I H; Muñoz-Meléndez, A; Pérez Sanpablo, A I; Alessi Montero, A; Quiñones Urióstegui, I; Núñez Carrera, L
2016-01-01
The purpose of this study is to develop a system capable of calculating temporal gait parameters using two low-cost wireless accelerometers and artificial intelligence-based techniques, as part of a larger research project on human gait analysis. Ten healthy subjects of different ages participated in this study and performed controlled walking tests. Two wireless accelerometers were placed on their ankles. Raw acceleration signals were processed in order to obtain gait patterns from characteristic peaks related to steps. A Bayesian model was implemented to classify the characteristic peaks into steps or non-steps. The acceleration signals were segmented based on gait events, such as heel strike and toe-off, of actual steps. Temporal gait parameters, such as cadence, ambulation time, step time, gait cycle time, stance and swing phase time, and single and double support time, were estimated from the segmented acceleration signals. Gait datasets were divided into two age groups to test the Bayesian models' classification of the characteristic peaks. The mean error in the calculated temporal gait parameters was 4.6%. Bayesian models are useful techniques that can be applied to the classification of gait data from subjects of different ages, with promising results.
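A minimal Gaussian Bayes classifier separating step peaks from spurious peaks by amplitude illustrates the approach; the amplitudes are synthetic and the single feature is an assumption, since the study's actual features and priors are not given in the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)
steps = rng.normal(2.0, 0.3, 200)    # peak amplitudes (a.u.) of true steps
spur = rng.normal(0.8, 0.3, 200)     # amplitudes of spurious peaks

def fit(x):
    return x.mean(), x.var()         # Gaussian class-conditional parameters

def log_gauss(x, mv):
    m, v = mv
    return -0.5 * (np.log(2 * np.pi * v) + (x - m) ** 2 / v)

par_step, par_spur = fit(steps), fit(spur)

def is_step(amp, prior_step=0.5):
    """Bayes rule: label a detected peak a step if its posterior
    beats that of being a spurious peak."""
    return (log_gauss(amp, par_step) + np.log(prior_step)
            > log_gauss(amp, par_spur) + np.log(1.0 - prior_step))

print(bool(is_step(1.9)), bool(is_step(0.7)))   # → True False
```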
Testing variational estimation of process parameters and initial conditions of an earth system model
Directory of Open Access Journals (Sweden)
Simon Blessing
2014-03-01
We present a variational assimilation system around a coarse-resolution Earth System Model (ESM) and apply it to estimating initial conditions and parameters of the model. The system is based on derivative information that is efficiently provided by the ESM's adjoint, which has been generated through automatic differentiation of the model's source code. In our variational approach, the length of the feasible assimilation window is limited by the size of the domain in control space over which the approximation by the derivative is valid. This validity domain is reduced by non-smooth process representations. We show that in this respect the ocean component is less critical than the atmospheric component. We demonstrate how the feasible assimilation window can be extended to several weeks by modifying the implementation of specific process representations and by switching off processes such as precipitation.
PARAMETER ESTIMATION IN NON-HOMOGENEOUS BOOLEAN MODELS: AN APPLICATION TO PLANT DEFENSE RESPONSE
Directory of Open Access Journals (Sweden)
Maria Angeles Gallego
2014-11-01
Many medical and biological problems require extracting information from microscopical images. Boolean models have been extensively used to analyze binary images of random clumps in many scientific fields. In this paper, a particular type of Boolean model with an underlying non-stationary point process is considered. The intensity of the underlying point process is formulated as a fixed function of the distance to a region of interest. A method to estimate the parameters of this Boolean model is introduced, and its performance is checked in two different settings. First, a comparative study with other existing methods is carried out using simulated data. Second, the method is applied to the longleaf data set, a popular point-process data set included in the R package spatstat. The results show that the new method provides estimates as accurate as those obtained with more complex methods developed for the general case. Finally, to illustrate the application of this model and method, a particular type of phytopathological image is analyzed. These images show callose depositions in leaves of Arabidopsis plants. The analysis of callose depositions is widely used in the phytopathological literature to quantify the activity of plant immunity.
Experimental parameter estimation of a visuo-vestibular interaction model in humans.
Laurens, Jean; Valko, Yulia; Straumann, Dominik
2011-01-01
Visuo-vestibular interactions in monkeys can be accurately modelled using the classical Raphan and Cohen model. This model is composed of direct vestibular and visual contributions to the vestibulo-ocular reflex (VOR) and of a velocity storage mechanism. We applied this model to humans and estimated its parameters in a series of experiments: yaw rotations at moderate (60°/s) and high velocities (240°/s), suppression of the VOR by a head-fixed wide-field visual stimulus, and optokinetic stimulation with measurements of optokinetic nystagmus (OKN) and optokinetic afternystagmus (OKAN). We found the velocity storage time constant to be 13 s, which decreased to 8 s during visual suppression. OKAN initial velocity was 12% of the OKN stimulus velocity. The gain of the direct visual pathway was 0.75 during both visual suppression and OKN; however, the visual input to the velocity storage was higher during visual suppression than during OKN. We could not estimate the time constant of the semicircular canals accurately. Finally, we inferred from high-velocity rotations that the velocity storage saturates around 20-30°/s. Our results indicate that the dynamics of visuo-vestibular interactions in humans are similar to those in monkeys. The central integration of visual cues, however, is weaker in humans.
On Drift Parameter Estimation in Models with Fractional Brownian Motion by Discrete Observations
Directory of Open Access Journals (Sweden)
Yuliya Mishura
2014-06-01
We study the problem of estimating an unknown drift parameter in a stochastic differential equation driven by fractional Brownian motion. We represent the likelihood ratio as a function of the observable process. The form of this representation is in general rather complicated. However, in the simplest case it can be simplified, and we can discretize it to establish the a.s. convergence of the discretized version of the maximum likelihood estimator to the true value of the parameter. We also investigate a non-standard estimator of the drift parameter, further showing its strong consistency.
Marcoulides, Katerina M.
2018-01-01
This study examined the use of Bayesian analysis methods for the estimation of item parameters in a two-parameter logistic item response theory model. Using simulated data under various design conditions with both informative and non-informative priors, the parameter recovery of Bayesian analysis methods were examined. Overall results showed that…
Hydrologic Modeling and Parameter Estimation under Data Scarcity for Java Island, Indonesia
Yanto, M.; Livneh, B.; Rajagopalan, B.; Kasprzyk, J. R.
2015-12-01
The Indonesian island of Java is routinely subjected to intense flooding, drought and related natural hazards, resulting in severe social and economic impacts. Although an improved understanding of the island's hydrology would help mitigate these risks, data scarcity makes the modeling challenging. To this end, we developed a hydrological representation of Java using the Variable Infiltration Capacity (VIC) model to simulate the hydrologic processes of several watersheds across the island. We measured the model performance using the Nash-Sutcliffe Efficiency (NSE) at a monthly time step. Data scarcity and quality issues for precipitation and streamflow warranted the application of a quality-control procedure to ensure consistency among watersheds, leaving 7 watersheds for analysis. To optimize the model performance, the calibration parameters were estimated using the Borg Multi-Objective Evolutionary Algorithm (Borg MOEA), which offers efficient searching of the parameter space, adaptive population sizing, and a facility for escaping local optima. The results show that calibration performance is best (NSE ~ 0.6 - 0.9) in the eastern part of the domain and moderate (NSE ~ 0.3 - 0.5) in the western part of the island. The validation results are lower: NSE ~ 0.1 - 0.5 and NSE ~ 0.1 - 0.4 in the east and west, respectively. We surmise that the presence of outliers and stark differences in climate between the calibration and validation periods in the western watersheds are responsible for the low NSE in this region. In addition, we found that approximately 70% of the total error was contributed by less than 20% of the data. The spatial variability of model performance suggests the influence of both topographical and hydroclimatic controls on the hydrological processes. Most watersheds in the eastern part perform better in the wet season, and vice versa for the western part. This modeling framework is one of the first attempts at comprehensively simulating the hydrology in this maritime, tropical
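The Nash-Sutcliffe Efficiency used to score the simulations is simple to compute; a minimal sketch with made-up flow values:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency: 1 = perfect fit, 0 = no better than
    predicting the observed mean, < 0 = worse than the mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

print(nse([1.1, 2.0, 2.9], [1.0, 2.0, 3.0]))  # ≈ 0.99
```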
Directory of Open Access Journals (Sweden)
Shifei Yuan
2015-07-01
Accurate estimation of model parameters and state of charge (SoC) is crucial for the lithium-ion battery management system (BMS). In this paper, the stability of the model parameter and SoC estimation under measurement uncertainty is evaluated by three different factors: (i) sampling periods of 1/0.5/0.1 s; (ii) current sensor precisions of ±5/±50/±500 mA; and (iii) voltage sensor precisions of ±1/±2.5/±5 mV. Firstly, a numerical model stability analysis and a parametric sensitivity analysis for the battery model parameters are conducted for sampling frequencies of 1-50 Hz. A perturbation analysis of the effect of current/voltage measurement uncertainty on model parameter variation is performed theoretically. Secondly, the impact of the three factors on the model parameter and SoC estimation is evaluated with the federal urban driving sequence (FUDS) profile. The bias-correction recursive least squares (CRLS) and adaptive extended Kalman filter (AEKF) algorithms were adopted to estimate the model parameters and SoC jointly. Finally, the simulation results were compared and some insightful findings were drawn. For the given battery model and parameter estimation algorithm, the sampling period and the current/voltage sampling accuracy have a non-negligible effect on the estimated model parameters. This research reveals the influence of measurement uncertainty on model parameter estimation, providing guidelines for selecting a reasonable sampling period and current/voltage sensor sampling precisions in engineering applications.
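A minimal recursive least squares (RLS) sketch conveys the identification step; the first-order model and all values below are illustrative assumptions, not the paper's battery equivalent-circuit model or its bias-corrected CRLS variant:

```python
import numpy as np

rng = np.random.default_rng(5)
a_true, b_true = 0.95, 0.4                 # hypothetical "true" parameters
u = rng.normal(size=500)                   # excitation (e.g. current)
y = np.zeros(500)                          # response (e.g. voltage dynamics)
for k in range(1, 500):
    y[k] = a_true * y[k - 1] + b_true * u[k] + 0.01 * rng.normal()

theta = np.zeros(2)                        # parameter estimate [a, b]
P = 1e3 * np.eye(2)                        # estimate covariance
for k in range(1, 500):
    phi = np.array([y[k - 1], u[k]])       # regressor
    K = P @ phi / (1.0 + phi @ P @ phi)    # gain
    theta = theta + K * (y[k] - phi @ theta)
    P = P - np.outer(K, phi @ P)           # covariance update
print(theta.round(2))                      # ≈ [0.95, 0.40]
```

Because the update is recursive, the same loop runs online sample by sample, which is why the sampling period and sensor precision studied in the paper directly shape the quality of `theta`.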
Xu, D.; Agee, E.; Wang, J.; Ivanov, V. Y.
2017-12-01
The increased frequency and severity of droughts in the Amazon region have emphasized the potential vulnerability of the rainforests to heat- and drought-induced stresses, highlighting the need to reduce the uncertainty in estimates of regional evapotranspiration (ET) and to quantify the resilience of the forest. Ground-based observations for estimating ET are resource intensive, making methods based on remotely sensed observations an attractive alternative. Several methodologies have been developed to estimate ET from satellite data, but challenges remain in model parameterization and limited satellite coverage, reducing their utility for monitoring biodiverse regions. In this work, we apply a novel surface energy partition method (Maximum Entropy Production; MEP), based on Bayesian probability theory and nonequilibrium thermodynamics, to derive ET time series from satellite data for the Amazon basin. For a large, sparsely monitored region such as the Amazon, this approach has the advantage of using only single-level measurements of net radiation, temperature, and specific humidity. Furthermore, it is not sensitive to the uncertainty of the input data and model parameters. In this first application of MEP theory to a tropical forest biome, we assess its performance at various spatiotemporal scales against diverse field data sets. Specifically, the objective of this work is to test the method using eddy flux data for several locations across Amazonia at sub-daily, monthly, and annual scales and to compare the new estimates with those from traditional methods. Analyses of the derived ET time series will contribute to reducing the current knowledge gap surrounding the much-debated response of the Amazon Basin to droughts and offer a template for monitoring long-term changes in the global hydrologic cycle due to anthropogenic and natural causes.
A Note on Parameter Estimation for Lazarsfeld's Latent Class Model Using the EM Algorithm.
Everitt, B. S.
1984-01-01
Latent class analysis is formulated as a problem of estimating parameters in a finite mixture distribution. The EM algorithm is used to find the maximum likelihood estimates, and the case of categorical variables with more than two categories is considered. (Author)
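A minimal EM sketch for a two-class latent class model with binary items (synthetic data; the class weight and per-class item-response probabilities are the finite-mixture parameters being recovered):

```python
import numpy as np

rng = np.random.default_rng(6)
n, J = 1000, 4
z = rng.random(n) < 0.4                               # latent class labels
X = (rng.random((n, J)) < np.where(z[:, None], 0.8, 0.2)).astype(float)

w = 0.5                                               # class-0 weight guess
theta = np.array([[0.6] * J, [0.3] * J])              # item-prob guesses
for _ in range(200):
    # E-step: posterior probability of class 0 for each respondent
    lik = [np.prod(theta[c] ** X * (1 - theta[c]) ** (1 - X), axis=1)
           for c in (0, 1)]
    post = w * lik[0] / (w * lik[0] + (1 - w) * lik[1])
    # M-step: closed-form updates of the mixture parameters
    w = post.mean()
    theta = np.array([post @ X / post.sum(),
                      (1 - post) @ X / (1 - post).sum()])
print(round(w, 2), theta.round(2))
```

With more than two categories per item, as considered in the paper, the Bernoulli terms in the E-step simply become multinomial probabilities; the structure of the algorithm is unchanged.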
DEFF Research Database (Denmark)
Ditlevsen, Susanne; Samson, Adeline
2016-01-01
Dynamics of the membrane potential in a single neuron can be studied by estimating biophysical parameters from intracellular recordings. Diffusion processes, given as continuous solutions to stochastic differential equations, are widely applied as models for the neuronal membrane potential evolut...
Fast Wideband Solutions Obtained Using Model Based Parameter Estimation with Method of Moments
Directory of Open Access Journals (Sweden)
F. Kaburcuk
2017-10-01
Integration of the Model Based Parameter Estimation (MBPE) technique into the Method of Moments (MOM) provides fast solutions over a wide frequency band for radiation and scattering problems. The MBPE technique uses a Padé rational function to approximate the solution over a wide frequency band from a solution at a fixed frequency. In this paper, the MBPE technique with MOM is applied to a thin-wire antenna. The solutions obtained by repeated simulations of MOM agree very well with the solutions obtained by the MBPE technique in a single simulation. The MBPE technique therefore provides a remarkable saving in computation time relative to MOM. Computed results show that solutions over a wide frequency band of interest are achieved in a single simulation.
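The core MBPE idea, fitting a Padé rational function to a few frequency samples and then evaluating it cheaply across the band, can be sketched on a toy response; a closed-form function stands in for the MoM solution, and the fitting route (linearized least squares) is an assumption rather than the paper's derivative-based construction:

```python
import numpy as np

def pade_fit(x, y, n_num, n_den):
    """Linearized least-squares Pade fit: solve P(x) - y*Q(x) = y with
    Q's constant term fixed to 1, then return the rational model."""
    A = np.hstack([x[:, None] ** np.arange(n_num + 1),
                   -y[:, None] * x[:, None] ** np.arange(1, n_den + 1)])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    p, q = c[:n_num + 1], np.concatenate(([1.0], c[n_num + 1:]))
    return lambda t: np.polyval(p[::-1], t) / np.polyval(q[::-1], t)

x = np.linspace(0.0, 2.0, 7)       # sparse "frequency" samples
y = 1.0 / (1.0 + x ** 2)           # toy response with a known closed form
model = pade_fit(x, y, 0, 2)       # numerator order 0, denominator order 2
print(abs(model(1.5) - 1.0 / (1.0 + 1.5 ** 2)))   # tiny interpolation error
```

Once fitted, `model` can be evaluated at any in-band frequency at negligible cost, which is the source of the computation-time saving the abstract reports.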
Estimation of genetic parameters related to eggshell strength using random regression models.
Guo, J; Ma, M; Qu, L; Shen, M; Dou, T; Wang, K
2015-01-01
This study examined the changes in eggshell strength, and the genetic parameters related to this trait, throughout a hen's laying life using random regression. The data were collected from a crossbred population between 2011 and 2014, in which eggshell strength was measured repeatedly for 2260 hens. Using random regression models (RRMs), several Legendre polynomials were employed to estimate the fixed, direct genetic and permanent environment effects. The residual effects were treated as independently distributed with heterogeneous variance for each test week. The direct genetic variance was modelled with second-order Legendre polynomials and the permanent environment with third-order Legendre polynomials. The heritability of eggshell strength ranged from 0.26 to 0.43, the repeatability ranged between 0.47 and 0.69, and the estimated genetic correlations between test weeks were high (> 0.67). The first eigenvalue of the genetic covariance matrix accounted for about 97% of the sum of all the eigenvalues. The flexibility and statistical power of RRMs suggest that this model could be an effective means to improve eggshell quality and to reduce losses due to cracked eggs in a breeding plan.
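In random regression models the test-week covariate is typically standardized to [-1, 1] and expanded in Legendre polynomials; a sketch of building such design columns (the week range is a hypothetical example, not the study's):

```python
import numpy as np

def legendre_covariables(t, t_min, t_max, order):
    """Map test week t in [t_min, t_max] onto [-1, 1] and return the
    Legendre design columns L0..L_order used as RRM covariables."""
    x = -1.0 + 2.0 * (np.asarray(t, float) - t_min) / (t_max - t_min)
    return np.polynomial.legendre.legvander(x, order)

Z = legendre_covariables([20, 40, 60], 20, 60, 2)  # hypothetical week range
print(Z.round(3))
```

Each hen's genetic and permanent-environment effects are then modelled as random regression coefficients on these columns, which is what lets variances and correlations change smoothly across test weeks.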
Directory of Open Access Journals (Sweden)
Abdul Wahid
Full Text Available High dimensional data are commonly encountered in various scientific fields and pose great challenges to modern statistical analysis. To address this issue different penalized regression procedures have been introduced in the literature, but these methods cannot cope with the problem of outliers and leverage points in heavy tailed high dimensional data. For this purpose, a new Robust Adaptive Lasso (RAL) method is proposed which is based on a Pearson residual weighting scheme. The weight function determines the compatibility of each observation and downweights those that are inconsistent with the assumed model. It is observed that the RAL estimator can correctly select the covariates with non-zero coefficients and can estimate parameters, simultaneously, not only in the presence of influential observations, but also in the presence of high multicollinearity. We also discuss the model selection oracle property and the asymptotic normality of the RAL. Simulation findings and real data examples also demonstrate the better performance of the proposed penalized regression approach.
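The residual-weighting idea can be sketched by alternating a weighted lasso fit (here solved by plain proximal gradient / ISTA) with a Huber-type downweighting of large standardised residuals. The weight function, tuning constants and data below are illustrative stand-ins, not the authors' exact RAL construction:

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def robust_lasso(X, y, lam=0.1, c=2.5, n_outer=5, n_inner=500):
    n, p = X.shape
    beta = np.zeros(p)
    w = np.ones(n)
    for _ in range(n_outer):
        Xw = X * np.sqrt(w)[:, None]
        yw = y * np.sqrt(w)
        L = np.linalg.norm(Xw, 2) ** 2            # Lipschitz constant of the gradient
        for _ in range(n_inner):                  # ISTA on the weighted lasso problem
            grad = Xw.T @ (Xw @ beta - yw)
            beta = soft(beta - grad / L, lam / L)
        r = y - X @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r)))   # robust scale (MAD)
        w = np.minimum(1.0, c * s / (np.abs(r) + 1e-12))   # downweight large residuals
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
beta_true = np.array([3.0, -2.0] + [0.0] * 8)
y = X @ beta_true + 0.1 * rng.standard_normal(200)
y[:5] += 15.0                                    # a few gross outliers
beta_hat = robust_lasso(X, y)
```

After the first pass the outlying observations receive near-zero weights, so the refit recovers the two active coefficients despite the contamination; an ordinary lasso would be pulled by the outliers.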
Parinussa, R.M.; Meesters, A.G.C.A.; Liu, Y.Y.; Dorigo, W.; Wagner, W.; de Jeu, R.A.M.
2011-01-01
A time-efficient solution to estimate the error of satellite surface soil moisture from the land parameter retrieval model is presented. The errors are estimated using an analytical solution for soil moisture retrievals from this radiative-transfer-based model that derives soil moisture from
Inflation and cosmological parameter estimation
Energy Technology Data Exchange (ETDEWEB)
Hamann, J.
2007-05-15
In this work, we focus on two aspects of cosmological data analysis: inference of parameter values and the search for new effects in the inflationary sector. Constraints on cosmological parameters are commonly derived under the assumption of a minimal model. We point out that this procedure systematically underestimates errors and possibly biases estimates, due to overly restrictive assumptions. In a more conservative approach, we analyse cosmological data using a more general eleven-parameter model. We find that regions of the parameter space that were previously thought ruled out are still compatible with the data; the bounds on individual parameters are relaxed by up to a factor of two, compared to the results for the minimal six-parameter model. Moreover, we analyse a class of inflation models, in which the slow roll conditions are briefly violated, due to a step in the potential. We show that the presence of a step generically leads to an oscillating spectrum and perform a fit to CMB and galaxy clustering data. We do not find conclusive evidence for a step in the potential and derive strong bounds on quantities that parameterise the step. (orig.)
Yen, H.; Arabi, M.; Records, R.
2012-12-01
The structural complexity of comprehensive watershed models continues to increase in order to incorporate inputs at finer spatial and temporal resolutions and simulate a larger number of hydrologic and water quality responses. Hence, computational methods for parameter estimation and uncertainty analysis of complex models have gained increasing popularity. This study aims to evaluate the performance and applicability of a range of algorithms from computationally frugal approaches to formal implementations of Bayesian statistics using Markov Chain Monte Carlo (MCMC) techniques. The evaluation procedure hinges on the appraisal of (i) the quality of the final parameter solution in terms of the minimum value of the objective function corresponding to weighted errors; (ii) the algorithmic efficiency in reaching the final solution; (iii) the marginal posterior distributions of model parameters; (iv) the overall identifiability of the model structure; and (v) the effectiveness in drawing samples that can be classified as behavior-giving solutions. The proposed procedure recognizes an important and often neglected issue in watershed modeling: that solutions with minimum objective function values may not necessarily reflect the behavior of the system. The general behavior of a system is often characterized by the analysts according to the goals of the study using various error statistics such as percent bias or the Nash-Sutcliffe efficiency coefficient. Two case studies are carried out to examine the efficiency and effectiveness of four Bayesian approaches including Metropolis-Hastings sampling (MHA), Gibbs sampling (GSA), uniform covering by probabilistic rejection (UCPR), and differential evolution adaptive Metropolis (DREAM); a greedy optimization algorithm dubbed dynamically dimensioned search (DDS); and shuffled complex evolution (SCE-UA), a widely implemented evolutionary heuristic optimization algorithm. The Soil and Water Assessment Tool (SWAT) is used to simulate hydrologic and
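The behavioral screening mentioned above can be sketched with the two error statistics named in the abstract; the thresholds (NSE > 0.5, |PBIAS| < 25%) and the toy series are illustrative choices, not values from the study:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - SSE / variance of observations; 1 is perfect, <= 0 is poor."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    """PBIAS in percent; positive means the simulation overestimates on average."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def is_behavioural(obs, sim, nse_min=0.5, pbias_max=25.0):
    return nash_sutcliffe(obs, sim) > nse_min and abs(percent_bias(obs, sim)) < pbias_max

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
good = obs + 0.1           # small uniform error: behavioural
bad = obs * 2.0            # 100% positive bias: non-behavioural
```

A sampler's draws can then be partitioned into behavioural and non-behavioural sets before any posterior summary, which is exactly the distinction between "minimum objective value" and "reflects the behavior of the system".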
Duan, Q.; Schaake, J.; Andreassian, V.; Franks, S.; Goteti, G.; Gupta, H.V.; Gusev, Y.M.; Habets, F.; Hall, A.; Hay, L.; Hogue, T.; Huang, M.; Leavesley, G.; Liang, X.; Nasonova, O.N.; Noilhan, J.; Oudin, L.; Sorooshian, S.; Wagener, T.; Wood, E.F.
2006-01-01
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrologic models and in land surface parameterization schemes of atmospheric models. The MOPEX science strategy involves three major steps: data preparation, a priori parameter estimation methodology development, and demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrologic basins in the United States (US) and in other countries. This database is being continuously expanded to include more basins in all parts of the world. A number of international MOPEX workshops have been convened to bring together interested hydrologists and land surface modelers from all over the world to exchange knowledge and experience in developing a priori parameter estimation techniques. This paper describes the results from the second and third MOPEX workshops. The specific objective of these workshops is to examine the state of a priori parameter estimation techniques and how they can be potentially improved with observations from well-monitored hydrologic basins. Participants of the second and third MOPEX workshops were provided with data from 12 basins in the southeastern US and were asked to carry out a series of numerical experiments using a priori parameters as well as calibrated parameters developed for their respective hydrologic models. Different modeling groups carried out all the required experiments independently using eight different models, and the results from these models have been assembled for analysis in this paper. This paper presents an overview of the MOPEX experiment and its design. The main experimental results are analyzed. A key finding is that existing a priori parameter estimation procedures are problematic and need improvement. Significant improvement of these
Briseño, Jessica; Herrera, Graciela S.
2010-05-01
Herrera (1998) proposed a method for the optimal design of groundwater quality monitoring networks that involves space and time in a combined form. The method was applied later by Herrera et al. (2001) and by Herrera and Pinder (2005). To get the estimates of the contaminant concentration being analyzed, this method uses a space-time ensemble Kalman filter, based on a stochastic flow and transport model. When the method is applied, it is important that the characteristics of the stochastic model be congruent with field data, but, in general, it is laborious to manually achieve a good match between them. For this reason, the main objective of this work is to extend the space-time ensemble Kalman filter proposed by Herrera to estimate the hydraulic conductivity, together with hydraulic head and contaminant concentration, and to apply it in a synthetic example. The method has three steps: 1) Given the mean and the semivariogram of the natural logarithm of hydraulic conductivity (ln K), random realizations of this parameter are obtained through two alternatives: Gaussian simulation (SGSim) and the Latin hypercube sampling method (LHC). 2) The stochastic model is used to produce hydraulic head (h) and contaminant (C) realizations for each one of the conductivity realizations. With these realizations the means of ln K, h and C are obtained; for h and C, the mean is calculated in space and time, as is the cross-covariance matrix h-ln K-C. The covariance matrix is obtained by averaging products of the ln K, h and C realizations at the estimation points and times, and at the positions and times with data of the analyzed variables. The estimation points are the positions at which estimates of ln K, h or C are gathered. In an analogous way, the estimation times are those at which estimates of any of the three variables are gathered. 3) Finally the ln K, h and C estimates are obtained using the space-time ensemble Kalman filter. The realization mean for each one
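The filter update in step 3 can be sketched with a stochastic ensemble Kalman filter in which the joint state plays the role of the stacked [ln K, h, C] vector and the observation operator picks the measured components. The dimensions, noise levels and prior below are illustrative, not the synthetic aquifer of the study:

```python
import numpy as np

def enkf_update(ens, H, y, obs_var, rng):
    """ens: (n_state, n_ens) ensemble; H: (n_obs, n_state); y: (n_obs,) observations."""
    n_obs, n_ens = H.shape[0], ens.shape[1]
    A = ens - ens.mean(axis=1, keepdims=True)
    P = A @ A.T / (n_ens - 1)                       # sample state covariance
    R = obs_var * np.eye(n_obs)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    # Perturbed-observation variant: each member sees its own noisy copy of y.
    y_pert = y[:, None] + rng.normal(0.0, np.sqrt(obs_var), (n_obs, n_ens))
    return ens + K @ (y_pert - H @ ens)

rng = np.random.default_rng(1)
ens = rng.normal(0.0, 1.0, (3, 500))    # prior ensemble for a 3-component state
H = np.array([[1.0, 0.0, 0.0]])         # only the first component is observed
y = np.array([2.0])
post = enkf_update(ens, H, y, obs_var=0.25, rng=rng)
```

Because the gain is built from the cross-covariances of the ensemble, unobserved components (here, the stand-ins for ln K) are updated through their sampled correlation with the observed one, which is what lets concentration and head data inform the conductivity estimate.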
Modeling the vertical soil organic matter profile using Bayesian parameter estimation
Directory of Open Access Journals (Sweden)
M. C. Braakhekke
2013-01-01
Full Text Available The vertical distribution of soil organic matter (SOM) in the profile may constitute an important factor for soil carbon cycling. However, the formation of the SOM profile is currently poorly understood due to equifinality, caused by the entanglement of several processes: input from roots, mixing due to bioturbation, and organic matter leaching. In this study we quantified the contribution of these three processes using Bayesian parameter estimation for the mechanistic SOM profile model SOMPROF. Based on organic carbon measurements, 13 parameters related to decomposition and transport of organic matter were estimated for two temperate forest soils: an Arenosol with a mor humus form (Loobos, the Netherlands) and a Cambisol with mull-type humus (Hainich, Germany). Furthermore, the use of the radioisotope ²¹⁰Pb_ex as a tracer for vertical SOM transport was studied. For Loobos, the calibration results demonstrate the importance of organic matter transport with the liquid phase for shaping the vertical SOM profile, while the effects of bioturbation are generally negligible. These results are in good agreement with expectations given in situ conditions. For Hainich, the calibration offered three distinct explanations for the observations (three modes in the posterior distribution). With the addition of ²¹⁰Pb_ex data and prior knowledge, as well as additional information about in situ conditions, we were able to identify the most likely explanation, which indicated that root litter input is a dominant process for the SOM profile. For both sites the organic matter appears to comprise mainly adsorbed but potentially leachable material, pointing to the importance of organo-mineral interactions. Furthermore, organic matter in the mineral soil appears to be mainly derived from root litter, supporting previous studies that highlighted the importance of root input for soil carbon sequestration.
Carey, W.P.; Simon, Andrew
1984-01-01
Simulation of upland-soil erosion by the Precipitation-Runoff Modeling System currently requires the user to estimate two rainfall detachment parameters and three hydraulic detachment parameters. One rainfall detachment parameter can be estimated from rainfall simulator tests. A reformulation of the rainfall detachment equation allows the second parameter to be computed directly. The three hydraulic detachment parameters consist of one exponent and two coefficients. The initial value of the exponent is generally set equal to 1.5. The two coefficients are functions of the soil's resistance to erosion, and one of the two also accounts for sediment delivery processes not simulated in the model. Initial estimates of these parameters can be derived from other modeling studies or from published empirical relations. (USGS)
Mente, Carsten; Prade, Ina; Brusch, Lutz; Breier, Georg; Deutsch, Andreas
2011-07-01
Lattice-gas cellular automata (LGCAs) can serve as stochastic mathematical models for collective behavior (e.g. pattern formation) emerging in populations of interacting cells. In this paper, a two-phase optimization algorithm for global parameter estimation in LGCA models is presented. In the first phase, local minima are identified through gradient-based optimization. Algorithmic differentiation is adopted to calculate the necessary gradient information. In the second phase, for global optimization of the parameter set, a multi-level single-linkage method is used. As an example, the parameter estimation algorithm is applied to a LGCA model for early in vitro angiogenic pattern formation.
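The two-phase structure described above (gradient-based local searches, then a global layer that keeps the best local minimum) can be sketched on a toy objective. The double-well function, step size and start points are illustrative, not an LGCA likelihood or the paper's multi-level single-linkage method:

```python
import numpy as np

def f(x):
    """Tilted double well: two local minima, the global one on the negative side."""
    return (x ** 2 - 1.0) ** 2 + 0.3 * x

def grad_f(x):
    # Analytic gradient, standing in for algorithmic differentiation.
    return 4.0 * x * (x ** 2 - 1.0) + 0.3

def local_descent(x0, lr=0.01, n_iter=2000):
    """Phase 1: plain gradient descent to the nearest local minimum."""
    x = x0
    for _ in range(n_iter):
        x -= lr * grad_f(x)
    return x

# Phase 2 (simplified): run the local search from many starts, keep the best.
starts = np.linspace(-2.0, 2.0, 9)
minima = [local_descent(x0) for x0 in starts]
best = min(minima, key=f)
```

Multi-level single-linkage improves on this naive multistart by clustering start points that flow to the same basin, so each local minimum is refined only once; the skeleton above shows only the local/global division of labour.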
Peters-Lidard, Christa D.
2011-01-01
Center (EMC) for their land data assimilation systems to support weather and climate modeling. LIS not only consolidates the capabilities of these two systems, but also enables a much larger variety of configurations with respect to horizontal spatial resolution, input datasets and choice of land surface model through "plugins". LIS has been coupled to the Weather Research and Forecasting (WRF) model to support studies of land-atmosphere coupling by enabling ensembles of land surface states to be tested against multiple representations of the atmospheric boundary layer. LIS has also been demonstrated for parameter estimation: sequential remotely sensed soil moisture products can be used to derive soil hydraulic and texture properties, given a sufficient dynamic range in the soil moisture retrievals and accurate precipitation inputs. LIS has also recently been demonstrated for multi-model data assimilation using an Ensemble Kalman Filter for sequential assimilation of soil moisture, snow, and temperature. Ongoing work has demonstrated the value of bias correction as part of the filter, and also that of joint calibration and assimilation. Examples and case studies demonstrating the capabilities and impacts of LIS for hydrometeorological modeling, assimilation and parameter estimation will be presented as advancements towards the next generation of integrated observation and modeling systems
Parameter Estimation Using VLA Data
Venter, Willem C.
The main objective of this dissertation is to extract parameters from multiple wavelength images, on a pixel-by-pixel basis, when the images are corrupted with noise and a point spread function. The data used are from the field of radio astronomy. The very large array (VLA) at Socorro in New Mexico was used to observe planetary nebula NGC 7027 at three different wavelengths: 2 cm, 6 cm and 20 cm. A temperature model, describing the temperature variation in the nebula as a function of optical depth, is postulated. Mathematical expressions for the brightness distribution (flux density) of the nebula, at the three observed wavelengths, are obtained. Using these three equations and the three data values available, one from the observed flux density map at each wavelength, it is possible to solve for two temperature parameters and one optical depth parameter at each pixel location. Because the number of unknowns equals the number of equations available, estimation theory cannot be used to smooth any noise present in the data values. It was found that a direct solution of the three highly nonlinear flux density equations is very sensitive to noise in the data. Results obtained by solving for the three unknown parameters directly, as discussed above, were not physically realizable. This was partly due to the effect of incomplete sampling at the time when the data were gathered and to noise in the system. The application of rigorous digital parameter estimation techniques results in estimated parameters that are also not physically realizable. The estimated values for the temperature parameters are, for example, either too high or negative, which is not physically possible. Simulation studies have shown that a "double smoothing" technique improves the results by a large margin. This technique consists of two parts: in the first part the original observed data are smoothed using a running window, and in the second part a similar smoothing is applied to the estimated parameters
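The "double smoothing" idea (smooth the observations first, then smooth the estimates) can be sketched with a centred running-window mean. The window length and the synthetic 1-D signal below are illustrative; the dissertation applies the window to 2-D flux density maps:

```python
import numpy as np

def running_mean(x, window=5):
    """Centred moving average with edge values held at the boundaries."""
    pad = window // 2
    xp = np.pad(x, pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(xp, kernel, mode="valid")

rng = np.random.default_rng(2)
signal = np.sin(np.linspace(0, np.pi, 200))
noisy = signal + 0.3 * rng.standard_normal(200)

once = running_mean(noisy)      # first pass: smooth the observed data
twice = running_mean(once)      # second pass: smooth the (here, identity) "estimates"
```

In the actual pipeline the second pass is applied to the parameter maps produced by solving the three flux density equations, not to the data again; the sketch only shows the smoothing operator itself.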
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
Lika, Konstadia; Kearney, Michael R.; Freitas, Vânia; van der Veer, Henk W.; van der Meer, Jaap; Wijsman, Johannes W. M.; Pecquerie, Laure; Kooijman, Sebastiaan A. L. M.
2011-11-01
The Dynamic Energy Budget (DEB) theory for metabolic organisation captures the processes of development, growth, maintenance, reproduction and ageing for any kind of organism throughout its life-cycle. However, the application of DEB theory is challenging because the state variables and parameters are abstract quantities that are not directly observable. We here present a new approach of parameter estimation, the covariation method, that permits all parameters of the standard Dynamic Energy Budget (DEB) model to be estimated from standard empirical datasets. Parameter estimates are based on the simultaneous minimization of a weighted sum of squared deviations between a number of data sets and model predictions or the minimisation of the negative log likelihood function, both in a single-step procedure. The structure of DEB theory permits the unusual situation of using single data-points (such as the maximum reproduction rate), which we call "zero-variate" data, for estimating parameters. We also introduce the concept of "pseudo-data", exploiting the rules for the covariation of parameter values among species that are implied by the standard DEB model. This allows us to introduce the concept of a generalised animal, which has specified parameter values. We here outline the philosophy behind the approach and its technical implementation. In a companion paper, we assess the behaviour of the estimation procedure and present preliminary findings of emerging patterns in parameter values across diverse taxa.
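The covariation method's objective, a single weighted sum of squared deviations over ordinary datasets, zero-variate points and pseudo-data, can be sketched as below. The model predictions, data values and weights are invented placeholders, not DEB quantities:

```python
import numpy as np

def covariation_loss(pred_sets, data_sets, weights):
    """One weighted sum of squared relative deviations over heterogeneous datasets."""
    total = 0.0
    for pred, data, w in zip(pred_sets, data_sets, weights):
        pred, data = np.atleast_1d(pred), np.atleast_1d(data)
        total += w * np.sum(((pred - data) / data) ** 2)
    return total

growth_data = np.array([1.0, 2.0, 2.9, 3.6])   # an ordinary (uni-variate) dataset
growth_pred = np.array([1.1, 2.0, 3.0, 3.5])
max_repro_data = 12.0                          # a single "zero-variate" data point
max_repro_pred = 11.0
kappa_pseudo, kappa_pred = 0.8, 0.75           # pseudo-datum, down-weighted

loss = covariation_loss(
    [growth_pred, max_repro_pred, kappa_pred],
    [growth_data, max_repro_data, kappa_pseudo],
    weights=[1.0, 1.0, 0.1],
)
```

Relative deviations put datasets with very different units on a common scale, and the small weight on the pseudo-datum lets the generalised-animal prior values guide, but not dominate, the fit.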
International Nuclear Information System (INIS)
Giacobbo, F.; Marseguerra, M.; Zio, E.
2002-01-01
Mathematical models are widely used within the performance assessment of radioactive waste repositories to describe the behaviour of groundwater systems under the various physical conditions encountered throughout the long time scales involved. The effectiveness of such predictive models largely depends on the accuracy with which the involved parameters can be determined. In the present paper, we investigate the feasibility of using genetic algorithms for estimating the parameters of a groundwater contaminant transport model. Genetic algorithms are numerical search tools aiming at finding the global optimum of a given real objective function of one or more real variables, possibly subject to various linear or nonlinear constraints. The search procedures provided by genetic algorithms resemble certain principles of natural evolution. In the case study here, the transport of contaminants through a three-layered monodimensional saturated medium is numerically simulated by a monodimensional advection-dispersion model. The associated velocity and dispersivity parameters are estimated by a genetic algorithm whose objective function is the sum of the squared residuals between pseudo-experimental data, obtained with the true values of the parameters, and the concentration profiles computed with the model using the estimated values of the parameters. The results indicate that the method is capable of estimating the parameter values accurately, even in the presence of substantial noise. Furthermore, we investigate the possibility of extracting some qualitative information regarding the sensitivity of the model to the unknown input parameters from the speed of convergence and stabilization of the identification procedure
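The estimation loop can be sketched with a small real-coded genetic algorithm: tournament selection, blend crossover, Gaussian mutation and elitism, minimising the sum of squared residuals against noise-free pseudo-experimental data. To keep the sketch self-contained, a simple exponential decay curve stands in for the advection-dispersion model; operators, bounds and tuning values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 50)
true_p = np.array([2.0, 0.7])                   # "true" amplitude and decay rate
data = true_p[0] * np.exp(-true_p[1] * t)       # pseudo-experimental data

def objective(p):
    return np.sum((p[0] * np.exp(-p[1] * t) - data) ** 2)

def ga(n_pop=60, n_gen=120, bounds=(0.0, 5.0)):
    pop = rng.uniform(bounds[0], bounds[1], (n_pop, 2))
    for _ in range(n_gen):
        fit = np.array([objective(p) for p in pop])
        new = [pop[np.argmin(fit)].copy()]       # elitism: keep the best as-is
        while len(new) < n_pop:
            i, j = rng.integers(0, n_pop, 2)     # tournament selection, parent a
            a = pop[i] if fit[i] < fit[j] else pop[j]
            i, j = rng.integers(0, n_pop, 2)     # tournament selection, parent b
            b = pop[i] if fit[i] < fit[j] else pop[j]
            w = rng.uniform(0.0, 1.0, 2)         # blend crossover
            child = w * a + (1.0 - w) * b
            child += rng.normal(0.0, 0.05, 2)    # Gaussian mutation
            new.append(np.clip(child, bounds[0], bounds[1]))
        pop = np.array(new)
    fit = np.array([objective(p) for p in pop])
    return pop[np.argmin(fit)]

best = ga()
```

The speed at which the population collapses onto each coordinate of `best` gives the kind of qualitative sensitivity information the paper alludes to: weakly identifiable parameters keep drifting long after the well-constrained ones have stabilised.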
DEFF Research Database (Denmark)
Millar, Robert John; Ekstrom, Jussi; Lehtonen, Matti
With the increase in distributed generation, the demand-only nature of many secondary substation nodes in medium voltage networks is becoming a mix of temporally varying consumption and generation with significant stochastic components. Traditional planning, however, has often assumed that the maximum demands of all connected substations are fully coincident, and in cases where there is local generation, the conditions of maximum consumption and minimum generation, and maximum generation and minimum consumption are checked, again assuming unity coincidence. Statistical modelling is used in this paper to produce network solutions that optimize investment, running and interruption costs, assessed from a societal perspective. The decoupled utilization of expected consumption profiles and stochastic generation models enables a more detailed estimation of the driving parameters using the Monte...
Aggarwal, Ankush
2017-08-01
Motivated by the well-known result that stiffness of soft tissue is proportional to the stress, many of the constitutive laws for soft tissues contain an exponential function. In this work, we analyze properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that as a consequence of the exponential function there are lines of high covariance in the elastic parameter space. As a result, one can have widely varying mechanical parameters defining the tissue stiffness but similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space, which significantly improve the convergence of parameter estimation and robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of getting trapped in a local minima. Based upon the new insight, we also propose a transformed parameter space which will allow for rational parameter comparison and avoid misleading conclusions regarding soft tissue mechanics.
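The covariance the abstract describes can be made concrete with a common exponential stress law, σ = A·(exp(B·ε) − 1). The two parameter sets below differ by a factor of two, yet share the small-strain stiffness A·B and give nearly the same response over a modest strain range; the law and values are illustrative, not the paper's constitutive model:

```python
import numpy as np

def stress(strain, A, B):
    """Exponential soft-tissue-style stress law; stiffness dσ/dε = A*B*exp(B*ε)."""
    return A * (np.exp(B * strain) - 1.0)

strain = np.linspace(0.0, 0.1, 50)
s1 = stress(strain, A=1.0, B=2.0)   # A*B = 2
s2 = stress(strain, A=2.0, B=1.0)   # A*B = 2 as well: same initial stiffness
```

Along the hyperbola A·B = const the responses are almost indistinguishable at small strain, which is why the parameter space shows lines of high covariance; working in (log A, log B), where these near-equivalent sets lie on a straight line, is one of the reparameterisations that improves conditioning.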
Ratnayake, Nalin A.; Waggoner, Erin R.; Taylor, Brian R.
2011-01-01
The problem of parameter estimation on hybrid-wing-body aircraft is complicated by the fact that many design candidates for such aircraft involve a large number of aerodynamic control effectors that act in coplanar motion. This adds to the complexity already present in the parameter estimation problem for any aircraft with a closed-loop control system. Decorrelation of flight and simulation data must be performed in order to ascertain individual surface derivatives with any sort of mathematical confidence. Non-standard control surface configurations, such as clamshell surfaces and drag-rudder modes, further complicate the modeling task. In this paper, time-decorrelation techniques are applied to a model structure selected through stepwise regression for simulated and flight-generated lateral-directional parameter estimation data. A virtual effector model that uses mathematical abstractions to describe the multi-axis effects of clamshell surfaces is developed and applied. Comparisons are made between time history reconstructions and observed data in order to assess the accuracy of the regression model. The Cramér-Rao lower bounds of the estimated parameters are used to assess the uncertainty of the regression model relative to alternative models. Stepwise regression was found to be a useful technique for lateral-directional model design for hybrid-wing-body aircraft, as suggested by available flight data. Based on the results of this study, linear regression parameter estimation methods using abstracted effectors are expected to perform well for hybrid-wing-body aircraft properly equipped for the task.
Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values, that is, values of the physical and chemical constants that govern reactivity. Although empirical structure activity relationships have been developed th...
Digital Repository Service at National Institute of Oceanography (India)
Chakraborty, B.; Kodagali, V.N.
In this paper, the Helmholtz-Kirchhoff (H-K) roughness model is employed to characterize seafloor sediment and roughness parameters from the eastern sector of the Southern Ocean. The multibeam Hydrosweep system's angular-backscatter data, which...
Directory of Open Access Journals (Sweden)
Zaäfri Ananto Husodo
2015-04-01
Full Text Available This research proposes a numerical approach to estimating the trend of behavior of stock markets. This approach is applied to a model that is inspired by a catalytic chemical model, in terms of differential equations, on four composite indices, New York Stock Exchange, Hong Kong Hang Seng, Straits Times Index, and Jakarta Stock Exchange, as suggested by Caetano and Yoneyama (2011). The approach is used to minimize the difference of estimated indices based on the model with respect to the actual data set. The result shows that the estimation is able to capture the trend of behavior in the stock market well.
International Nuclear Information System (INIS)
Niu, Qun; Zhang, Letian; Li, Kang
2014-01-01
Highlights: • Solar cell and PEM fuel cell parameter estimations are investigated in the paper. • A new biogeography-based method (BBO-M) is proposed for cell parameter estimations. • In BBO-M, two mutation operators are designed to enhance optimization performance. • BBO-M provides a competitive alternative in cell parameter estimation problems. - Abstract: Mathematical models are useful tools for simulation, evaluation, optimal operation and control of solar cells and proton exchange membrane fuel cells (PEMFCs). To identify the model parameters of these two types of cells efficiently, a biogeography-based optimization algorithm with mutation strategies (BBO-M) is proposed. The BBO-M uses the structure of the biogeography-based optimization algorithm (BBO), and both the mutation motivated from the differential evolution (DE) algorithm and the chaos theory are incorporated into the BBO structure for improving the global searching capability of the algorithm. Numerical experiments have been conducted on ten benchmark functions with 50 dimensions, and the results show that BBO-M can produce solutions of high quality and has a fast convergence rate. Then, the proposed BBO-M is applied to the model parameter estimation of the two types of cells. The experimental results clearly demonstrate the power of the proposed BBO-M in estimating model parameters of both solar and fuel cells
Jastrzembski, Tiffany S.; Charness, Neil
2009-01-01
The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; mean age 20) and older (N = 20; mean age 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048
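At the keystroke level, a Model Human Processor prediction is just the sum of operator times along the task sequence. The operator values below are invented placeholders to show the mechanics, NOT the weighted mean parameters estimated in the paper:

```python
# Hypothetical operator times in seconds: keystroke, pointing, mental preparation.
YOUNG = {"keystroke": 0.23, "pointing": 1.10, "mental": 1.35}
OLDER = {"keystroke": 0.31, "pointing": 1.45, "mental": 1.90}

def klm_time(sequence, params):
    """Predicted task time: sum of operator times along the sequence."""
    return sum(params[op] for op in sequence)

# Example task: think, then press three keys (e.g. dialling a short prefix).
task = ["mental", "keystroke", "keystroke", "keystroke"]
t_young = klm_time(task, YOUNG)
t_older = klm_time(task, OLDER)
```

Replacing the younger-adult operator times with the older-adult estimates changes only the parameter table, not the task model, which is what makes the approach attractive for age-sensitive interface comparison.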
DEFF Research Database (Denmark)
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via movi...
International Nuclear Information System (INIS)
Delforge, J.; Syrota, A.; Mazoyer, B.M.
1989-01-01
General framework and various criteria for experimental design optimisation are presented. The methodology is applied to the estimation of receptor-ligand reaction model parameters with dynamic positron emission tomography data. The possibility of improving parameter estimation using a new experimental design combining an injection of the β⁺-labelled ligand and an injection of the cold ligand is investigated. Numerical simulations predict remarkable improvement in the accuracy of parameter estimates with this new experimental design, and in particular the possibility of separate estimation of the association constant (k₊₁) and of the receptor density (B'max) in a single experiment. Simulation predictions are validated using experimental PET data, in which parameter uncertainties are reduced by factors ranging from 17 to 1000. (author)
Markov chain Monte Carlo approach to parameter estimation in the FitzHugh-Nagumo model
DEFF Research Database (Denmark)
Jensen, Anders Christian; Ditlevsen, Susanne; Kessler, Mathieu
2012-01-01
Excitability is observed in a variety of natural systems, such as neuronal dynamics, cardiovascular tissues, or climate dynamics. The stochastic FitzHugh-Nagumo model is a prominent example representing an excitable system. To validate the practical use of a model, the first step is to estimate model parameters. ... The approach can handle multidimensional nonlinear diffusions with large time scale separation. The estimation method is illustrated on simulated data.
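The MCMC machinery behind such estimation can be sketched with a random-walk Metropolis sampler. A simple Gaussian log-target stands in for the FitzHugh-Nagumo likelihood (which requires the diffusion-bridge constructions the paper develops); target, step size and chain length are illustrative:

```python
import numpy as np

def log_target(theta):
    """Log-density of N(3, 1), up to an additive constant."""
    return -0.5 * (theta - 3.0) ** 2

def metropolis(log_p, theta0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose theta + step*N(0,1), accept with min(1, ratio)."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_samples)
    theta, lp = theta0, log_p(theta0)
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_p(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject in log space
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis(log_target, theta0=0.0, n_samples=20000)
posterior = chain[2000:]                           # discard burn-in
```

For the actual FitzHugh-Nagumo problem, `log_target` would be replaced by the (intractable) likelihood of the partially observed diffusion, approximated by data augmentation over the unobserved path; the accept/reject skeleton stays the same.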
Data Handling and Parameter Estimation
DEFF Research Database (Denmark)
Sin, Gürkan; Gernaey, Krist
2016-01-01
literature that are mostly based on the Activated Sludge Model (ASM) framework and their appropriate extensions (Henze et al., 2000). The chapter presents an overview of the most commonly used methods in the estimation of parameters from experimental batch data, namely: (i) data handling and validation, (ii...... and spatial scales. At full-scale wastewater treatment plants (WWTPs), mechanistic modelling using the ASM framework and concept (e.g. Henze et al., 2000) has become an important part of the engineering toolbox for process engineers. It supports plant design, operation, optimization and control applications......). Models have also been used as an integral part of the comprehensive analysis and interpretation of data obtained from a range of experimental methods from the laboratory, as well as pilot-scale studies to characterise and study wastewater treatment plants. In this regard, models help to properly explain...
Spatial scale effects on model parameter estimation and predictive uncertainty in ungauged basins
CSIR Research Space (South Africa)
Hughes, DA
2013-06-01
Full Text Available The most appropriate scale to use for hydrological modelling depends on the structure of the chosen model, the purpose of the results and the resolution of the available data used to quantify parameter values and provide the climatic forcing data...
Estimation of physical parameters in induction motors
DEFF Research Database (Denmark)
Børsting, H.; Knudsen, Morten; Rasmussen, Henrik
1994-01-01
Parameter estimation in induction motors is a field of great interest, because accurate models are needed for robust dynamic control of induction motors...
A Modified NM-PSO Method for Parameter Estimation Problems of Models
Liu, An; Zahara, Erwie; Yang, Ming-Ta
2012-01-01
Ordinary differential equations usefully describe the behavior of a wide range of dynamic physical systems. The particle swarm optimization (PSO) method has been considered an effective tool for solving the engineering optimization problems for ordinary differential equations. This paper proposes a modified hybrid Nelder-Mead simplex search and particle swarm optimization (M-NM-PSO) method for solving parameter estimation problems. The M-NM-PSO method improves the efficiency of the PSO method...
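The fitting problem described above can be illustrated with the plain Nelder-Mead simplex that the hybrid M-NM-PSO method builds on: minimize the squared error between a numerical ODE solution and noisy data. The logistic ODE and all parameter values below are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Hypothetical example: fit (r, K) of a logistic ODE dy/dt = r*y*(1 - y/K)
# to noisy observations by minimizing the squared fitting error.
def simulate(params, t, y0=1.0):
    r, K = params
    sol = solve_ivp(lambda t, y: r * y * (1 - y / K), (t[0], t[-1]), [y0], t_eval=t)
    return sol.y[0]

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 25)
y_obs = simulate((0.8, 5.0), t) + rng.normal(0, 0.05, t.size)

def cost(p):
    if p[0] <= 0 or p[1] <= 0:            # keep the simplex in the physical region
        return 1e12
    return np.sum((simulate(p, t) - y_obs) ** 2)

res = minimize(cost, x0=[0.3, 2.0], method="Nelder-Mead")
r_hat, K_hat = res.x
```

The hybrid method in the paper adds a particle swarm layer around this local search to reduce sensitivity to the starting point.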
A Novel Non-Iterative Method for Real-Time Parameter Estimation of the Fricke-Morse Model
Directory of Open Access Journals (Sweden)
SIMIC, M.
2016-11-01
Full Text Available Parameter estimation for the Fricke-Morse model of biological tissue is widely used in bioimpedance data processing and analysis. Complex nonlinear least squares (CNLS) data fitting is often used for parameter estimation of the model, but limitations such as high processing time, convergence to local minima, the need for a good initial guess of model parameters, and non-convergence have been reported. Thus, there is strong motivation to develop methods which overcome these flaws. In this paper a novel real-time method for parameter estimation of the Fricke-Morse model of biological cells is presented. The proposed method uses the value of the characteristic frequency estimated from the measured imaginary part of the bioimpedance, whereupon the Fricke-Morse model parameters are calculated using the provided analytical expressions. The proposed method is compared with CNLS in the frequency ranges of 1 kHz to 10 MHz (beta-dispersion) and 10 kHz to 100 kHz, the latter being more suitable for low-cost microcontroller-based bioimpedance measurement systems. The obtained results are promising: in both frequency ranges, CNLS and the proposed method have accuracies suitable for most electrical bioimpedance (EBI) applications. However, the proposed algorithm has significantly lower computational complexity, and was 20-80 times faster than CNLS.
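The closed-form idea can be sketched as follows. The expressions below are the standard textbook form of the Fricke-Morse (single-dispersion Cole-type) impedance, not necessarily the paper's exact algorithm, and the component values are illustrative: Z = R∞ + (R0 − R∞)/(1 + jωτ), with R0 = Re, R∞ = ReRi/(Re + Ri) and τ = (Re + Ri)Cm, so −Im(Z) peaks at the characteristic frequency fc = 1/(2πτ) and the three parameters follow analytically.

```python
import numpy as np

# Illustrative tissue values (assumptions, not the paper's estimates)
Re_true, Ri_true, Cm_true = 1000.0, 600.0, 10e-9
f = np.logspace(3, 7, 4000)                 # 1 kHz .. 10 MHz (beta-dispersion range)
w = 2 * np.pi * f
Z = Re_true * (1 + 1j * w * Ri_true * Cm_true) / (1 + 1j * w * (Re_true + Ri_true) * Cm_true)

R0 = Z.real[0]                  # low-frequency plateau ~ Re
Rinf = Z.real[-1]               # high-frequency plateau ~ Re*Ri/(Re + Ri)
fc = f[np.argmax(-Z.imag)]      # characteristic frequency from the imaginary part

# Closed-form recovery of the model parameters, no iteration needed
Re_hat = R0
Ri_hat = R0 * Rinf / (R0 - Rinf)
Cm_hat = 1 / (2 * np.pi * fc * (Re_hat + Ri_hat))
```

Because no iterative fitting is involved, this kind of estimator is well suited to real-time use on a microcontroller, as the abstract notes.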
Post, Hanna; Vrugt, Jasper A.; Fox, Andrew; Vereecken, Harry; Hendricks Franssen, Harrie-Jan
2017-03-01
The Community Land Model (CLM) contains many parameters whose values are uncertain and thus require careful estimation for model application at individual sites. Here we used Bayesian inference with the DiffeRential Evolution Adaptive Metropolis (DREAM(zs)) algorithm to estimate eight CLM v.4.5 ecosystem parameters using 1-year records of half-hourly net ecosystem CO2 exchange (NEE) observations of four central European sites with different plant functional types (PFTs). The posterior CLM parameter distributions of each site were estimated per individual season and on a yearly basis. These estimates were then evaluated using NEE data from an independent evaluation period and data from "nearby" FLUXNET sites at 600 km distance to the original sites. Latent variables (multipliers) were used to treat explicitly the uncertainty in the initial carbon-nitrogen pools. The posterior parameter estimates were superior to their default values in their ability to track and explain the measured NEE data of each site. The seasonal parameter values reduced the bias in the simulated NEE values by more than 50% (averaged over all sites). The most consistent performance of CLM during the evaluation period was found for the posterior parameter values of the forest PFTs, and contrary to the C3-grass and C3-crop sites, the latent variables of the initial pools further enhanced the quality-of-fit. The carbon sink function of the forest PFTs significantly increased with the posterior parameter estimates. We thus conclude that land surface model predictions of carbon stocks and fluxes require careful consideration of uncertain ecological parameters and initial states.
Revisiting a model of ontogenetic growth: estimating model parameters from theory and data.
Moses, Melanie E; Hou, Chen; Woodruff, William H; West, Geoffrey B; Nekola, Jeffery C; Zuo, Wenyun; Brown, James H
2008-05-01
The ontogenetic growth model (OGM) of West et al. provides a general description of how metabolic energy is allocated between production of new biomass and maintenance of existing biomass during ontogeny. Here, we reexamine the OGM, make some minor modifications and corrections, and further evaluate its ability to account for empirical variation on rates of metabolism and biomass in vertebrates both during ontogeny and across species of varying adult body size. We show that the updated version of the model is internally consistent and is consistent with other predictions of metabolic scaling theory and empirical data. The OGM predicts not only the near universal sigmoidal form of growth curves but also the M(1/4) scaling of the characteristic times of ontogenetic stages in addition to the curvilinear decline in growth efficiency described by Brody. Additionally, the OGM relates the M(3/4) scaling across adults of different species to the scaling of metabolic rate across ontogeny within species. In providing a simple, quantitative description of how energy is allocated to growth, the OGM calls attention to unexplained variation, unanswered questions, and opportunities for future research.
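The allocation of metabolic energy between production and maintenance in the OGM can be sketched numerically. The growth equation below is the standard West et al. form, dm/dt = a·m^(3/4)·(1 − (m/M)^(1/4)); the parameter values are illustrative assumptions, not fitted to any species, and the closed-form solution is included for comparison.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative values: growth coefficient a, asymptotic mass M, birth mass m0
a, M, m0 = 0.3, 1000.0, 1.0
t = np.linspace(0, 400, 200)

# Numerical integration of dm/dt = a*m^(3/4) * (1 - (m/M)^(1/4))
sol = solve_ivp(lambda t, m: a * m**0.75 * (1 - (m / M) ** 0.25),
                (t[0], t[-1]), [m0], t_eval=t, rtol=1e-8)
m = sol.y[0]

# Closed-form solution of the same ODE (substituting u = (m/M)^(1/4) gives a
# linear equation du/dt = (a / (4*M^(1/4))) * (1 - u)):
ratio = 1 - (1 - (m0 / M) ** 0.25) * np.exp(-a * t / (4 * M**0.25))
m_exact = M * ratio**4
```

The solution is the near-universal sigmoidal growth curve mentioned in the abstract: mass rises from m0 and saturates at the asymptotic mass M.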
Energy Technology Data Exchange (ETDEWEB)
Liu, J.; Ukita, M.; Nakanishi, H.; Imai, T. [Yamaguchi University, Yamaguchi (Japan); Fukagawa, M. [Ube Technical College, Yamaguchi (Japan)
1995-08-21
A laboratory study was used to develop a simplified kinetic model, to evaluate the kinetic parameters, and to provide rational design parameters for a pilot plant treating flax retting wastewater by simulating optimal operation of the UASB reactor. The results indicated that the developed model can be used predictively for assessing plant performance, and that when the influent concentration is in the range of 5.5-7.3 gCOD/l, the concentration of hard-biodegradable materials is 0.46 gCOD/l. 14 refs., 9 figs., 3 tabs.
Khonde, Ruta Dhanram; Chaurasia, Ashish Subhash
2015-04-01
The present study provides a kinetic model to describe the pyrolysis of sawdust, rice-husk and sugarcane bagasse as biomass. The kinetic scheme used for modelling of primary pyrolysis consists of two parallel reactions giving gaseous volatiles and solid char. Estimation of the kinetic parameters for the pyrolysis process has been carried out for the temperature range of 773-1,173 K. As there are serious issues regarding non-convergence of some of the methods or solutions converging to local optima, the proposed kinetic model is optimized to predict the best values of the kinetic parameters for the system using three approaches: two-dimensional surface-fitting non-linear regression, the MS-Excel Solver tool and COMSOL software. The model predictions are in agreement with experimental data over a wide range of pyrolysis conditions. The estimated values of the kinetic parameters compare well with those reported by earlier researchers.
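The two-parallel-reaction scheme has a simple closed form under isothermal conditions, which is what the kinetic parameters feed into. The rate constants, pre-exponential factors and activation energies below are illustrative assumptions, not the paper's estimates: biomass B decays to volatiles V (rate k1) and char C (rate k2), with Arrhenius rates k_i = A_i·exp(−E_i/(R·T)).

```python
import numpy as np

R = 8.314                          # gas constant, J/(mol K)
A1, E1 = 1.0e6, 9.0e4              # volatiles: pre-exponential (1/s), activation energy (J/mol)
A2, E2 = 1.0e4, 7.5e4              # char: illustrative values

def pyrolyze(T, t):
    """Closed-form yields for isothermal pyrolysis at temperature T (K), time t (s):
    dB/dt = -(k1 + k2) * B, with volatiles and char splitting in ratio k1 : k2."""
    k1 = A1 * np.exp(-E1 / (R * T))
    k2 = A2 * np.exp(-E2 / (R * T))
    B = np.exp(-(k1 + k2) * t)                    # remaining biomass fraction
    V = k1 / (k1 + k2) * (1 - B)                  # volatile yield
    C = k2 / (k1 + k2) * (1 - B)                  # char yield
    return B, V, C

B, V, C = pyrolyze(973.0, 60.0)                   # e.g. 973 K within the 773-1,173 K range
```

Fitting A_i and E_i to measured yields over a temperature range is then a nonlinear regression problem of exactly the kind the three approaches in the abstract address.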
El Gharamti, Mohamad
2015-11-26
The ensemble Kalman filter (EnKF) recursively integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following a joint state-parameter augmentation strategy. In this study, we introduce a new smoothing-based joint EnKF scheme, in which a one-step-ahead smoothing of the state is performed before updating the parameters. Numerical experiments are performed with a two-dimensional synthetic subsurface contaminant transport model. The improved performance of the proposed joint EnKF scheme compared to the standard joint EnKF compensates for the modest increase in the computational cost.
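The standard joint (state-augmented) EnKF analysis step that the smoothing scheme builds on can be sketched on a toy problem. The linear model, noise levels and ensemble size below are illustrative assumptions: the state is observed, and the unobserved parameter is corrected through its ensemble covariance with the state.

```python
import numpy as np

rng = np.random.default_rng(1)
Ne = 500                                    # ensemble size
theta_true, x_true = 2.0, 3.0               # true parameter and state (toy values)
y_obs = x_true + rng.normal(0, 0.1)         # noisy observation of the state only
Rv = 0.1**2                                 # observation error variance

# Prior ensemble of augmented vectors z = [x, theta]; here x depends on theta
theta_ens = rng.normal(1.0, 1.0, Ne)
x_ens = theta_ens + rng.normal(0, 0.5, Ne)
Z = np.vstack([x_ens, theta_ens])           # 2 x Ne augmented ensemble

# Kalman analysis with ensemble covariances; H observes the state component
H = np.array([[1.0, 0.0]])
P = np.cov(Z)
K = P @ H.T / (H @ P @ H.T + Rv)            # 2x1 Kalman gain
y_pert = y_obs + rng.normal(0, 0.1, Ne)     # perturbed observations
Z_a = Z + K * (y_pert - Z[0])               # one update moves both x and theta

x_hat, theta_hat = Z_a.mean(axis=1)
```

The smoothing-based variant in the abstract inserts a one-step-ahead smoothing of the state between two such analysis steps before the parameters are updated.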
Zuhdi, Shaifudin; Saputro, Dewi Retno Sari
2017-03-01
The GWOLR model represents the relationship between a dependent variable with ordinal categories and independent variables, influenced by the geographical location of the observation site. Maximum likelihood estimation of the GWOLR model parameters yields a system of nonlinear equations whose solution is hard to obtain analytically. Solving it means finding the maximum of the likelihood, which is an optimization problem. The nonlinear system of equations is solved by numerical approximation, in this case the Newton-Raphson method. The purpose of this research is to construct the Newton-Raphson iteration algorithm and a program in the R software to estimate the GWOLR model. The research shows that the program in R can be used to estimate the parameters of the GWOLR model by forming a syntax program with the command "while".
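The Newton-Raphson iteration with a "while"-loop stopping rule can be sketched on a simpler likelihood. Plain logistic regression stands in here for the (more involved) GWOLR likelihood, and the data are simulated; this is an illustration of the iteration scheme, not the paper's R program.

```python
import numpy as np

# Simulated data for a logistic-regression stand-in problem
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
beta_true = np.array([0.5, -1.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

beta = np.zeros(2)
step = np.inf
while step > 1e-10:                    # iterate until the Newton update is negligible
    mu = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - mu)              # score vector (gradient of the log-likelihood)
    W = mu * (1 - mu)
    hess = -(X * W[:, None]).T @ X     # Hessian of the log-likelihood
    delta = np.linalg.solve(hess, grad)
    beta = beta - delta                # Newton step: beta_new = beta - H^{-1} g
    step = np.abs(delta).max()
```

For the GWOLR model the same loop applies, with the score and Hessian of the geographically weighted ordinal log-likelihood in place of the logistic ones.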
Estimation of parameters of constant elasticity of substitution production functional model
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi
2017-11-01
Nonlinear model building has become an increasingly important and powerful tool in mathematical economics. In recent years the popularity of applications of nonlinear models has risen dramatically. Researchers in econometrics are often interested in the inferential aspects of nonlinear regression models [6]. The present research study gives a distinct method of estimation for a more complicated and highly nonlinear model, viz. the Constant Elasticity of Substitution (CES) production functional model. Henningen et al. [5] proposed three solutions in 2012 to avoid serious problems when estimating CES functions: i) removing discontinuities by using the limits of the CES function and its derivatives; ii) circumventing large rounding errors by local linear approximations; iii) handling ill-behaved objective functions by a multi-dimensional grid search. Joel Chongeh et al. [7] discussed the estimation of the impact of capital and labour inputs on the gross output of agri-food products using the constant elasticity of substitution production function in the Tanzanian context. Pol Antras [8] presented new estimates of the elasticity of substitution between capital and labour using data from the private sector of the U.S. economy for the period 1948-1998.
Vegetation-specific model parameters are not required for estimating gross primary production
Czech Academy of Sciences Publication Activity Database
Yuan, W.; Cai, W.; Liu, S.; Dong, W.; Chen, J.; Altaf Arain, M.; Blanken, P. D.; Cescatti, A.; Wohlfahrt, G.; Georgiadis, T.; Genesio, L.; Gianelle, D.; Grelle, A.; Kiely, G.; Knohl, A.; Liu, D.; Marek, Michal V.; Merbold, L.; Montagnani, L.; Panferov, O.; Peltoniemi, M.; Rambal, S.; Raschi, A.; Varlagin, A.; Xia, J.
2014-01-01
Roč. 292, NOV 24 2014 (2014), s. 1-10 ISSN 0304-3800 Institutional support: RVO:67179843 Keywords : light use efficiency * gross primary production * model parameters Subject RIV: EH - Ecology, Behaviour Impact factor: 2.321, year: 2014
Estimation of root water uptake parameters by inverse modeling with soil water content data
Hupet, F.; Lambot, S.; Feddes, R.A.; Dam, van J.C.; Vanclooster, M.
2003-01-01
In this paper we have tested the feasibility of the inverse modeling approach to derive root water uptake parameters (RWUP) from soil water content data using numerical experiments for three differently textured soils and for an optimal drying period. The RWUP of interest are the rooting depth and
On The Estimation of Parameters of Thick Current Shell Model of ...
African Journals Online (AJOL)
Equatorial electrojet, an intense current flowing eastward in the low-latitude ionosphere within the narrow region flanking the dip equator, is a major phenomenon of interest in geomagnetic field studies. For the first time the five parameters required to fully describe Onwumechili's composite thick current shell model ...
Estimation parameters and black box model of a brushless DC motor
Directory of Open Access Journals (Sweden)
José A. Becerra-Vargas
2014-08-01
Full Text Available The modeling of a process or a plant is vital for the design of its control system, since it allows predicting its dynamics and behavior under different circumstances, inputs, disturbances and noise. The main objective of this work is to identify which model is best for a specific permanent-magnet brushless DC motor. For this, the mathematical model of the brushless DC motor PW16D, manufactured by Golden Motor, is presented and compared with its black-box model; both are derived from experimental data. These data, the average applied voltage and the angular velocity, are acquired by a data acquisition card and imported to the computer. The constants of the mathematical model are estimated by curve-fitting algorithms based on nonlinear least squares and pattern search using a computational tool. Estimating the mathematical model constants by nonlinear least squares and pattern search yielded a goodness of fit of 84.88% and 80.48%, respectively. The goodness of fit obtained by the black-box model was 87.72%. The mathematical model presented slightly lower goodness of fit, but allowed analysis of the behavior of variables of interest such as the power consumption and the torque applied to the motor. Because of this, it is concluded that the mathematical model obtained from experimental data of the brushless motor PW16D is better than its black-box model.
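The curve-fitting step can be sketched with a nonlinear least-squares fit of a first-order voltage-to-speed model, omega(t) = K·V·(1 − exp(−t/tau)), to a step response. The gain K, time constant tau, supply voltage and noise level below are illustrative assumptions, not the PW16D's actual constants.

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order approximation of the motor's speed response to a voltage step
def step_response(t, K, tau, V=12.0):
    return K * V * (1 - np.exp(-t / tau))

# Simulated "measured" angular velocity with sensor noise (illustrative values)
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 100)
w_meas = step_response(t, K=25.0, tau=0.3) + rng.normal(0, 2.0, t.size)

# Nonlinear least squares recovers the model constants from the noisy data
(K_hat, tau_hat), _ = curve_fit(lambda t, K, tau: step_response(t, K, tau),
                                t, w_meas, p0=[10.0, 0.1])
```

A pattern-search method, as also used in the paper, would minimize the same squared-error cost but without using gradients.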
Parameter Estimation of Inverter and Motor Model at Standstill using Measured Currents Only
DEFF Research Database (Denmark)
Rasmussen, Henrik; Knudsen, Morten; Tønnes, M.
1996-01-01
Methods for estimation of the parameters in the electrical equivalent diagram for the induction motor, based on specially designed experiments, are given. In all experiments two of the three phases are given the same potential, i.e., no net torque is generated and the motor is at standstill. Input...... to the system is the reference values for the stator voltages given as duty cycles for the Pulse Width Modulated power device. The system output is the measured stator currents. Three experiments are described, giving respectively 1) the stator resistance and inverter parameters, 2) the stator transient inductance...... and 3) the referred rotor resistance and magnetizing inductance. The method developed in the two last experiments is independent of the inverter nonlinearity. New methods for system identification concerning saturation of the magnetic flux are given and a reference value for the flux level...
Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.
2011-11-01
The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effects of the pseudo-value for the allocation fraction κ is reduced when there is information for both growth and reproduction, that for the energy conductance is reduced when information on age at birth and puberty is given, and the effects of the pseudo-value for the maturity maintenance rate coefficient are insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) require data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far away from the value that maximises reproduction. We recognise this as the reason why two very different
Cosmological parameter estimation using Particle Swarm Optimization
International Nuclear Information System (INIS)
Prasad, J; Souradeep, T
2014-01-01
Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and make the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence, called Particle Swarm Optimization (PSO) for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite
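The PSO idea described above can be sketched on a toy likelihood surface: particles explore parameter space and swarm toward the minimum of a negative log-likelihood. The two-parameter quadratic surface and the coefficients w, c1, c2 below are common textbook defaults, not the paper's settings or the real CMB likelihood.

```python
import numpy as np

rng = np.random.default_rng(42)

def neg_log_like(x):                       # toy stand-in for -log(likelihood)
    return np.sum((x - np.array([0.3, -1.2])) ** 2, axis=-1)

n_particles, n_iter = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5                  # inertia, cognitive and social weights
pos = rng.uniform(-5, 5, (n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()                         # each particle's best position so far
pbest_val = neg_log_like(pos)
gbest = pbest[np.argmin(pbest_val)].copy() # swarm's best position so far

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = neg_log_like(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

Unlike MCMC, the swarm seeks the best-fit point directly rather than sampling the posterior, which is why PSO can help when the likelihood surface is rough or high-dimensional.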
Pettey, W B P; Carter, M E; Toth, D J A; Samore, M H; Gundlapalli, A V
2017-07-01
During the recent Ebola crisis in West Africa, individual person-level details of disease onset, transmissions, and outcomes such as survival or death were reported in online news media. We set out to document disease transmission chains for Ebola, with the goal of generating a timely account that could be used for surveillance, mathematical modeling, and public health decision-making. By accessing public web pages only, such as locally produced newspapers and blogs, we created a transmission chain involving two Ebola clusters in West Africa that compared favorably with other published transmission chains, and derived parameters for a mathematical model of Ebola disease transmission that were not statistically different from those derived from published sources. We present a protocol for responsibly gleaning epidemiological facts, transmission model parameters, and useful details from affected communities using mostly indigenously produced sources. After comparing our transmission parameters to published parameters, we discuss additional benefits of our method, such as gaining practical information about the affected community, its infrastructure, politics, and culture. We also briefly compare our method to similar efforts that used mostly non-indigenous online sources to generate epidemiological information.
METHODOLOGY FOR THE ESTIMATION OF PARAMETERS, OF THE MODIFIED BOUC-WEN MODEL
Directory of Open Access Journals (Sweden)
Tomasz HANISZEWSKI
2015-03-01
Full Text Available The Bouc-Wen model is a theoretical formulation that can reproduce the real hysteresis loop of a modeled object. One such object is a wire rope, which is present in the equipment of a crane lifting mechanism. The adopted modified version of the model has nine parameters. Determining such a number of parameters is a complex and problematic issue. This article shows the identification methodology and sample results of numerical simulations. The results were compared with data obtained from laboratory tests of ropes [3], and on this basis it was found that the results agree and that the model can be applied to dynamic systems containing wire ropes in their structures [4].
Lithium-ion Battery Electrothermal Model, Parameter Estimation, and Simulation Environment
Directory of Open Access Journals (Sweden)
Simone Orcioni
2017-03-01
Full Text Available The market for lithium-ion batteries is growing exponentially. The performance of battery cells is improving due to advances in production technology, but market demand is growing even more rapidly. Modeling and characterization of single cells and an efficient simulation environment are fundamental for the development of an efficient battery management system. The present work is devoted to defining a novel lumped electrothermal circuit of a single battery cell, the procedure for extracting the parameters of the single cell from experiments, and a simulation environment in SystemC-WMS for the simulation of a battery pack. The electrothermal model of the cell was validated against experimental measurements obtained in a climatic chamber. The model is then used to simulate a 48-cell battery, allowing statistical variations among parameters. The different behaviors of the cells in terms of state of charge, current, voltage, or heat flow rate can be observed in the results of the simulation environment.
DeGroot, B J; Keown, J F; Van Vleck, L D; Kachman, S D
2007-06-30
Genetic parameters were estimated with restricted maximum likelihood for individual test-day milk, fat, and protein yields and somatic cell scores with a random regression cubic spline model. Test-day records of Holstein cows that calved from 1994 through early 1999 were obtained from Dairy Records Management Systems in Raleigh, North Carolina, for the analysis. Estimates of heritability for individual test-days and estimates of genetic and phenotypic correlations between test-days were obtained from estimates of variances and covariances from the cubic spline analysis. Estimates of genetic parameters were calculated for the averages of the test-days within each of the ten 30-day test intervals. The model included herd test-day, age at first calving, and bovine somatotropin treatment as fixed factors. Cubic splines were fitted for the overall lactation curve and for random additive genetic and permanent environmental effects, with five predetermined knots or four intervals between days 0, 50, 135, 220, and 305. Estimates of heritability for lactation one ranged from 0.10 to 0.15, 0.06 to 0.10, 0.09 to 0.15, and 0.02 to 0.06 from test-day one to test-day 10 for milk, fat, and protein yields and somatic cell scores, respectively. Estimates of heritability were greater in lactations two and three. Estimates of heritability increased over the course of the lactation. Estimates of genetic and phenotypic correlations were smaller for test-days further apart.
International Nuclear Information System (INIS)
Bavio, José; Marrón, Beatriz
2014-01-01
Quality of service (QoS) for internet traffic management requires good traffic models and good estimation of shared network resources. A link of a network processes all traffic and is designed with a certain capacity C and buffer size B. A Generalized Markov Fluid model (GMFM), introduced by Marrón (2011), is assumed for the sources because it describes the traffic in a versatile way, allows estimation based on traffic traces, and also permits consistent effective bandwidth estimation. QoS, interpreted as buffer overflow probability, can be estimated for the GMFM through effective bandwidth estimation and by solving the optimization problem presented in Courcoubetis (2002), the so-called inf-sup formulas. In this work we implement a code to solve the inf-sup problem and other related optimizations, which allows us to do traffic engineering on links of data networks: calculating either the minimum capacity required when QoS and buffer size are given, or the minimum buffer size required when QoS and capacity are given
Optimal Estimation of Phenological Crop Model Parameters for Rice (Oryza sativa)
Sharifi, H.; Hijmans, R. J.; Espe, M.; Hill, J. E.; Linquist, B.
2015-12-01
Crop phenology models are important components of crop growth models. In the case of phenology models, generally only a few parameters are calibrated and default cardinal temperatures are used, which can lead to a temperature-dependent systematic phenology prediction error. Our objective was to evaluate different optimization approaches in the Oryza2000 and CERES-Rice phenology sub-models to assess the importance of optimizing cardinal temperatures for model performance and systematic error. We used two optimization approaches: the typical single-stage optimization (planting to heading) and a three-stage optimization (planting to panicle initiation (PI), PI to heading (HD), and HD to physiological maturity (MT)) in which all model parameters are optimized simultaneously. Data for this study were collected over three years and six locations on seven California rice cultivars. A temperature-dependent systematic error was found for all cultivars and stages, however it was generally small (systematic error Oryza2000 and from 6.6 to 3.8 in CERES-Rice. With regard to systematic error, we found a trade-off between RMSE and systematic error when the optimization objective was set to minimize either RMSE or systematic error. Therefore, it is important to find the limits within which the trade-offs between RMSE and systematic error are acceptable, especially in climate change studies, where this can prevent erroneous conclusions.
Andrew D. Richardson; Mathew Williams; David Y. Hollinger; David J.P. Moore; D. Bryan Dail; Eric A. Davidson; Neal A. Scott; Robert S. Evans; Holly. Hughes
2010-01-01
We conducted an inverse modeling analysis, using a variety of data streams (tower-based eddy covariance measurements of net ecosystem exchange, NEE, of CO2, chamber-based measurements of soil respiration, and ancillary ecological measurements of leaf area index, litterfall, and woody biomass increment) to estimate parameters and initial carbon (C...
Chen, Yung-Fu; Du, Yi-Chun; Tsai, Yi-Ting; Chen, Tainsong
Osteoporosis is a systemic skeletal disease characterized by low bone mass and micro-architectural deterioration of bone tissue, leading to bone fragility. Finding an effective method for prevention and early diagnosis of the disease is very important. Several parameters, including broadband ultrasound attenuation (BUA), speed of sound (SOS), and stiffness index (STI), have been used to measure the characteristics of bone tissues. In this paper, we proposed a method, namely the modified contour deformable model (MCDM), based on the active contour model (ACM) and active shape model (ASM), for automatically detecting the calcaneus contour from quantitative ultrasound (QUS) parametric images. The results show that the difference between the contours detected by the MCDM and the true boundary for the phantom is less than one pixel. By comparing the phantom ROIs, a significant relationship was found between the contour mean and bone mineral density (BMD), with R=0.99. The influence of selecting different ROI diameters (12, 14, 16 and 18 mm) and different region-selecting methods, including fixed region (ROI_fix), automatic circular region (ROI_cir) and calcaneal contour region (ROI_anat), was evaluated on human subjects. Measurements with large ROI diameters, especially using a fixed region, result in high position errors (10-45%). The precision errors of the measured ultrasonic parameters for ROI_anat are smaller than for ROI_fix and ROI_cir. In conclusion, ROI_anat provides more accurate measurement of ultrasonic parameters for the evaluation of osteoporosis and is useful for clinical application.
Parameter Estimation of Dynamic Multi-zone Models for Livestock Indoor Climate Control
DEFF Research Database (Denmark)
Wu, Zhuang; Stoustrup, Jakob; Heiselberg, Per
2008-01-01
and winter at a real scale livestock building in Denmark. The obtained comparative results between the measured data and the simulated output confirm that a very simple multi-zone model can capture the salient dynamical features of the climate dynamics which are needed for control purposes......., the livestock, the ventilation system and the building on the dynamic performance of indoor climate. Some significant parameters employed in the climate model as well as the airflow interaction between each conceptual zone are identified with the use of experimental time series data collected during spring...
Gharamti, M. E.
2015-05-11
The ensemble Kalman filter (EnKF) is a popular method for state-parameters estimation of subsurface flow and transport models based on field measurements. The common filtering procedure is to directly update the state and parameters as one single vector, which is known as the Joint-EnKF. In this study, we follow the one-step-ahead smoothing formulation of the filtering problem to derive a new joint-based EnKF which involves a smoothing step of the state between two successive analysis steps. The new state-parameters estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. This new algorithm bears strong resemblance to the Dual-EnKF, but unlike the latter, which first propagates the state with the model and then updates it with the new observation, the proposed scheme starts with an update step, followed by a model integration step. We exploit this new formulation of the joint filtering problem and propose an efficient model-integration-free iterative procedure on the update step of the parameters only, for further improved performance. Numerical experiments are conducted with a two-dimensional synthetic subsurface transport model simulating the migration of a contaminant plume in a heterogeneous aquifer domain. Contaminant concentration data are assimilated to estimate both the contaminant state and the hydraulic conductivity field. Assimilation runs are performed under imperfect modeling conditions and various observational scenarios. Simulation results suggest that the proposed scheme efficiently recovers both the contaminant state and the aquifer conductivity, providing more accurate estimates than the standard Joint and Dual EnKFs in all tested scenarios. Iterating on the update step of the new scheme further enhances the proposed filter's behavior. In terms of computational cost, the new Joint-EnKF is almost equivalent to that of the Dual-EnKF, but requires twice more model
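The ensemble update shared by the Joint and Dual EnKF variants discussed above can be illustrated for the simplest possible case: a scalar state observed directly, with one correlated parameter updated through the ensemble cross-covariance. This is a generic, heavily simplified sketch (all names and numbers are illustrative, not the paper's scheme):

```python
import random

def enkf_update(states, params, obs, obs_err_sd, seed=11):
    """One joint analysis step for a scalar state observed directly: each
    (state, parameter) member is corrected by a Kalman gain computed from
    ensemble (co)variances (perturbed-observations form)."""
    rng = random.Random(seed)
    n = len(states)
    sbar = sum(states) / n
    pbar = sum(params) / n
    var_s = sum((s - sbar) ** 2 for s in states) / (n - 1)
    cov_ps = sum((p - pbar) * (s - sbar) for p, s in zip(params, states)) / (n - 1)
    var_y = var_s + obs_err_sd ** 2        # observation operator is identity here
    Ks, Kp = var_s / var_y, cov_ps / var_y
    new_s, new_p = [], []
    for s, p in zip(states, params):
        innov = obs + rng.gauss(0.0, obs_err_sd) - s   # perturbed innovation
        new_s.append(s + Ks * innov)
        new_p.append(p + Kp * innov)
    return new_s, new_p

# a prior ensemble far from the observation is pulled toward it, and the
# correlated parameter is updated through the cross-covariance
states = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
params = [0.5 * s for s in states]
post_s, post_p = enkf_update(states, params, obs=5.0, obs_err_sd=0.1)
```

The difference between the filter variants in the abstract lies in when this analysis step is applied relative to the model integration, not in the update algebra itself.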
Alaa F. Sheta; Amal Abdel-Raouf
2016-01-01
In this age of technology, building quality software is essential to competing in the business market. One of the major principles required for any quality and business software product for value fulfillment is reliability. Estimating software reliability early during the software development life cycle saves time and money as it prevents spending larger sums fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of...
Directory of Open Access Journals (Sweden)
F. C. PEIXOTO
1999-09-01
Full Text Available Fragmentation kinetics is employed to model a continuous reactive mixture of alkanes under catalytic cracking conditions. Standard moment analysis techniques are employed, and a dynamic system for the time evolution of moments of the mixture's dimensionless concentration distribution function (DCDF) is found. The time behavior of the DCDF is recovered with successive estimations of scaled gamma distributions using the moments time data.
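The recovery step described above — fitting a scaled gamma distribution to moment data — can be sketched with the standard method-of-moments identities (a generic sketch, not the authors' code; the numbers are illustrative):

```python
def gamma_from_moments(m1, m2):
    """Method-of-moments recovery of a gamma distribution from the first two
    raw moments m1 = E[X], m2 = E[X**2]: shape = m1**2/var, scale = var/m1."""
    var = m2 - m1 ** 2
    if var <= 0:
        raise ValueError("moments inconsistent with a positive variance")
    return m1 ** 2 / var, var / m1

# a gamma with shape 2 and scale 3 has m1 = 6 and m2 = var + m1**2 = 18 + 36 = 54
shape, scale = gamma_from_moments(6.0, 54.0)
```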
Lerche, Veronika; Voss, Andreas; Nagler, Markus
2017-04-01
Diffusion models (Ratcliff, 1978) make it possible to identify and separate different cognitive processes underlying responses in binary decision tasks (e.g., the speed of information accumulation vs. the degree of response conservatism). This becomes possible because of the high degree of information utilization involved. Not only mean response times or error rates are used for the parameter estimation, but also the response time distributions of both correct and error responses. In a series of simulation studies, the efficiency and robustness of parameter recovery were compared for models differing in complexity (i.e., in numbers of free parameters) and trial numbers (ranging from 24 to 5,000) using three different optimization criteria (maximum likelihood, Kolmogorov-Smirnov, and chi-square) that are all implemented in the latest version of fast-dm (Voss, Voss, & Lerche, 2015). The results revealed that maximum likelihood is superior for uncontaminated data, but in the presence of fast contaminants, Kolmogorov-Smirnov outperforms the other two methods. For most conditions, chi-square-based parameter estimations lead to less precise results than the other optimization criteria. The performance of the fast-dm methods was compared to the EZ approach (Wagenmakers, van der Maas, & Grasman, 2007) and to a Bayesian implementation (Wiecki, Sofer, & Frank, 2013). Recommendations for trial numbers are derived from the results for models of different complexities. Interestingly, under certain conditions even small numbers of trials (N < 100) are sufficient for robust parameter estimation.
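The EZ approach compared above has closed-form estimators (Wagenmakers et al., 2007) mapping proportion correct, RT variance, and mean RT onto drift rate v, boundary separation a, and non-decision time Ter. A minimal sketch with the usual scaling constant s = 0.1 and the worked-example numbers from the EZ paper:

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """Closed-form EZ estimates of drift rate v, boundary separation a and
    non-decision time ter from proportion correct pc, variance of correct RTs
    vrt and mean correct RT mrt; edge cases pc in {0, 0.5, 1} are not handled."""
    L = math.log(pc / (1.0 - pc))                      # logit of accuracy
    x = L * (L * pc ** 2 - L * pc + pc - 0.5) / vrt
    v = math.copysign(1.0, pc - 0.5) * s * x ** 0.25   # drift rate
    a = s ** 2 * L / v                                 # boundary separation
    y = -v * a / s ** 2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    return v, a, mrt - mdt

v, a, ter = ez_diffusion(0.802, 0.112, 0.723)  # worked example from the EZ paper
```

Unlike fast-dm's optimization criteria, these estimates require only three summary statistics, which is why EZ is attractive for small trial numbers.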
DEFF Research Database (Denmark)
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre...... the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise....
ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS
Directory of Open Access Journals (Sweden)
muhammad zahid rashid
2011-04-01
Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values for the parameters and different sample sizes.
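Of the estimators compared above, the maximum likelihood estimators for the two-parameter exponential have a particularly simple closed form; a small sketch (generic textbook formulas, not the authors' code, with illustrative data):

```python
def exp2_mle(sample):
    """Maximum likelihood estimates for the two-parameter exponential:
    location = smallest observation, scale = mean excess over that minimum."""
    loc = min(sample)
    scale = sum(sample) / len(sample) - loc
    return loc, scale

loc, scale = exp2_mle([2.5, 3.0, 4.5, 6.0])   # sample mean is 4.0
```

The modified variants (MME, MMLE) studied in the paper adjust these estimates to reduce the well-known bias of the location MLE in small samples.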
D'Agnese, F. A.; Faunt, C.C.; Hill, M.C.; Turner, A.K.
1996-01-01
A three-layer Death Valley regional groundwater flow model was constructed to evaluate potential regional groundwater flow paths in the vicinity of Yucca Mountain, Nevada. Geoscientific information systems were used to characterize the complex surface and subsurface hydrogeological conditions of the area, and this characterization was used to construct likely conceptual models of the flow system. The high contrasts and abrupt contacts of the different hydrogeological units in the subsurface make zonation the logical choice for representing the hydraulic conductivity distribution. Hydraulic head and spring flow data were used to test different conceptual models by using nonlinear regression to determine parameter values that currently provide the best match between the measured and simulated heads and flows.
A robust estimator of parameters for G_I^0 -modeled SAR imagery based on random weighting method
Wang, Cui-Huan; Wen, Xian-Bin; Xu, Hai-Xia
2017-12-01
In mono-polarized synthetic aperture radar (SAR) imagery, the G_I^0 distribution is often assumed as the universal model to characterize a large number of targets; it is indexed by three parameters: the number of looks, the scale parameter, and the roughness parameter. The latter is closely related to the number of elementary backscatterers in each pixel, which is why it has received so much attention. Although many estimators have been proposed, dependable estimation often suffers from numerical problems, such as outliers and small sample sizes. Thus, this paper proposes a robust scheme, based on the random weighting method, for estimating two unknown parameters of the G_I^0 distribution, utilizing the relationship between moments and parameters. Experimental results on simulated and real SAR images show that the proposed scheme outperforms alternative bias-reduction mechanisms and yields more accurate estimates than other state-of-the-art algorithms.
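The random weighting device at the core of the approach replaces the uniform weights 1/n in a sample moment by Dirichlet(1, ..., 1) random weights; repeating the draw produces a distribution of replicated moment values. A generic sketch of that device alone, applied to a plain sample moment (the mapping to the G_I^0 moment equations is not implemented here):

```python
import random

def random_weighted_moment(sample, order=1, replicas=1000, seed=42):
    """Random weighting: replace the uniform weights 1/n in a sample moment by
    Dirichlet(1, ..., 1) weights (normalized standard exponentials); the spread
    of the replicated values approximates the estimator's distribution."""
    rng = random.Random(seed)
    reps = []
    for _ in range(replicas):
        w = [rng.expovariate(1.0) for _ in sample]
        tot = sum(w)
        reps.append(sum(wi / tot * x ** order for wi, x in zip(w, sample)))
    return reps

reps = random_weighted_moment([1.0, 2.0, 3.0, 4.0])
center = sum(reps) / len(reps)   # concentrates near the plain sample moment 2.5
```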
Fan, M.
2015-03-29
Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates to the level on par with the best solution obtained from the population-based methods while maintaining high computational speed. These suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the size of the parameter search space vastly large. © The Author 2015. Published by Oxford University Press.
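The hybrid idea described above — a cheap global stage whose best candidate seeds a local refinement — can be sketched on a toy misfit function (everything here is illustrative: the objective, the random-population global stage and the shrinking coordinate search stand in for the actual methods compared in the paper):

```python
import random

def hybrid_fit(obj, bounds, pop=300, shrinks=40, seed=1):
    """Hybrid strategy: a cheap population-based global search seeds a local
    refinement (here, a shrinking coordinate search)."""
    rng = random.Random(seed)
    # global stage: keep the best of a random population
    best = list(min((tuple(rng.uniform(lo, hi) for lo, hi in bounds)
                     for _ in range(pop)), key=obj))
    steps = [(hi - lo) * 0.25 for lo, hi in bounds]
    for _ in range(shrinks):
        for i in range(len(best)):
            for delta in (-steps[i], steps[i]):
                cand = best[:]
                cand[i] += delta
                if obj(cand) < obj(best):
                    best = cand
        steps = [s * 0.7 for s in steps]  # refine the search scale
    return best

# toy stand-in for a gene-circuit misfit: recover hidden parameters (0.3, 1.7)
target = (0.3, 1.7)
sse = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target))
est = hybrid_fit(sse, [(0.0, 5.0), (0.0, 5.0)])
```

The global stage only needs to land in the right basin; the local stage then delivers the precision that a pure population method would spend most of its budget on.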
A practical approach to parameter estimation applied to model predicting heart rate regulation
DEFF Research Database (Denmark)
Olufsen, Mette; Ottesen, Johnny T.
2013-01-01
baroreceptor feedback regulation of heart rate during head-up tilt. The three methods include: structured analysis of the correlation matrix, analysis via singular value decomposition followed by QR factorization, and identification of the subspace closest to the one spanned by eigenvectors of the model...... Hessian. Results showed that all three methods facilitate identification of a parameter subset. The “best” subset was obtained using the structured correlation method, though this method was also the most computationally intensive. Subsets obtained using the other two methods were easier to compute...
Brambilla, Mattia; Ficetola, Gentile F
2012-07-01
1. Correlative species distribution models (SDMs) assess relationships between species distribution data and environmental features, to evaluate the environmental suitability (ES) of a given area for a species, by providing a measure of the probability of presence. If the output of SDMs represents the relationships between habitat features and species performance well, SDM results can be related also to other key parameters of populations, including reproductive parameters. To test this hypothesis, we evaluated whether SDM results can be used as a proxy of reproductive parameters (breeding output, territory size) in red-backed shrikes (Lanius collurio). 2. The distribution of 726 shrike territories in Northern Italy was obtained through multiple focused surveys; for a subset of pairs, we also measured territory area and number of fledged juveniles. We used Maximum Entropy modelling to build a SDM on the basis of territory distribution. We used generalized least squares and spatial generalized mixed models to relate territory size and number of fledged juveniles to SDM suitability, while controlling for spatial autocorrelation. 3. Species distribution models predicted shrike distribution very well. Territory size was negatively related to suitability estimated through SDM, while the number of fledglings significantly increased with the suitability of the territory. This was true also when SDM was built using only spatially and temporally independent data. 4. Results show a clear relationship between ES estimated through presence-only SDMs and two key parameters related to species' reproduction, suggesting that suitability estimated by SDM, and habitat quality determining reproduction parameters in our model system, are correlated. Our study shows the potential use of SDMs to infer important fitness parameters; this information can have great importance in management and conservation. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological
Teuber, T.; Steidl, G.; Chan, R. H.
2013-03-01
In this paper, we analyze the minimization of seminorms ‖L · ‖ on {R}^n under the constraint of a bounded I-divergence D(b, H · ) for rather general linear operators H and L. The I-divergence is also known as Kullback-Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data but also in the case of multiplicative Gamma noise. Often H represents, e.g., a linear blur operator and L is some discrete derivative or frame analysis operator. A central part of this paper consists in proving relations between the parameters of I-divergence constrained and penalized problems. To solve the I-divergence constrained problem, we consider various first-order primal-dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. One of these proximation problems is an I-divergence constrained least-squares problem which can be solved based on Morozov’s discrepancy principle by a Newton method. We prove that these algorithms produce not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter so that the corresponding penalized problem has the same solution. Furthermore, we derive a rule for automatically setting the constraint parameter for data corrupted by multiplicative Gamma noise. The performance of the various algorithms is finally demonstrated for different image restoration tasks both for images corrupted by Poisson noise and multiplicative Gamma noise.
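The I-divergence that defines the constraint above has the explicit elementwise form D(b, x) = Σᵢ bᵢ log(bᵢ/xᵢ) − bᵢ + xᵢ; a small sketch of just this quantity (the primal-dual algorithms themselves are not reproduced here):

```python
import math

def i_divergence(b, x):
    """Generalized Kullback-Leibler (I-) divergence
    D(b, x) = sum_i b_i*log(b_i/x_i) - b_i + x_i, with 0*log(0) taken as 0."""
    total = 0.0
    for bi, xi in zip(b, x):
        total += (bi * math.log(bi / xi) if bi > 0 else 0.0) - bi + xi
    return total

d_same = i_divergence([1.0, 2.0], [1.0, 2.0])  # vanishes when b = x
d_off = i_divergence([1.0, 2.0], [2.0, 1.0])   # strictly positive otherwise
```

Unlike the ordinary KL divergence, the extra −b + x terms make D well defined when b and x are not normalized, which is what makes it the natural data-fit term for Poisson noise.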
International Nuclear Information System (INIS)
Teuber, T; Steidl, G; Chan, R H
2013-01-01
In this paper, we analyze the minimization of seminorms ‖L · ‖ on R n under the constraint of a bounded I-divergence D(b, H · ) for rather general linear operators H and L. The I-divergence is also known as Kullback–Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data but also in the case of multiplicative Gamma noise. Often H represents, e.g., a linear blur operator and L is some discrete derivative or frame analysis operator. A central part of this paper consists in proving relations between the parameters of I-divergence constrained and penalized problems. To solve the I-divergence constrained problem, we consider various first-order primal–dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. One of these proximation problems is an I-divergence constrained least-squares problem which can be solved based on Morozov’s discrepancy principle by a Newton method. We prove that these algorithms produce not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter so that the corresponding penalized problem has the same solution. Furthermore, we derive a rule for automatically setting the constraint parameter for data corrupted by multiplicative Gamma noise. The performance of the various algorithms is finally demonstrated for different image restoration tasks both for images corrupted by Poisson noise and multiplicative Gamma noise. (paper)
State and parameter estimation of state-space model with entry-wise correlated uniform noise
Czech Academy of Sciences Publication Activity Database
Pavelková, Lenka; Kárný, Miroslav
2014-01-01
Roč. 28, č. 11 (2014), s. 1189-1205 ISSN 0890-6327 R&D Projects: GA TA ČR TA01030123; GA ČR GA13-13502S Institutional research plan: CEZ:AV0Z1075907 Keywords : state-space models * bounded noise * filtering problems * estimation algorithms * uncertain dynamic systems Subject RIV: BC - Control Systems Theory Impact factor: 1.346, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/pavelkova-0422958.pdf
Zhang, Yonggen; Schaap, Marcel G.
2017-04-01
Pedotransfer functions (PTFs) have been widely used to predict soil hydraulic parameters in place of expensive laboratory or field measurements. Rosetta (Schaap et al., 2001, denoted as Rosetta1) is one of many PTFs and is based on artificial neural network (ANN) analysis coupled with the bootstrap re-sampling method, which allows the estimation of van Genuchten water retention parameters (van Genuchten, 1980, abbreviated here as VG), saturated hydraulic conductivity (Ks), and their uncertainties. In this study, we present an improved set of hierarchical pedotransfer functions (Rosetta3) that unifies the water retention and Ks submodels into one. Parameter uncertainty of the fit of the VG curve to the original retention data is used in the ANN calibration procedure to reduce bias of parameters predicted by the new PTF. One thousand bootstrap replicas were used to calibrate the new models, compared to 60 or 100 in Rosetta1, thus allowing the uni-variate and bi-variate probability distributions of predicted parameters to be quantified in greater detail. We determined the optimal weights for VG parameters and Ks, the optimal number of hidden nodes in the ANN, and the number of bootstrap replicas required for statistically stable estimates. Results show that matric potential-dependent bias was reduced significantly while root mean square error (RMSE) for water content was reduced modestly; RMSE for Ks was increased by 0.9% (H3w) to 3.3% (H5w) in the new models on the log scale of Ks compared with the Rosetta1 model. It was found that estimated distributions of parameters were mildly non-Gaussian and could instead be described rather well with heavy-tailed α-stable distributions. On the other hand, arithmetic means had only a small estimation bias for most textures when compared with the mean-like "shift" parameter of the α-stable distributions. Arithmetic means and (co-)variances are therefore still recommended as summary statistics of the estimated distributions. However, it
Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil
2016-06-01
An efficient approach to estimate model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, is presented. We show the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt to apply DE to the parameter estimation of residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (zo and xo, respectively) and the shape factors (q and ƞ). The error energy maps generated for some parameter pairs have successfully revealed the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies have been evaluated with success via DE/best/1/bin, a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies have been considered, and the results obtained show the efficiency of the algorithm. Then, using the strategy applied in the synthetic examples, some field anomalies observed in various mineral explorations, such as a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India) and a base metal sulphide deposit (Quebec, Canada), have been considered to estimate the model parameters of the ore bodies. The applications show that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm has also been investigated by a Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule. Based on the resulting histogram reconstructions of both synthetic and field data examples, the algorithm has provided reliable parameter estimations being within the sampling limits of
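A compact sketch of the DE/best/1/bin strategy named above, applied to a standard shape-factor forward model for an idealized buried source, g(x) = A·z/(x² + z²)^q (a generic illustration with synthetic data, not the authors' implementation; the source origin is fixed at x₀ = 0 to keep the sketch short):

```python
import random

def forward(params, xs):
    """Residual gravity of an idealized source at the origin:
    g(x) = A * z / (x**2 + z**2)**q  (q = 1.5 corresponds to a sphere)."""
    A, z, q = params
    return [A * z / (x * x + z * z) ** q for x in xs]

def de_best_1_bin(obj, bounds, np_=30, gens=400, F=0.6, CR=0.9, seed=7):
    """Compact DE/best/1/bin: mutate around the current best member,
    binomial crossover, greedy one-to-one selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [obj(p) for p in pop]
    for _ in range(gens):
        best = pop[cost.index(min(cost))]
        for i in range(np_):
            r1, r2 = rng.sample([j for j in range(np_) if j != i], 2)
            jrand = rng.randrange(dim)
            trial = [best[d] + F * (pop[r1][d] - pop[r2][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            c = obj(trial)
            if c <= cost[i]:
                pop[i], cost[i] = trial, c
    return pop[cost.index(min(cost))]

xs = [x / 2.0 for x in range(-20, 21)]   # profile stations
true = (100.0, 5.0, 1.5)                 # amplitude A, depth z, shape factor q
data = forward(true, xs)
misfit = lambda p: sum((d - g) ** 2 for d, g in zip(data, forward(p, xs)))
est = de_best_1_bin(misfit, [(1.0, 500.0), (0.5, 20.0), (0.5, 3.0)])
```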
Kinetic parameter estimation in N. europaea biofilms using a 2-D reactive transport model.
Lauchnor, Ellen G; Semprini, Lewis; Wood, Brian D
2015-06-01
Biofilms of the ammonia oxidizing bacterium Nitrosomonas europaea were cultivated to study microbial processes associated with ammonia oxidation in pure culture. We explored the hypothesis that the kinetic parameters of ammonia oxidation in N. europaea biofilms were in the range of those determined with batch suspended cells. Oxygen and pH microelectrodes were used to measure dissolved oxygen (DO) concentrations and pH above and inside biofilms, and reactive transport modeling was performed to simulate the measured DO and pH profiles. A two-dimensional (2-D) model was used to simulate advection parallel to the biofilm surface and diffusion through the overlying fluid, while reaction and diffusion were simulated in the biofilm. Three experimental studies of microsensor measurements were performed with biofilms: i) NH3 concentrations near the Ksn value of 40 μM determined in suspended cell tests; ii) limited buffering capacity, which resulted in a pH gradient within the biofilms; and iii) NH3 concentrations well below the Ksn value. Very good fits to the DO concentration profiles both in the fluid above and in the biofilms were achieved using the 2-D model. The modeling study revealed that the half-saturation coefficient for NH3 in N. europaea biofilms was close to the value measured in suspended cells. However, the third study of biofilms with low availability of NH3 deviated from the model prediction. The model also predicted shifts in the DO profiles and the gradient in pH that resulted for the case of limited buffering capacity. The results illustrate the importance of incorporating both key transport and chemical processes in a biofilm reactive transport model. © 2014 Wiley Periodicals, Inc.
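The half-saturation coefficient Ksn discussed above enters the reaction term through standard Monod (Michaelis-Menten) kinetics, r = r_max·S/(Ks + S); a one-function sketch (r_max = 1 is an arbitrary illustrative value):

```python
def monod_rate(s, r_max, ks):
    """Monod (Michaelis-Menten) rate: r = r_max * S / (Ks + S)."""
    return r_max * s / (ks + s)

# at S = Ks the rate is exactly half of r_max (here Ks = 40 uM, as in the text)
half = monod_rate(40e-6, 1.0, 40e-6)
```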
State and parameter estimation in bio processes
Energy Technology Data Exchange (ETDEWEB)
Maher, M.; Roux, G.; Dahhou, B. [Centre National de la Recherche Scientifique (CNRS), 31 - Toulouse (France); Institut National des Sciences Appliquees (INSA), 31 - Toulouse (France)]
1994-12-31
A major difficulty in monitoring and control of bio-processes is the lack of reliable and simple sensors for following the evolution of the main state variables and parameters such as biomass, substrate, product, growth rate, etc. In this article, an adaptive estimation algorithm is proposed to recover the state and parameters in bio-processes. This estimator utilizes the physical process model and the reference model approach. Experiments concerning the estimation of biomass and product concentrations and specific growth rate, during batch, fed-batch and continuous fermentation processes, are presented. The results show the performance of this adaptive estimation approach. (authors) 12 refs.
Ding, Ying; Nan, Bin
2013-01-01
In many semiparametric models that are parameterized by two types of parameters – a Euclidean parameter of interest and an infinite-dimensional nuisance parameter, the two parameters are bundled together, i.e., the nuisance parameter is an unknown function that contains the parameter of interest as part of its argument. For example, in a linear regression model for censored survival data, the unspecified error distribution function involves the regression coefficients. Motivated by developing an efficient estimating method for the regression parameters, we propose a general sieve M-theorem for bundled parameters and apply the theorem to deriving the asymptotic theory for the sieve maximum likelihood estimation in the linear regression model for censored survival data. The numerical implementation of the proposed estimating method can be achieved through the conventional gradient-based search algorithms such as the Newton-Raphson algorithm. We show that the proposed estimator is consistent and asymptotically normal and achieves the semiparametric efficiency bound. Simulation studies demonstrate that the proposed method performs well in practical settings and yields more efficient estimates than existing estimating equation based methods. Illustration with a real data example is also provided. PMID:24436500
Estimation of semolina dough rheological parameters by inversion of a finite elements model
Directory of Open Access Journals (Sweden)
Angelo Fabbri
2015-10-01
Full Text Available The description of the rheological properties of food materials plays an important role in food engineering. In particular, optimising the pasta manufacturing process (extrusion) requires knowledge of the rheological properties of semolina dough. Unfortunately, the characterisation of non-Newtonian fluids, such as food doughs, requires a notable time effort, especially in terms of the number of tests to be carried out. The present work proposes an alternative method, based on the combination of laboratory measurements, made with a simplified tool, with the inversion of a finite elements numerical model. To determine the rheological parameters, an objective function, defined as the distance between simulation and experimental data, was considered, and the well-known Levenberg-Marquardt optimisation algorithm was used. In order to verify the feasibility of the method, the rheological characterisation of the dough was also carried out by a traditional procedure. Results show that the difference between measurements of the rheological parameters of the semolina dough made with the traditional procedure and the inverse method is very small (maximum percentage error equal to 3.6%). This agreement supports the coherence of the inverse method which, in general, may be used to characterise many non-Newtonian materials.
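The inversion loop above pairs a forward model with Levenberg-Marquardt minimization of the data misfit. A self-contained sketch of that loop on a two-parameter power-law model σ = K·γ̇ⁿ (used here only as a cheap stand-in for the finite-element forward model; all values are synthetic and illustrative):

```python
import math

def lm_fit_power_law(gammas, sigmas, K0=1.0, n0=1.0, iters=60, lam=1e-3):
    """Tiny Levenberg-Marquardt for the power-law model sigma = K * gamma**n,
    standing in for the finite-element forward model of the paper."""
    def residuals(K, n):
        return [s - K * g ** n for g, s in zip(gammas, sigmas)]
    def sse(K, n):
        return sum(r * r for r in residuals(K, n))
    K, n = K0, n0
    for _ in range(iters):
        r = residuals(K, n)
        # Jacobian of the residuals with respect to (K, n)
        J = [(-g ** n, -K * g ** n * math.log(g)) for g in gammas]
        # damped normal equations (J^T J + lam*I) delta = -J^T r, 2x2 closed form
        a11 = sum(jK * jK for jK, _ in J) + lam
        a12 = sum(jK * jn for jK, jn in J)
        a22 = sum(jn * jn for _, jn in J) + lam
        b1 = -sum(jK * ri for (jK, _), ri in zip(J, r))
        b2 = -sum(jn * ri for (_, jn), ri in zip(J, r))
        det = a11 * a22 - a12 * a12
        dK = (b1 * a22 - b2 * a12) / det
        dn = (a11 * b2 - a12 * b1) / det
        if sse(K + dK, n + dn) < sse(K, n):
            K, n, lam = K + dK, n + dn, lam * 0.5   # accept: toward Gauss-Newton
        else:
            lam *= 10.0                             # reject: increase damping
    return K, n

# exact synthetic shear data generated from K = 3, n = 0.4 (illustrative values)
gs = [0.5, 1.0, 2.0, 4.0, 8.0]
ss = [3.0 * g ** 0.4 for g in gs]
K, n = lm_fit_power_law(gs, ss)
```

In the paper the residual vector would come from the finite-element simulation rather than an analytic model, but the damping logic is the same.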
Application of spreadsheet to estimate infiltration parameters
Directory of Open Access Journals (Sweden)
Mohammad Zakwan
2016-09-01
Full Text Available Infiltration is the process of flow of water into the ground through the soil surface. Soil water although contributes a negligible fraction of total water present on earth surface, but is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for estimation of effective rainfall, groundwater recharge, and designing of irrigation systems. Numerous infiltration models are in use for estimation of infiltration rates. The conventional graphical approach for estimation of infiltration parameters often fails to estimate the infiltration parameters precisely. The generalised reduced gradient (GRG solver is reported to be a powerful tool for estimating parameters of nonlinear equations and it has, therefore, been implemented to estimate the infiltration parameters in the present paper. Field data of infiltration rate available in literature for sandy loam soils of Umuahia, Nigeria were used to evaluate the performance of GRG solver. A comparative study of graphical method and GRG solver shows that the performance of GRG solver is better than that of conventional graphical method for estimation of infiltration rates. Further, the performance of Kostiakov model has been found to be better than the Horton and Philip's model in most of the cases based on both the approaches of parameter estimation.
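The Kostiakov model compared above, F = k·tᵃ, is linear in log space, so its parameters can also be obtained in closed form by ordinary least squares on (log t, log F) — a simple alternative to both the graphical and the GRG approaches, though note that log-space least squares is not identical to minimizing the untransformed residuals (a generic sketch with exact synthetic data):

```python
import math

def fit_kostiakov(times, cum_infiltration):
    """Least-squares fit of the Kostiakov model F = k * t**a in log space
    (log F = log k + a * log t), closed form via simple linear regression."""
    xs = [math.log(t) for t in times]
    ys = [math.log(F) for F in cum_infiltration]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    a = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    k = math.exp(ybar - a * xbar)
    return k, a

# exact synthetic data generated from F = 2 * t**0.6
ts = [1.0, 2.0, 4.0, 8.0]
Fs = [2.0 * t ** 0.6 for t in ts]
k, a = fit_kostiakov(ts, Fs)
```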
Mesoscopic modeling and parameter estimation of a lithium-ion battery based on LiFePO4/graphite
Jokar, Ali; Désilets, Martin; Lacroix, Marcel; Zaghib, Karim
2018-03-01
A novel numerical model for simulating the behavior of lithium-ion batteries based on LiFePO4 (LFP)/graphite is presented. The model is based on the modified Single Particle Model (SPM) coupled to a mesoscopic approach for the LFP electrode. The model comprises one representative spherical particle as the graphite electrode, and N LFP units as the positive electrode. All the SPM equations are retained to model the negative electrode performance. The mesoscopic model rests on non-equilibrium thermodynamic conditions and uses a non-monotonic open circuit potential for each unit. A parameter estimation study is also carried out to identify all the parameters needed for the model. The unknown parameters are the solid diffusion coefficient of the negative electrode (Ds,n), the reaction-rate constant of the negative electrode (Kn), the negative and positive electrode porosities (εn and εp), the initial State-Of-Charge of the negative electrode (SOCn,0), the initial partial composition of the LFP units (yk,0), the minimum and maximum resistance of the LFP units (Rmin and Rmax), and the solution resistance (Rcell). The results show that the mesoscopic model can successfully simulate the electrochemical behavior of lithium-ion batteries at low and high charge/discharge rates. The model also describes adequately the lithiation/delithiation of the LFP particles; however, it is computationally expensive compared to macro-based models.
Czech Academy of Sciences Publication Activity Database
Papáček, Š.; Čelikovský, Sergej; Rehák, Branislav; Štys, D.
2010-01-01
Roč. 80, č. 6 (2010), s. 1302-1309 ISSN 0378-4754 R&D Projects: GA ČR(CZ) GA102/08/0186 Institutional research plan: CEZ:AV0Z10750506 Keywords : Photosynthetic factory * Experimental design * Parameter estimation * Two-scale modeling Subject RIV: BC - Control Systems Theory Impact factor: 0.812, year: 2010 http://library.utia.cas.cz/separaty/2010/TR/celikovsky-0341543.pdf
ESTIMATION OF GENETIC PARAMETERS IN TROPICARNE CATTLE WITH RANDOM REGRESSION MODELS USING B-SPLINES
Directory of Open Access Journals (Sweden)
Joel Domínguez Viveros
2015-04-01
Full Text Available The objectives were to estimate variance components and direct (h2) and maternal (m2) heritability for growth of Tropicarne cattle, based on a random regression model using B-splines to model the random effects. Information from 12 890 monthly weighings of 1787 calves, from birth to 24 months old, was analyzed. The pedigree included 2504 animals. The random effects model included genetic and permanent environmental effects (direct and maternal) of cubic order, and residuals. The fixed effects included contemporary groups (year-season of weighing), sex and the covariate age of the cow (linear and quadratic). The B-splines were defined on four knots across the growth period analyzed. Analyses were performed with the software Wombat. The variances (phenotypic and residual) presented a similar behavior: from 7 to 12 months of age they had a negative trend; from birth to 6 months and from 13 to 18 months a positive trend; after 19 months they remained constant. The m2 estimates were low and near zero, with an average of 0.06 in an interval of 0.04 to 0.11; the h2 estimates were also close to zero, with an average of 0.10 in an interval of 0.03 to 0.23.
Koyaguchi, T.; Anderson, K. R.; Kozono, T.
2017-12-01
Recent development of conduit-flow models has revealed that the evolution of a volcanic eruption (e.g., changes in magma discharge rate and chamber pressure) is sensitively dependent on model parameters related to geological and petrological conditions (such as properties of the magma and the volume and depth of the magma chamber). On the other hand, time-varying observations of ground deformation and magma discharge rate are now increasingly available, which allows us to estimate the model parameters through a Bayesian inverse analysis of those observations (Anderson and Segall, 2013); however, this approach has not yet been applied to explosive eruptions because of mathematical and computational difficulties in the conduit-flow models. Here, we perform a Bayesian inversion to estimate the conduit-flow model parameters of explosive eruptions utilizing an approximate time-dependent eruption model. This model is based on the analytical solutions of a steady conduit-flow model (Koyaguchi, 2005; Kozono and Koyaguchi, 2009) coupled to a simple elastic magma chamber. It reproduces diverse features of the evolution of magma discharge rate and chamber pressure during explosive eruptions, and also allows us to analytically derive the mathematical relationships describing those evolutions. The derived relationships show that the mass flow rate just before the cessation of explosive eruptions is expressed by a simple function of dimensionless magma viscosity and dimensionless gas-permeability in magma. We are also able to derive a relationship between dimensionless viscosity and mass flow rate, which may be available from field data. Consequently, the posterior probability density functions of the conduit-flow model parameters of explosive eruptions (e.g., the radius of the conduit) are constrained by the intersection of these two relationships. The validity of the present analytical method was tested by a numerical method using a Markov chain Monte Carlo (MCMC) algorithm. We also
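The MCMC machinery used for validation above is, at its core, a Metropolis-Hastings sampler. A minimal random-walk sketch targeting a toy one-dimensional posterior (a stand-in for a conduit-flow parameter such as conduit radius; everything here is illustrative):

```python
import math
import random

def metropolis_hastings(log_post, x0, n=20000, step=0.5, seed=3):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step) and accept
    with probability min(1, post(x')/post(x))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp
        chain.append(x)
    return chain

# toy 1-D posterior: Gaussian with mean 2.0, sd 0.5
log_post = lambda x: -0.5 * ((x - 2.0) / 0.5) ** 2
chain = metropolis_hastings(log_post, x0=0.0)
burned = chain[5000:]
post_mean = sum(burned) / len(burned)
```

In the actual inversion, `log_post` would evaluate the approximate time-dependent eruption model against the observed discharge-rate and deformation series rather than an analytic density.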
Parameter estimation and inverse problems
Aster, Richard C; Thurber, Clifford H
2005-01-01
Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web for facilitating use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...
Whittington, Jesse; Sawaya, Michael A
2015-01-01
Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth
Directory of Open Access Journals (Sweden)
Jaime Araujo Cobuci
2005-03-01
Full Text Available Test-day milk yield records of 11,023 first-parity Holstein cows were used to estimate genetic parameters for milk yield during different lactation periods. Covariance components were estimated using two random regression models, RRM1 and RRM2, fitted by the restricted maximum likelihood method and compared by the likelihood ratio test. Additive genetic variances determined by RRM1 and additive genetic and permanent environmental variances estimated by RRM2 were described using the Wilmink function. Residual variance was constant throughout lactation for the two models. The heritability estimates obtained by RRM1 (0.34 to 0.56) were higher than those obtained by RRM2 (0.15 to 0.31). Due to the high heritability estimates for milk yield throughout lactation and the negative genetic correlation between test-day yields during different lactation periods, the RRM1 model did not fit the data. Overall, genetic correlations between individual test days tended to decrease at the extremes of the lactation trajectory, showing values close to unity for adjacent test days. The inclusion of random regression coefficients to describe permanent environmental effects led to a more precise estimation of genetic and non-genetic effects that influence milk yield.
Crisan, Emil Gabriel
Certification requirements, optimization and minimum project costs, design of flight control laws and the implementation of flight simulators are among the principal applications of system identification in the aeronautical industry. This document examines the practical application of parameter estimation techniques to the problem of estimating helicopter stability and control derivatives from flight test data provided by Bell Helicopter Textron Canada. The purpose of this work is twofold: a time-domain application of the Output Error method using the Gauss-Newton algorithm and a frequency-domain identification method to obtain the aerodynamic and control derivatives of a helicopter. The adopted model for this study is a fully coupled, 6 degree of freedom (DoF) state space model. The technique used for rotorcraft identification in the time domain was the Maximum Likelihood Estimation method, embodied in a modified version of NASA's Maximum Likelihood Estimator program (MMLE3) obtained from the National Research Council (NRC). The frequency-domain system identification procedure is incorporated in a comprehensive package of user-oriented programs referred to as CIFER®. The coupled, 6 DoF model does not include the high frequency main rotor modes (flapping, lead-lag, twisting), yet it is capable of modeling rotorcraft dynamics fairly accurately, as confirmed by model verification. The identification results demonstrate that MMLE3 is a powerful and effective tool for extracting reliable helicopter models from flight test data. The results obtained with the frequency-domain approach demonstrated that CIFER® could achieve good results even on limited data.
Mohammadi, Mohammad Hossein; Vanclooster, Marnik
2012-05-01
Solute transport in partially saturated soils is largely affected by the fluid velocity distribution and pore size distribution within the solute transport domain. Hence, it is possible to describe the solute transport process in terms of the pore size distribution of the soil, and indirectly in terms of the soil hydraulic properties. In this paper, we present a conceptual approach that allows predicting the parameters of the Convective Lognormal Transfer (CLT) model from knowledge of soil moisture and the Soil Moisture Characteristic (SMC), parameterized by means of the closed-form model of Kosugi (1996). It is assumed that in partially saturated conditions, the air-filled pore volume acts as an inert solid phase, allowing the use of the Arya et al. (1999) pragmatic approach to estimate solute travel time statistics from the saturation degree and SMC parameters. The approach is evaluated using a set of partially saturated transport experiments as presented by Mohammadi and Vanclooster (2011). Experimental results showed that the mean solute travel time, μ(t), increases proportionally with depth (travel distance) and decreases with flow rate. The variance of solute travel time, σ²(t), first decreases with flow rate up to 0.4-0.6 Ks and subsequently increases. For all tested breakthrough curves (BTCs), solute transport predicted with μ(t) estimated from the conceptual model performed much better than predictions with μ(t) and σ²(t) estimated from calibration of solute transport at shallow soil depths. The use of μ(t) estimated from the conceptual model therefore increases the robustness of the CLT model in predicting solute transport in heterogeneous soils at larger depths. In view of the fact that reasonable indirect estimates of the SMC can be made from basic soil properties using pedotransfer functions, the presented approach may be useful for predicting solute transport at field or watershed scales. Copyright © 2012 Elsevier B.V. All rights reserved.
Reji, T K; Ravi, P M; Ajith, T L; Dileep, B N; Hegde, A G; Sarkar, P K
2012-04-01
Tritium content in air moisture, soil water, rain water and plant water samples collected around the Kaiga site, India was estimated, and the scavenging ratio, wet deposition velocity and ratio of specific activities of tritium between soil water and air moisture were calculated and interpreted. The scavenging ratio was found to vary from 0.06 to 1.04 with a mean of 0.46. The wet deposition velocity of tritium observed in the present study was in the range of 3.3×10⁻³ to 1.1×10⁻² m s⁻¹ with a mean of 6.6×10⁻³ m s⁻¹. The ratio of specific activity of tritium in soil moisture to that in air moisture ranged from 0.17 to 0.95 with a mean of 0.49. The specific activity of tritium in plant water in this study varied from 73 to 310 Bq l⁻¹. The present study is very useful for understanding the process and modelling of the transfer of tritium through the air/soil/plant system at the Kaiga site.
Williams, C B; Bennett, G L; Keele, J W
1995-03-01
Breed parameters for a computer model that simulated differences in the composition of empty-body gain of beef cattle, resulting from differences in postweaning level of nutrition that are not associated with empty BW, were estimated for 17 biological types of cattle (steers from F1 crosses of 16 sire breeds [Hereford, Angus, Jersey, South Devon, Limousin, Simmental, Charolais, Red Poll, Brown Swiss, Gelbvieh, Maine Anjou, Chianina, Brahman, Sahiwal, Pinzgauer, and Tarentaise] mated to Hereford and Angus dams). One value for the maximum fractional growth rate of fat-free matter (KMAX) was estimated and used across all breed types. Mature fat-free matter (FFMmat) was estimated from data on mature cows for each of the 17 breed types. Breed type values for a fattening parameter (THETA) were estimated from growth and composition data at slaughter on steers of the 17 breed types, using the previously estimated constant KMAX and breed values for FFMmat. For each breed type, THETA values were unique for given values of KMAX, FFMmat, and composition at slaughter. The results showed that THETA was most sensitive to KMAX and had similar sensitivity to FFMmat and composition at slaughter. Values for THETA were most sensitive for breed types with large THETA values (Chianina, Charolais, and Limousin crossbred steers) and least sensitive for breed types with small THETA values (purebred Angus, crossbred Jersey, and Red Poll steers).(ABSTRACT TRUNCATED AT 250 WORDS)
Colossi, Bibiana; Fleischmann, Ayan; Siqueira, Vinicius; Bitar, Ahmad Al; Paiva, Rodrigo; Fan, Fernando; Ruhoff, Anderson; Pontes, Paulo; Collischonn, Walter
2017-04-01
Large-scale representation of soil moisture conditions can be achieved through hydrological simulation and remote sensing techniques. However, both methodologies have several limitations, which suggests potential benefits in using the two sources of information together. This study therefore had two main objectives: to perform a cross-validation between remotely sensed soil moisture from the SMOS (Soil Moisture and Ocean Salinity) L3 product and soil moisture simulated with the large-scale hydrological model MGB-IPH; and to evaluate the potential benefits of including remotely sensed soil moisture in model parameter estimation. The study analyzed results for the South American continent, where hydrometeorological monitoring is usually scarce. The study was performed in the Paraná River Basin, an important South American basin whose extension and particular characteristics allow the representation of different climatic, geological and, consequently, hydrological conditions. Soil moisture estimated with SMOS was transformed from water content to a Soil Water Index (SWI) so that it is comparable to the saturation degree simulated with the MGB-IPH model. The multi-objective complex evolution algorithm (MOCOM-UA) was applied for automatic model calibration considering only remotely sensed soil moisture, only discharge, and both types of information together. Results show that this type of analysis can be very useful because it allows limitations in model structure to be recognized. In the case of hydrological model calibration, this approach can avoid the use of parameters out of range in an attempt to compensate for model limitations. It also indicates the aspects of the model where efforts should be concentrated in order to improve the representation of hydrological and hydraulic processes. Automatic calibration gives an estimate of how different types of information can be applied and of the quality of results they might lead to. We emphasize that these findings can be valuable for hydrological modeling in large scale South American
Directory of Open Access Journals (Sweden)
V. M. Khade
2013-03-01
Full Text Available The ensemble adjustment Kalman filter (EAKF) is used to estimate the erodibility fraction parameter field in a coupled meteorology and dust aerosol model (Coupled Ocean/Atmosphere Mesoscale Prediction System, COAMPS) over the Sahara desert. Erodibility is often employed as the key parameter to map dust sources. It is used along with surface winds (or surface wind stress) to calculate dust emissions. Using the Saharan desert as a test bed, perfect-model Observation System Simulation Experiments (OSSEs) with 40 ensemble members, and observations of aerosol optical depth (AOD), the EAKF is shown to recover correct values of erodibility at about 80% of the points in the domain. It is found that dust advected from upstream grid points acts as noise and complicates erodibility estimation. It is also found that the rate of convergence is significantly impacted by the structure of the initial distribution of erodibility estimates; isotropic initial distributions exhibit slow convergence, while initial distributions with geographically localized structure converge more quickly. Experiments using observations of Deep Blue AOD retrievals from the MODIS satellite sensor result in erodibility estimates that are considerably lower than the values used operationally. Verification shows that the use of the tuned erodibility field results in better predictions of AOD over the west Sahara and the Arabian Peninsula.
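The EAKF used in the study is a deterministic square-root filter, but the ensemble update it performs is closely related to the stochastic ensemble Kalman update, which is easy to sketch for a single scalar parameter. Everything below (the Gaussian prior, the observation value, the error variance) is a toy assumption, not the COAMPS configuration:

```python
import random

def enkf_update(ensemble, obs, obs_var, h=lambda x: x, rng=random):
    # Stochastic ensemble Kalman update for one scalar parameter: shift each
    # member by gain * (perturbed observation - predicted observation).
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    mx, mh = sum(ensemble) / n, sum(hx) / n
    cov_xh = sum((x - mx) * (y - mh) for x, y in zip(ensemble, hx)) / (n - 1)
    var_h = sum((y - mh) ** 2 for y in hx) / (n - 1)
    gain = cov_xh / (var_h + obs_var)
    return [x + gain * (obs + rng.gauss(0.0, obs_var ** 0.5) - y)
            for x, y in zip(ensemble, hx)]

rng = random.Random(3)
prior = [rng.gauss(0.5, 0.2) for _ in range(40)]   # 40 members, as in the study
post = enkf_update(prior, obs=0.8, obs_var=0.01, rng=rng)
m_prior, m_post = sum(prior) / 40, sum(post) / 40
print(m_prior < m_post <= 0.9)                     # mean pulled toward the observation
```

In the paper the observation operator h is the model's AOD prediction rather than the identity, which is where advected dust enters as noise in the erodibility update.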
Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver
2012-01-01
Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…
Zhang, Hongjuan; Hendricks Franssen, Harrie-Jan; Han, Xujun; Vrugt, Jasper A.; Vereecken, Harry
2017-09-01
Land surface models (LSMs) use a large cohort of parameters and state variables to simulate the water and energy balance at the soil-atmosphere interface. Many of these model parameters cannot be measured directly in the field, and require calibration against measured fluxes of carbon dioxide, sensible and/or latent heat, and/or observations of the thermal and/or moisture state of the soil. Here, we evaluate the usefulness and applicability of four different data assimilation methods for joint parameter and state estimation of the Variable Infiltration Capacity Model (VIC-3L) and the Community Land Model (CLM) using a 5-month calibration (assimilation) period (March-July 2012) of areal-averaged SPADE soil moisture measurements at 5, 20, and 50 cm depths in the Rollesbroich experimental test site in the Eifel mountain range in western Germany. We used the EnKF with state augmentation or dual estimation, respectively, and the residual resampling PF with a simple, statistically deficient, or more sophisticated, MCMC-based parameter resampling method. The performance of the calibrated LSM models was investigated using SPADE water content measurements of a 5-month evaluation period (August-December 2012). As expected, all DA methods enhance the ability of the VIC and CLM models to describe spatiotemporal patterns of moisture storage within the vadose zone of the Rollesbroich site, particularly if the maximum baseflow velocity (VIC) or fractions of sand, clay, and organic matter of each layer (CLM) are estimated jointly with the model states of each soil layer. The differences between the soil moisture simulations of VIC-3L and CLM are much larger than the discrepancies among the four data assimilation methods. The EnKF with state augmentation or dual estimation yields the best performance of VIC-3L and CLM during the calibration and evaluation period, yet results are in close agreement with the PF using MCMC resampling. Overall, CLM demonstrated the best performance for
White, Corey N; Servant, Mathieu; Logan, Gordon D
2018-02-01
Researchers and clinicians are interested in estimating individual differences in the ability to process conflicting information. Conflict processing is typically assessed by comparing behavioral measures like RTs or error rates from conflict tasks. However, these measures are hard to interpret because they can be influenced by additional processes like response caution or bias. This limitation can be circumvented by employing cognitive models to decompose behavioral data into components of underlying decision processes, providing better specificity for investigating individual differences. A new class of drift-diffusion models has been developed for conflict tasks, presenting a potential tool to improve analysis of individual differences in conflict processing. However, measures from these models have not been validated for use in experiments with limited data collection. The present study assessed the validity of these models with a parameter-recovery study to determine whether and under what circumstances the models provide valid measures of cognitive processing. Three models were tested: the dual-stage two-phase model (Hübner, Steinhauser, & Lehle, Psychological Review, 117(3), 759-784, 2010), the shrinking spotlight model (White, Ratcliff, & Starns, Cognitive Psychology, 63(4), 210-238, 2011), and the diffusion model for conflict tasks (Ulrich, Schröter, Leuthold, & Birngruber, Cognitive Psychology, 78, 148-174, 2015). The validity of the model parameters was assessed using different methods of fitting the data and different numbers of trials. The results show that each model has limitations in recovering valid parameters, but they can be mitigated by adding constraints to the model. Practical recommendations are provided for when and how each model can be used to analyze data and provide measures of processing in conflict tasks.
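None of the three conflict models is reproduced here, but the simulation backbone they share, a noisy evidence accumulator integrated until it hits a response boundary, can be sketched with a plain drift-diffusion simulator. The drift and boundary values below are illustrative only, not estimates from the study:

```python
import random

def simulate_ddm(drift, bound, rng, noise=1.0, dt=0.001):
    # Euler-Maruyama random walk between symmetric boundaries at +/-bound;
    # a basic diffusion model, not the conflict-task variants in the study.
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return t, x > 0          # (response time in s, hit the upper boundary?)

rng = random.Random(42)
trials = [simulate_ddm(drift=1.5, bound=1.0, rng=rng) for _ in range(200)]
accuracy = sum(upper for _, upper in trials) / len(trials)
mean_rt = sum(rt for rt, _ in trials) / len(trials)
print(accuracy > 0.85 and mean_rt > 0.0)
```

A parameter-recovery study like the one in the abstract runs exactly this kind of simulator with known parameters, fits the model to the simulated trials, and checks how closely the fitted values match the generating ones.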
Mathematical model in post-mortem estimation of brain edema using morphometric parameters.
Radojevic, Nemanja; Radnic, Bojana; Vucinic, Jelena; Cukic, Dragana; Lazovic, Ranko; Asanin, Bogdan; Savic, Slobodan
2017-01-01
Current autopsy principles for evaluating the existence of brain edema are based on a macroscopic subjective assessment performed by pathologists. The gold standard is a time-consuming histological verification of the presence of the edema. By measuring the diameters of the cranial cavity, as individually determined morphometric parameters, a mathematical model for rapid evaluation of brain edema was created, based on the brain weight measured during the autopsy. A cohort study was performed on 110 subjects, divided into two groups according to the histological presence or absence of brain edema. In all subjects, the following measures were determined: the volume and the diameters of the cranial cavity (longitudinal and transverse distance and height), the brain volume, and the brain weight. The complex mathematical algorithm revealed a formula for the coefficient ε, which is useful to conclude whether a brain edema is present or not. The average density of non-edematous brain is 0.967 g/ml, while the average density of edematous brain is 1.148 g/ml. The resulting formula for the coefficient ε is (5.79 × longitudinal distance × transverse distance)/brain weight. Coefficient ε can be calculated using measurements of the diameters of the cranial cavity and the brain weight, performed during the autopsy. If the resulting ε is less than 0.9484, it can be stated that there is cerebral edema with a reliability of 98.5%. The method discussed in this paper aims to eliminate the burden of relying on subjective assessments when determining the presence of a brain edema. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
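The reported formula is simple enough to apply directly at autopsy. A minimal sketch, assuming distances in centimetres and brain weight in grams (the abstract does not state units, and the measurements below are hypothetical):

```python
def edema_coefficient(longitudinal, transverse, brain_weight):
    # Formula reported in the abstract:
    # epsilon = (5.79 x longitudinal distance x transverse distance) / brain weight
    return (5.79 * longitudinal * transverse) / brain_weight

# Hypothetical autopsy measurements (units assumed: cm for distances, g for weight)
eps = edema_coefficient(longitudinal=17.0, transverse=14.0, brain_weight=1500.0)
print(round(eps, 4), eps < 0.9484)   # below the 0.9484 cut-off => edema indicated
```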
Iwata, Michio; Miyawaki-Kuwakado, Atsuko; Yoshida, Erika; Komori, Soichiro; Shiraishi, Fumihide
2018-02-02
In a mathematical model, estimation of parameters from time-series data of metabolic concentrations in cells is a challenging task. However, it seems that a promising approach for such estimation has not yet been established. Biochemical Systems Theory (BST) is a powerful methodology to construct a power-law type model for a given metabolic reaction system and to then characterize it efficiently. In this paper, we discuss the use of an S-system root-finding method (S-system method) to estimate parameters from time-series data of metabolite concentrations. We demonstrate that the S-system method is superior to the Newton-Raphson method in terms of the convergence region and iteration number. We also investigate the usefulness of a translocation technique and a complex-step differentiation method toward the practical application of the S-system method. The results indicate that the S-system method is useful to construct mathematical models for a variety of metabolic reaction networks. Copyright © 2018 Elsevier Inc. All rights reserved.
El Gharamti, Mohamad
2014-02-01
The accuracy of groundwater flow and transport model predictions highly depends on our knowledge of subsurface physical parameters. Assimilation of contaminant concentration data from shallow dug wells could help improve model behavior, eventually resulting in better forecasts. In this paper, we propose a joint state-parameter estimation scheme which efficiently integrates a low-rank extended Kalman filtering technique, namely the Singular Evolutive Extended Kalman (SEEK) filter, with the prominent complex-step method (CSM). The SEEK filter avoids the prohibitive computational burden of the extended Kalman filter by updating the forecast along the directions of error growth only, called filter correction directions. CSM is used within the SEEK filter to efficiently compute model derivatives with respect to the state and parameters along the filter correction directions. CSM is derived using complex Taylor expansion and is second-order accurate. It is proven to guarantee accurate gradient computations with zero numerical round-off errors, but requires complexifying the numerical code. We perform twin experiments to test the performance of the CSM-based SEEK for estimating the state and parameters of a subsurface contaminant transport model. We compare the efficiency and the accuracy of the proposed scheme with two standard finite difference-based SEEK filters as well as with the ensemble Kalman filter (EnKF). Assimilation results suggest that the use of the CSM in the context of the SEEK filter may provide up to 80% more accurate solutions when compared to standard finite difference schemes and is competitive with the EnKF, even providing more accurate results in certain situations. We analyze the results based on two different observation strategies. We also discuss the complexification of the numerical code and show that this could be efficiently implemented in the context of subsurface flow models. © 2013 Elsevier B.V.
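The complex-step idea is easy to demonstrate in isolation. The derivative is read off the imaginary part of the function evaluated a tiny imaginary step away, so there is no subtraction of nearly equal numbers and the step can be made extremely small. A minimal sketch, independent of the SEEK filter itself:

```python
import cmath

def complex_step_derivative(f, x, h=1e-20):
    # f'(x) ~ Im(f(x + i*h)) / h -- no subtractive cancellation,
    # so h can be made tiny without round-off loss.
    return f(x + 1j * h).imag / h

f = lambda z: cmath.exp(z) * cmath.sin(z)       # analytic test function
x0 = 0.7
exact = (cmath.exp(x0) * (cmath.sin(x0) + cmath.cos(x0))).real
print(abs(complex_step_derivative(f, x0) - exact) < 1e-12)
```

Compare this with a forward finite difference, where shrinking h below about 1e-8 makes the estimate worse; this is the "complexifying the numerical code" trade-off the abstract mentions, since every operation in the model must accept complex arguments.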
Mathematical model to estimate wind power using the four-parameter Burr distribution
Liu, Sanming; Wang, Zhijie; Pan, Zhaoxu
2018-03-01
When the actual probability distribution of wind speed at a given site needs to be described, the four-parameter Burr distribution is more suitable than other distributions. This paper introduces its important properties and characteristics. The application of the four-parameter Burr distribution to wind speed prediction is also discussed, and an expression for the probability distribution of the output power of a wind turbine is derived.
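The abstract does not state its parameterization. One common four-parameter form, the Burr XII distribution extended with location and scale parameters, is assumed in the sketch below; the shape, scale, and cut-in values are purely illustrative:

```python
def burr4_cdf(x, c, k, scale, loc=0.0):
    # One common four-parameter (Burr XII) form, assumed here:
    # F(x) = 1 - (1 + ((x - loc)/scale)**c) ** (-k),  for x > loc
    if x <= loc:
        return 0.0
    return 1.0 - (1.0 + ((x - loc) / scale) ** c) ** (-k)

# e.g. probability that wind speed stays below a 3 m/s cut-in speed
p = burr4_cdf(3.0, c=2.0, k=1.5, scale=6.0)
print(round(p, 3))
```

With a fitted CDF like this, the distribution of turbine output power follows by transforming the wind-speed variable through the turbine's power curve, which is the derivation the abstract refers to.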
Samsudin, Hayati; Auras, Rafael; Burgess, Gary; Dolan, Kirk; Soto-Valdez, Herlinda
2018-03-01
A two-step solution based on the boundary conditions of Crank's equations for mass transfer in a film was developed. Three driving factors, the diffusion (D), partition (Kp,f) and convective mass transfer (h) coefficients, govern the sorption and/or desorption kinetics of migrants from polymer films. These three parameters were simultaneously estimated. They provide in-depth insight into the physics of a migration process. The first step was used to find the combination of D, Kp,f and h that minimized the sum of squared errors (SSE) between the predicted and actual results. In step 2, an ordinary least squares (OLS) estimation was performed by using the proposed analytical solution containing D, Kp,f and h. Three selected migration studies of PLA/antioxidant-based films were used to demonstrate the use of this two-step solution. Additional parameter estimation approaches, such as sequential and bootstrap methods, were also performed to gain better knowledge about the kinetics of migration. The proposed model successfully provided the initial guesses for D, Kp,f and h. The h value was determined without performing a specific experiment for it. By determining h together with D, under- or overestimation issues pertaining to a migration process can be avoided, since these two parameters are correlated. Copyright © 2017 Elsevier Ltd. All rights reserved.
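Step 1 of the scheme, finding the parameter combination that minimizes the SSE between predicted and actual results, can be illustrated with a deliberately simplified kinetic model standing in for Crank's film solution. The first-order model, the parameter grids, and the synthetic data below are all made up for the sketch:

```python
import math

def migration(t, m_inf, k):
    # Simplified first-order migration kinetics (a stand-in for Crank's
    # solution, which the paper actually uses).
    return m_inf * (1.0 - math.exp(-k * t))

def sse(params, data):
    m_inf, k = params
    return sum((migration(t, m_inf, k) - m) ** 2 for t, m in data)

# Noise-free synthetic data generated with m_inf = 10, k = 0.3
data = [(t, migration(t, 10.0, 0.3)) for t in range(0, 20, 2)]

# Step 1: grid-search the (m_inf, k) combination minimizing the SSE
grid = [(8.0 + 0.5 * i, 0.1 * j) for i in range(9) for j in range(1, 10)]
best = min(grid, key=lambda p: sse(p, data))
print(best[0], round(best[1], 2))
```

The step-2 OLS refinement would then start from this coarse minimum, which is the "initial guesses" role the abstract describes.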
International Nuclear Information System (INIS)
Macias, Julio; Vargas, Asdrubal
2017-01-01
The MIM 1D transport model was successfully applied to simulate the asymmetric behavior observed in three breakthrough curves of tracer tests performed under natural gradient conditions in a phreatic fractured volcanic aquifer. The transport parameters obtained after adjustment with a computer program suggest that only 50% of the total porosity effectively contributed to the advective-dispersive transport (mobile fraction) and the other 50% behaved as a temporary reservoir for the tracer (immobile fraction). The estimated values of hydraulic properties and MIM model parameters are within the range of values reported by other researchers. It was possible to establish a conceptual and numerical framework to explain the behavior of the three tracer-test curves, despite the limitations in quality and quantity of available field information. (author) [es]
Reionization history and CMB parameter estimation
Energy Technology Data Exchange (ETDEWEB)
Dizgah, Azadeh Moradinezhad; Gnedin, Nickolay Y.; Kinney, William H.
2013-05-01
We study how uncertainty in the reionization history of the universe affects estimates of other cosmological parameters from the Cosmic Microwave Background. We analyze WMAP7 data and synthetic Planck-quality data generated using a realistic scenario for the reionization history of the universe obtained from high-resolution numerical simulation. We perform parameter estimation using a simple sudden reionization approximation, and using the Principal Component Analysis (PCA) technique proposed by Mortonson and Hu. We reach two main conclusions: (1) Adopting a simple sudden reionization model does not introduce measurable bias into values for other parameters, indicating that detailed modeling of reionization is not necessary for the purpose of parameter estimation from future CMB data sets such as Planck. (2) PCA analysis does not allow accurate reconstruction of the actual reionization history of the universe in a realistic case.
International Nuclear Information System (INIS)
Hertelé, Stijn; De Waele, Wim; Denys, Rudi; Verstraete, Matthias
2012-01-01
Contemporary pipeline steels with a yield-to-tensile ratio above 0.80 often show two stages of strain hardening, which cannot be simultaneously described by the standardized Ramberg–Osgood model. A companion paper (Part I) showed that the recently developed UGent model provides more accurate descriptions than the Ramberg–Osgood model, as it succeeds in describing both strain hardening stages. However, it may be challenging to obtain an optimal model fit in the absence of full stress–strain data. This paper discusses how to find suitable parameter values for the UGent model, given a set of measurable tensile test characteristics. The proposed methodology shows good results for an extensive set of investigated experimental stress–strain curves. Next to some common tensile test characteristics, the 1.0% proof stress is needed. The authors therefore encourage the acquisition of this stress during tensile tests. - Highlights: ► An analytical procedure estimates UGent model parameters. ► The procedure requires a set of tensile test characteristics. ► The UGent model performs better than the Ramberg–Osgood model. ► Apart from common characteristics, the 1.0% proof stress is required. ► The authors encourage the acquisition of this 1.0% proof stress.
Directory of Open Access Journals (Sweden)
Maryam Baharizadeh
2012-10-01
Full Text Available Buffalo milk yield records were obtained from monthly records of the Animal Breeding Organization of Iran from 1992 to 2009 in 33 herds raised in Khuzestan province. Variance components, heritability and repeatability were estimated for milk yield, fat yield, fat percentage, protein yield and protein percentage. These estimates were obtained through a single-trait animal model using the DFREML program. Herd-year-season was considered as a fixed effect in the model. For milk production traits, age at calving was fitted as a covariate. The additive genetic and permanent environmental effects were also included in the model. The mean values (±SD) for milk yield, fat yield, fat percentage, protein yield and protein percentage were 2285.08±762.47 kg, 144.35±54.86 kg, 6.25±0.90%, 97.30±26.73 kg and 4.19±0.27%, respectively. The heritability estimates (±SE) of milk yield, fat yield, fat percentage, protein yield and protein percentage were 0.093±0.08, 0.054±0.06, 0.043±0.05, 0.093±0.16 and zero, respectively. The corresponding estimates of repeatability were 0.272, 0.132, 0.043, 0.674 and 0.0002, respectively. The low genetic parameter estimates indicate that more data and reliable pedigree records are required.
Selection of a model and estimation of principal pharmacokinetic parameters of dioxadet
International Nuclear Information System (INIS)
Korsakov, M.V.; Filov, V.A.; Kraiz, B.O.; Khrapova, T.N.; Ivin, B.A.
1986-01-01
The authors study the pharmacokinetics of dioxadet, taking into account all the necessary parameters. Dioxadet-14C, labeled in the ethyleneimine group, was used; this group retains radioactivity after the compound has lost its alkylating properties. The authors synthesized it from 2,2-dimethyl-5-hydroxymethyl-(2,4-dichloro-1,3,5-triazin-6-yl)amino-1,3-dioxane and ethyleneimine-2,3-14C, obtained from 1,2-dibromoethane-14C. The principal pharmacokinetic parameters of dioxadet for eight rats are shown. The synthesis of dioxadet-14C, the radiometry of blood samples, and the calculation of the kinetic curve are all discussed.
A regional parameter estimation scheme for a pan-European multi-basin model
Directory of Open Access Journals (Sweden)
Yeshewatesfa Hundecha
2016-06-01
New hydrological insights for the region: Parameters could be linked to catchment descriptors with good transferability, with median NSE of 0.54 and 0.53, and median volume error of −1.6% and 1.3% in the calibration and validation stations, respectively. Although regionalizing parameters for different groups of catchments separately yielded a better performance in some groups, the overall gain in performance against regionalization using a single set of regional relationships across the entire domain was marginal. The benefits of separate regionalization were substantial in catchments with a considerable proportion of agricultural land use and a higher mean annual temperature.
A subsystems approach for parameter estimation of ODE models of hybrid systems
Directory of Open Access Journals (Sweden)
Guido Sanguinetti
2012-08-01
We present a new method for parameter identification of ODE system descriptions based on data measurements. Our method works by splitting the system into a number of subsystems and working on each of them separately, thereby being easily parallelisable, and can also deal with noise in the observations.
Directory of Open Access Journals (Sweden)
Dimitrios V Vavoulis
Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm, often in combination with a local search method (such as gradient descent in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied on a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents, measurement and intrinsic noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy, we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function and it was straightforward to apply on compartmental models and multiple data sets. Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a
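Kitagawa's self-organizing state-space idea, folding the unknown parameters into the state vector so a filter estimates both jointly, can be sketched with a minimal bootstrap particle filter on a toy AR(1) model. The tanh parameterization, jitter size, and noise levels below are illustrative assumptions, not the Hodgkin-Huxley setting of the paper.

```python
import numpy as np

def self_organizing_pf(ys, n_part, sig_state, sig_obs, rng):
    """Bootstrap particle filter on an augmented state (x, u): the unknown
    AR(1) coefficient a = tanh(u) travels with the particles."""
    x = rng.normal(0.0, 1.0, n_part)
    u = rng.normal(0.0, 0.5, n_part)                 # parameter particles
    for y in ys:
        x = np.tanh(u) * x + rng.normal(0.0, sig_state, n_part)  # propagate
        u = u + rng.normal(0.0, 0.005, n_part)       # tiny jitter avoids collapse
        w = np.exp(-0.5 * ((y - x) / sig_obs) ** 2)  # observation weights
        w /= w.sum()
        idx = rng.choice(n_part, size=n_part, p=w)   # multinomial resampling
        x, u = x[idx], u[idx]
    return float(np.tanh(u).mean())                  # posterior mean of a

# synthetic AR(1) data, a_true = 0.5, observed with noise
rng = np.random.default_rng(7)
a_true, T = 0.5, 300
xs = np.zeros(T)
for t in range(1, T):
    xs[t] = a_true * xs[t - 1] + rng.normal(0.0, 0.3)
ys = xs + rng.normal(0.0, 0.2, T)

a_hat = self_organizing_pf(ys, n_part=2000, sig_state=0.3, sig_obs=0.2, rng=rng)
```

The jitter on `u` plays the role of the artificial parameter dynamics that make the filter "self-organizing"; without it the parameter particles would collapse onto a few values after repeated resampling.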
Energy Technology Data Exchange (ETDEWEB)
Lee, Jared A.; Hacker, Joshua P.; Delle Monache, Luca; Kosović, Branko; Clifton, Andrew; Vandenberghe, Francois; Rodrigo, Javier Sanz
2016-12-14
A current barrier to greater deployment of offshore wind turbines is the poor quality of numerical weather prediction model wind and turbulence forecasts over open ocean. The bulk of development for atmospheric boundary layer (ABL) parameterization schemes has focused on land, partly due to a scarcity of observations over ocean. The 100-m FINO1 tower in the North Sea is one of the few sources worldwide of atmospheric profile observations from the sea surface to turbine hub height. These observations are crucial to developing a better understanding and modeling of physical processes in the marine ABL. In this study, we use the WRF single column model (SCM), coupled with an ensemble Kalman filter from the Data Assimilation Research Testbed (DART), to create 100-member ensembles at the FINO1 location. The goal of this study is to determine the extent to which model parameter estimation can improve offshore wind forecasts.
Sesana, R C; Bignardi, A B; Borquis, R R A; El Faro, L; Baldi, F; Albuquerque, L G; Tonhati, H
2010-10-01
The objective of this work was to estimate covariance functions for additive genetic and permanent environmental effects and, subsequently, to obtain genetic parameters for buffalo's test-day milk production using random regression models on Legendre polynomials (LPs). A total of 17 935 test-day milk yield (TDMY) records from 1433 first lactations of Murrah buffaloes, calving from 1985 to 2005 and belonging to 12 herds located in São Paulo state, Brazil, were analysed. Contemporary groups (CGs) were defined by herd, year and month of milk test. Residual variances were modelled through variance functions, from second to fourth order, and also by a step function with 1, 4, 6, 22 and 42 classes. The model of analysis included the fixed effect of CGs, number of milkings, age of cow at calving as a covariable (linear and quadratic) and the mean trend of the population. The additive genetic and permanent environmental effects were included as random effects and were modelled by LPs of days in milk, from quadratic to seventh-degree polynomial functions. The model with additive genetic and animal permanent environmental effects adjusted by fifth- and sixth-order LPs, respectively, and residual variance modelled through a step function with six classes was the most adequate model to describe the covariance structure of the data. Heritability estimates decreased from 0.44 (first week) to 0.18 (fourth week). Unexpected negative genetic correlation estimates were obtained between TDMY records from the first weeks and records from the middle to the end of lactation, with values varying from -0.07 (second with eighth week) to -0.34 (first with 42nd week). TDMY heritability estimates were moderate over the course of the lactation, suggesting that this trait could be applied as a selection criterion in milking buffaloes. Copyright 2010 Blackwell Verlag GmbH.
Cosenza, Alida; Mannina, Giorgio; Neumann, Marc B; Viviani, Gaspare; Vanrolleghem, Peter A
2013-04-01
Membrane bioreactors (MBR) are being increasingly used for wastewater treatment. Mathematical modeling of MBR systems plays a key role in order to better explain their characteristics. Several MBR models have been presented in the literature focusing on different aspects: biological models, models which include soluble microbial products (SMP), physical models able to describe the membrane fouling and integrated models which couple the SMP models with the physical models. However, only a few integrated models have been developed which take into account the relationships between membrane fouling and biological processes. With respect to biological phosphorus removal in MBR systems, due to the complexity of the process, practical use of the models is still limited. There is a vast knowledge (and consequently vast amount of data) on nutrient removal for conventional-activated sludge systems but only limited information on phosphorus removal for MBRs. Calibration of these complex integrated models still remains the main bottleneck to their employment. The paper presents an integrated mathematical model able to simultaneously describe biological phosphorus removal, SMP formation/degradation and physical processes which also include the removal of organic matter. The model has been calibrated with data collected in a UCT-MBR pilot plant, located at the Palermo wastewater treatment plant, applying a modified version of a recently developed calibration protocol. The calibrated model provides acceptable correspondence with experimental data and can be considered a useful tool for MBR design and operation.
Sky view factor as a parameter in applied climatology rapid estimation by the SkyHelios model
Directory of Open Access Journals (Sweden)
Andreas Matzarakis
2011-02-01
Graphic processors can be integrated in simulation models computing e.g. three-dimensional flow visualization or radiation estimation. Going a step further it is even possible to use modern graphics hardware as general-purpose array processors. These ideas and approaches use a cheap mass production technology to solve specific problems. This technology can be applied for modelling climate conditions or climate-relevant parameters on the micro-scale or with respect to urban areas. To illustrate this we present the simulation of the continuous sky view factor (SVF), thus the calculation of the SVF for each point of a complex area. Digital elevation models (DEM), data concerning urban obstacles (OBS) or other digital files can serve as a data base in order to quantify relevant climatic conditions in urban and complex areas. The following benefits are provided by the new model: (a) short computing time, (b) short development time and (c) low costs due to the use of open source frameworks. The application of the developed model will be helpful to estimate radiation fluxes and the mean radiant temperature in urban and complex situations accurately, especially in combination with an urban microclimate model, e.g. the RayMan model.
Sky view factor as a parameter in applied climatology. Rapid estimation by the SkyHelios model
Energy Technology Data Exchange (ETDEWEB)
Matzarakis, Andreas; Matuschek, Olaf [Freiburg Univ. (Germany). Meteorological Inst.
2011-02-15
Graphic processors can be integrated in simulation models computing e.g. three-dimensional flow visualization or radiation estimation. Going a step further it is even possible to use modern graphics hardware as general-purpose array processors. These ideas and approaches use a cheap mass production technology to solve specific problems. This technology can be applied for modelling climate conditions or climate-relevant parameters on the micro-scale or with respect to urban areas. To illustrate this we present the simulation of the continuous sky view factor (SVF), thus the calculation of the SVF for each point of a complex area. Digital elevation models (DEM), data concerning urban obstacles (OBS) or other digital files can serve as a data base in order to quantify relevant climatic conditions in urban and complex areas. The following benefits are provided by the new model: (a) short computing time (b) short development time and (c) low costs due to the use of open source frameworks. The application of the developed model will be helpful to estimate radiation fluxes and the mean radiant temperature in urban and complex situations accurately, especially in combination with an urban microclimate model, e.g. the RayMan model. (orig.)
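The continuous SVF that SkyHelios evaluates on graphics hardware can be illustrated, for a single DEM cell, by a plain CPU sketch of the common isotropic-sky approximation (SVF as the mean of cos²β over azimuthal horizon elevation angles β). The function below is a hypothetical helper for illustration, not SkyHelios code.

```python
import numpy as np

def sky_view_factor(dem, cell, i, j, n_dir=36, max_dist=50):
    """Isotropic-sky SVF at DEM cell (i, j): for each azimuth, find the
    maximum horizon elevation angle beta; SVF is the mean of cos(beta)**2."""
    z0 = dem[i, j]
    terms = []
    for az in np.linspace(0.0, 2.0 * np.pi, n_dir, endpoint=False):
        beta = 0.0
        for d in range(1, max_dist):
            r = i + int(round(d * np.sin(az)))       # step outward along azimuth
            c = j + int(round(d * np.cos(az)))
            if not (0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]):
                break
            beta = max(beta, np.arctan2(dem[r, c] - z0, d * cell))
        terms.append(np.cos(beta) ** 2)
    return float(np.mean(terms))

svf_flat = sky_view_factor(np.zeros((21, 21)), cell=1.0, i=10, j=10)
canyon = np.zeros((21, 21))
canyon[:, 8] = canyon[:, 12] = 10.0    # two walls forming a street canyon
svf_canyon = sky_view_factor(canyon, cell=1.0, i=10, j=10)
```

An unobstructed cell yields SVF = 1, while the canyon floor sees only a strip of sky; SkyHelios obtains the same quantity for every point of the domain at once by rendering the obstacle geometry on the GPU.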
Fitting the Generic Multi-Parameter Crossover Model: Towards Realistic Scaling Estimates
Z.R. Struzik; E.H. Dooijes; F.C.A. Groen
1997-01-01
The primary concern of fractal metrology is providing a means of reliable estimation of scaling exponents such as fractal dimension, in order to prove the null hypothesis that a particular object can be regarded as fractal. In the particular context to be discussed in this contribution,
Estimating FttH and FttCurb Deployment Costs Using Geometric Models with Enhanced Parameters
Phillipson, F.
2015-01-01
The need for higher bandwidth by customers urges the network providers to upgrade their networks. Fibre to the home or Fibre to the curb are two of the scenarios that are considered. To make a proper assessment on the economic viability, a good estimation of the roll-out costs of the networks are
DEFF Research Database (Denmark)
Graff, J; Fugleberg, S; Joffe, P
1995-01-01
The mechanisms of transperitoneal potassium transport during peritoneal dialysis were evaluated by validation of different mathematical models. The models were designed to elucidate the presence or absence of diffusive, non-lymphatic convective and lymphatic convective solute transport....... Experimental results were obtained from 26 non-diabetic patients undergoing peritoneal dialysis. The validation procedure demonstrated that models including both diffusive and non-lymphatic convective solute transport were superior to the other models. Lymphatic convective solute transport was not identifiable...
Stochastic EM for estimating the parameters of a multilevel IRT model
Fox, Gerardus J.A.
2003-01-01
An item response theory (IRT) model is used as a measurement error model for the dependent variable of a multilevel model. The dependent variable is latent but can be measured indirectly by using tests or questionnaires. The advantage of using latent scores as dependent variables of a multilevel
Stochastic EM for estimating the parameters of a multilevel IRT model
Fox, Gerardus J.A.
2000-01-01
An item response theory (IRT) model is used as a measurement error model for the dependent variable of a multilevel model where tests or questionnaires consisting of separate items are used to perform a measurement error analysis. The advantage of using latent scores as dependent variables of a
Biondi, Daniela; De Luca, Davide Luciano
2015-04-01
The use of rainfall-runoff models represents an alternative to statistical approaches (such as at-site or regional flood frequency analysis) for design flood estimation, and constitutes an answer to the increasing need for synthetic design hydrographs (SDHs) associated to a specific return period. However, the lack of streamflow observations and the consequent high uncertainty associated with parameter estimation, usually pose serious limitations to the use of process-based approaches in ungauged catchments, which in contrast represent the majority in practical applications. This work presents the application of a Bayesian procedure that, for a predefined rainfall-runoff model, allows for the assessment of posterior parameters distribution, using the limited and uncertain information available for the response of an ungauged catchment (Bulygina et al. 2009; 2011). The use of regional estimates of river flow statistics, interpreted as hydrological signatures that measure theoretically relevant system process behaviours (Gupta et al. 2008), within this framework represents a valuable option and has shown significant developments in recent literature to constrain the plausible model response and to reduce the uncertainty in ungauged basins. In this study we rely on the first three L-moments of annual streamflow maxima, for which regressions are available from previous studies (Biondi et al. 2012; Laio et al. 2011). The methodology was carried out for a catchment located in southern Italy, and used within a Monte Carlo scheme (MCs) considering both event-based and continuous simulation approaches for design flood estimation. The applied procedure offers promising perspectives to perform model calibration and uncertainty analysis in ungauged basins; moreover, in the context of design flood estimation, process-based methods coupled with MCs approach have the advantage of providing simulated floods uncertainty analysis that represents an asset in risk-based decision
Hadwin, Paul J; Peterson, Sean D
2017-04-01
The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique at dramatically lower computational cost. Based upon the test cases explored, the EKF is comparable in terms of accuracy to the particle filter technique when greater than 6000 particles are employed; if fewer particles are employed, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by 2 orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.
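The EKF strategy summarized above can be sketched on a toy problem: a static parameter modelled as a random walk, observed through a nonlinear measurement h. The model, names, and noise levels are illustrative assumptions, not the authors' vocal fold model.

```python
import numpy as np

def ekf_param_update(theta, P, y, h, H_jac, Q, R):
    """One EKF cycle for a scalar parameter modelled as a random walk,
    observed through a nonlinear measurement y = h(theta) + noise."""
    P = P + Q                           # predict: random-walk process noise
    H = H_jac(theta)                    # linearize the measurement model
    S = H * P * H + R                   # innovation variance
    K = P * H / S                       # Kalman gain
    theta = theta + K * (y - h(theta))  # measurement update
    P = (1.0 - K * H) * P
    return theta, P

# toy problem: infer k from noisy observations of h(k) = k**2
rng = np.random.default_rng(0)
k_true, R = 2.0, 0.05 ** 2
h = lambda k: k ** 2
H_jac = lambda k: 2.0 * k

theta, P = 1.0, 1.0                     # prior guess and variance
for _ in range(200):
    y = h(k_true) + rng.normal(0.0, 0.05)
    theta, P = ekf_param_update(theta, P, y, h, H_jac, Q=1e-4, R=R)
```

A particle filter would represent the same posterior with thousands of weighted samples; the EKF carries only a mean and a (linearized) variance, which is the source of both its speed and the slightly underpredicted credibility intervals noted in the abstract.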
Directory of Open Access Journals (Sweden)
Yuan-Chieh Lo
2018-02-01
Thermal characteristic analysis is essential for machine tool spindles because sudden failures may occur due to unexpected thermal issues. This article presents a lumped-parameter Thermal Network Model (TNM) and its parameter estimation scheme, including hardware and software, in order to characterize both the steady-state and transient thermal behavior of machine tool spindles. For the hardware, the authors develop a Bluetooth Temperature Sensor Module (BTSM) that works with three types of temperature-sensing probes (magnetic, screw, and probe). Its specification, verified by experimental test, achieves a precision of ±(0.1 + 0.0029|t|) °C, resolution of 0.00489 °C, power consumption of 7 mW, and size of Ø40 mm × 27 mm. For the software, the heat transfer characteristics of the machine tool spindle as a function of rotating speed are derived from heat transfer theory and empirical formulae. The predictive TNM of spindles was developed by grey-box estimation and experimental results. Even under such complicated operating conditions as various speeds and different initial conditions, the experiments validate that the present modeling methodology provides a robust and reliable tool for temperature prediction, with a normalized mean square error of 99.5% agreement, and the present approach is transferable to other spindles with a similar structure. For realizing edge computing in smart manufacturing, a reduced-order TNM is constructed by the Model Order Reduction (MOR) technique and implemented into the real-time embedded system.
Lo, Yuan-Chieh; Hu, Yuh-Chung; Chang, Pei-Zen
2018-02-23
Thermal characteristic analysis is essential for machine tool spindles because sudden failures may occur due to unexpected thermal issues. This article presents a lumped-parameter Thermal Network Model (TNM) and its parameter estimation scheme, including hardware and software, in order to characterize both the steady-state and transient thermal behavior of machine tool spindles. For the hardware, the authors develop a Bluetooth Temperature Sensor Module (BTSM) that works with three types of temperature-sensing probes (magnetic, screw, and probe). Its specification, verified by experimental test, achieves a precision of ±(0.1 + 0.0029|t|) °C, resolution of 0.00489 °C, power consumption of 7 mW, and size of Ø40 mm × 27 mm. For the software, the heat transfer characteristics of the machine tool spindle as a function of rotating speed are derived from heat transfer theory and empirical formulae. The predictive TNM of spindles was developed by grey-box estimation and experimental results. Even under such complicated operating conditions as various speeds and different initial conditions, the experiments validate that the present modeling methodology provides a robust and reliable tool for temperature prediction, with a normalized mean square error of 99.5% agreement, and the present approach is transferable to other spindles with a similar structure. For realizing edge computing in smart manufacturing, a reduced-order TNM is constructed by the Model Order Reduction (MOR) technique and implemented into the real-time embedded system.
Parameter estimation in stochastic differential equations
Bishwal, Jaya P N
2008-01-01
Parameter estimation in stochastic differential equations and stochastic partial differential equations is the science, art and technology of modelling complex phenomena and making beautiful decisions. The subject has attracted researchers from several areas of mathematics and other related fields like economics and finance. This volume presents the estimation of the unknown parameters in the corresponding continuous models based on continuous and discrete observations and examines extensively maximum likelihood, minimum contrast and Bayesian methods. The study of refined asymptotic properties of several estimators, when the observation time length is large and the observation time interval is small, is particularly useful given the current availability of high-frequency data. Also examined in this volume are space-time white-noise-driven models, useful for spatial data, and more sophisticated non-Markovian and non-semimartingale models, such as fractional diffusions, that model long-memory phenomena.
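One workhorse idea covered by such treatments, maximum-likelihood-style estimation from discretely observed diffusions, can be sketched with an Euler-discretized pseudo-likelihood for an Ornstein-Uhlenbeck process. This is a minimal illustration under assumed parameter values, with a crude grid search standing in for a proper optimizer.

```python
import numpy as np

def simulate_ou(theta, mu, sigma, x0, dt, n, rng):
    """Euler-Maruyama path of dX = theta*(mu - X) dt + sigma dW."""
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] + theta * (mu - x[i]) * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

def neg_pseudo_log_lik(theta, x, mu, sigma, dt):
    """Gaussian pseudo-likelihood from the Euler scheme; terms constant in
    theta (the log-determinant) are dropped, which is valid for optimizing theta."""
    mean = x[:-1] + theta * (mu - x[:-1]) * dt
    return 0.5 * np.sum((x[1:] - mean) ** 2) / (sigma ** 2 * dt)

rng = np.random.default_rng(3)
x = simulate_ou(theta=2.0, mu=0.0, sigma=0.5, x0=1.0, dt=0.01, n=20000, rng=rng)

grid = np.linspace(0.1, 5.0, 200)      # crude grid search over theta
theta_hat = grid[np.argmin([neg_pseudo_log_lik(t, x, 0.0, 0.5, 0.01) for t in grid])]
```

The quality of this estimator depends on exactly the asymptotic regime the volume emphasizes: a long observation window (large n·dt) with a small sampling interval dt.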
Botto, Anna; Camporese, Matteo
2017-04-01
Hydrological models allow scientists to predict the response of water systems under varying forcing conditions. In particular, many physically-based integrated models were recently developed in order to understand the fundamental hydrological processes occurring at the catchment scale. However, the use of this class of hydrological models is still relatively limited, as their prediction skills heavily depend on reliable parameter estimation, an operation that is never trivial, being normally affected by large uncertainty and requiring huge computational effort. The objective of this work is to test the potential of data assimilation to be used as an inverse modeling procedure for the broad class of integrated hydrological models. To pursue this goal, a Bayesian data assimilation (DA) algorithm based on a Monte Carlo approach, namely the ensemble Kalman filter (EnKF), is combined with the CATchment HYdrology (CATHY) model. In this approach, input variables (atmospheric forcing, soil parameters, initial conditions) are statistically perturbed providing an ensemble of realizations aimed at taking into account the uncertainty involved in the process. Each realization is propagated forward by the CATHY hydrological model within a parallel R framework, developed to reduce the computational effort. When measurements are available, the EnKF is used to update both the system state and soil parameters. In particular, four different assimilation scenarios are applied to test the capability of the modeling framework: first only pressure head or water content are assimilated, then, the combination of both, and finally both pressure head and water content together with the subsurface outflow. To demonstrate the effectiveness of the approach in a real-world scenario, an artificial hillslope was designed and built to provide real measurements for the DA analyses. The experimental facility, located in the Department of Civil, Environmental and Architectural Engineering of the
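The EnKF update with joint state-parameter estimation described above can be sketched on a toy model, with a scalar recursion and one unknown coefficient standing in for CATHY and its soil parameters; all names and settings below are illustrative assumptions.

```python
import numpy as np

def enkf_update(E, y_obs, H, R_std, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    E is (n_vars, n_ens); H selects the observed rows."""
    n_ens = E.shape[1]
    A = E - E.mean(axis=1, keepdims=True)            # ensemble anomalies
    HE = H @ E
    HA = HE - HE.mean(axis=1, keepdims=True)
    Pyy = HA @ HA.T / (n_ens - 1) + np.diag([R_std ** 2])
    Pxy = A @ HA.T / (n_ens - 1)
    K = Pxy @ np.linalg.inv(Pyy)                     # Kalman gain
    Y = y_obs + rng.normal(0.0, R_std, size=(1, n_ens))
    return E + K @ (Y - HE)

# toy joint state-parameter problem: x_{t+1} = a*x_t + 0.5, observe x only
rng = np.random.default_rng(1)
n_ens, a_true, R_std = 100, 0.8, 0.05
x_true = 1.0
E = np.vstack([np.full(n_ens, 1.0),                  # state members
               rng.uniform(0.3, 1.0, n_ens)])        # parameter members
H = np.array([[1.0, 0.0]])
for _ in range(30):
    x_true = a_true * x_true + 0.5                   # truth
    E[0] = E[1] * E[0] + 0.5                         # member forecasts
    y = x_true + rng.normal(0.0, R_std)
    E = enkf_update(E, y, H, R_std, rng)
a_hat = E[1].mean()
```

Because the unobserved parameter row is corrected only through its ensemble correlation with the observed state, this augmented-state update is exactly the mechanism that lets the EnKF act as an inverse-modeling procedure for the hydrological model.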
Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...
Multi-Parameter Estimation for Orthorhombic Media
Masmoudi, Nabil
2015-08-19
Building reliable anisotropy models is crucial in seismic modeling, imaging and full waveform inversion. However, estimating anisotropy parameters is often hampered by the trade-off between inhomogeneity and anisotropy. For instance, one way to estimate the anisotropy parameters is to relate them analytically to traveltimes, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2 and a parameter Δγ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. This approach has two main advantages: on the one hand, it provides a computationally efficient tool to solve the orthorhombic eikonal equation; on the other hand, it provides a mechanism to scan for the best-fitting anisotropy parameters without the need for repetitive modeling of traveltimes, because the coefficients of the traveltime expansion are independent of the perturbed parameters. Furthermore, the coefficients of the traveltime expansion provide insights on the sensitivity of the traveltime with respect to the perturbed parameters. We show the accuracy of the traveltime approximations as well as an approach for multi-parameter scanning in orthorhombic media.
Advances in Estimation of Parameters for Surface Irrigation Modeling and Management
Mathematical models of the surface irrigation process are becoming standard tools for analyzing the performance of irrigation systems and developing design and operational recommendations. A continuing challenge to the practical use of these tools is the difficulty in characterizing required model ...
Item Parameter Estimation for the Continuous Response Model via an EM Algorithm.
Wang, Tianyou; Zeng, Lingjia
F. Samejima (1973) proposed a continuous response model in which item response is on a continuous scale rather than some discrete levels. This model has potential because in many psychological and educational assessments, the responses are on a conceptual continuum rather than on some fixed levels. As a first step toward studying the applicability…
Estimating long-term volatility parameters for market-consistent models
African Journals Online (AJOL)
Contemporary actuarial and accounting practices (APN 110 in the South African context) require the use of market-consistent models for the valuation of embedded investment derivatives. These models have to be calibrated with accurate and up-to-date market data. Arguably, the most important variable in the valuation of ...
Parameter estimation for mathematical models of a nongastric H(+)(Na(+))-K(+)(NH4(+))-ATPase.
Nadal-Quirós, Mónica; Moore, Leon C; Marcano, Mariano
2015-09-01
The role of the nongastric H(+)-K(+)-ATPase (HKA) in ion homeostasis of macula densa (MD) cells is an open question. To begin to explore this issue, we developed two mathematical models that describe ion fluxes through a nongastric HKA. One model assumes a 1H(+):1K(+)-per-ATP stoichiometry; the other assumes a 2H(+):2K(+)-per-ATP stoichiometry. Both models include Na(+) and NH4(+) competitive binding with H(+) and K(+), respectively, a characteristic observed in vitro and in situ. Model rate constants were obtained by minimizing the distance between model and experimental outcomes. Both the 1H(+)(1Na(+)):1K(+)(1NH4(+))-per-ATP and 2H(+)(2Na(+)):2K(+)(2NH4(+))-per-ATP models fit the experimental data well. Using both models, we simulated ion net fluxes as a function of cytosolic or luminal ion concentrations typical for the cortical thick ascending limb and MD region. We observed that (1) K(+) and NH4(+) flowed in the lumen-to-cytosol direction, (2) there was competitive behavior between luminal K(+) and NH4(+) and between cytosolic Na(+) and H(+), (3) ion fluxes were highly sensitive to changes in cytosolic Na(+) or H(+) concentrations, and (4) the transporter performs mostly Na(+)/K(+) exchange under physiological conditions. These results support the concept that nongastric HKA may contribute to Na(+) and pH homeostasis in MD cells. Furthermore, in both models, H(+) flux reversed at a luminal pH below 5.6. Such reversal led to Na(+)/H(+) exchange at a luminal pH below 2 and below 4 in the 1:1-per-ATP and 2:2-per-ATP models, respectively. This suggests a novel role of nongastric HKA in cell Na(+) homeostasis in the more acidic regions of the renal tubules. Copyright © 2015 the American Physiological Society.
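The fitting step, minimizing the distance between model and experimental outcomes over the rate constants, can be sketched generically. The competitive-inhibition flux law and all parameter values below are assumed stand-ins for the full HKA cycle model, chosen only to mirror the NH4(+)-vs-K(+) competition described above.

```python
import numpy as np
from scipy.optimize import least_squares

def flux(p, K, NH4):
    """Hypothetical competitive-inhibition flux: NH4+ competes with K+ binding."""
    Vmax, Km, Ki = p
    return Vmax * K / (Km * (1.0 + NH4 / Ki) + K)

# synthetic "experimental outcomes" at three NH4+ levels
rng = np.random.default_rng(2)
K = np.tile(np.linspace(0.5, 10.0, 12), 3)
NH4 = np.repeat([0.0, 2.0, 5.0], 12)
p_true = np.array([5.0, 1.5, 2.0])          # Vmax, Km, Ki
data = flux(p_true, K, NH4) + rng.normal(0.0, 0.02, K.size)

# minimize the model-data distance over the rate constants
res = least_squares(lambda p: flux(p, K, NH4) - data,
                    x0=[1.0, 1.0, 1.0], bounds=(1e-6, np.inf))
```

Varying the inhibitor level across datasets is what makes the competition constant Ki identifiable, which is why measurements at several NH4(+) concentrations are simulated here.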
Directory of Open Access Journals (Sweden)
Douglas A. Fynan
2016-06-01
The Gaussian process model (GPM) is a flexible surrogate model that can be used for nonparametric regression on multivariate problems. A unique feature of the GPM is that a prediction variance is automatically provided with the regression function. In this paper, we estimate the safety margin of a nuclear power plant by performing regression on the output of best-estimate simulations of a large-break loss-of-coolant accident with sampling of safety system configuration, sequence timing, technical specifications, and thermal-hydraulic parameter uncertainties. The key aspect of our approach is that the GPM regression is only performed on the dominant input variables, the safety injection flow rate and the delay time for AC-powered pumps to start (representing sequence timing uncertainty), providing a predictive model for the peak clad temperature during the reflood phase. Other uncertainties are interpreted as contributors to the measurement noise of the code output and are implicitly treated in the GPM in the noise variance term, providing local uncertainty bounds for the peak clad temperature. We discuss the applicability of the foregoing method to reduce the use of conservative assumptions in best estimate plus uncertainty (BEPU) and Level 1 probabilistic safety assessment (PSA) success criteria definitions while dealing with a large number of uncertainties.
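The core mechanics, a GPM whose noise variance absorbs the non-dominant uncertainties and whose predictive variance gives local uncertainty bounds, can be sketched in a few lines with an RBF kernel on synthetic 1-D data (an illustration only, not the plant simulations used in the paper).

```python
import numpy as np

def gp_predict(X, y, Xs, length, sig_f, sig_n):
    """GP regression with an RBF kernel. sig_n**2 is the noise variance that
    absorbs uncertainties not captured by the regression inputs."""
    def k(A, B):
        return sig_f ** 2 * np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / length ** 2)
    Kn = k(X, X) + sig_n ** 2 * np.eye(X.size)
    Ks = k(Xs, X)
    alpha = np.linalg.solve(Kn, y)                   # weights for the mean
    V = np.linalg.solve(Kn, Ks.T)                    # (n_train, n_test)
    mean = Ks @ alpha
    var = sig_f ** 2 + sig_n ** 2 - np.sum(Ks * V.T, axis=1)  # predictive variance
    return mean, var

rng = np.random.default_rng(4)
X = np.linspace(0.0, 5.0, 25)
y = np.sin(X) + rng.normal(0.0, 0.1, X.size)
Xs = np.array([2.5, 9.0])                            # interpolation vs extrapolation
mean, var = gp_predict(X, y, Xs, length=1.0, sig_f=1.0, sig_n=0.1)
```

Note how the predictive variance grows automatically away from the training data; in the paper's setting this is what turns a handful of code runs over the dominant inputs into locally bounded peak-clad-temperature predictions.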
Directory of Open Access Journals (Sweden)
Maria Gabriela Campolina Diniz Peixoto
2014-05-01
The objective of this work was to compare random regression models for the estimation of genetic parameters for Guzerat milk production, using orthogonal Legendre polynomials. Records (20,524) of test-day milk yield (TDMY) from 2,816 first-lactation Guzerat cows were used. TDMY grouped into 10 monthly classes were analyzed for the additive genetic effect and the permanent environmental and residual effects (random effects), whereas the contemporary group, calving age (linear and quadratic effects) and the mean lactation curve were analyzed as fixed effects. Trajectories for the additive genetic and permanent environmental effects were modeled by means of a covariance function employing orthogonal Legendre polynomials ranging from the second to the fifth order. Residual variances were considered in one, four, six, or ten variance classes. The best model had six residual variance classes. The heritability estimates for the TDMY records varied from 0.19 to 0.32. The random regression model that used a second-order Legendre polynomial for the additive genetic effect and a fifth-order polynomial for the permanent environmental effect was the most adequate according to the main criteria employed. The model with a second-order Legendre polynomial for the additive genetic effect and a fourth-order polynomial for the permanent environmental effect could also be employed in these analyses.
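The Legendre-polynomial machinery behind such random regression models can be sketched by building the normalized basis over days in milk; the genetic covariance between any two test days is then Phi(t1) K Phi(t2)' for a coefficient covariance matrix K. The helper and the coefficient matrix below are illustrative assumptions, not estimates from the paper.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(dim, t_min, t_max, order):
    """Normalized Legendre basis evaluated at days in milk, as used in
    random regression test-day models."""
    x = 2.0 * (dim - t_min) / (t_max - t_min) - 1.0        # map to [-1, 1]
    Phi = np.column_stack([legendre.legval(x, np.eye(order + 1)[j])
                           for j in range(order + 1)])
    norms = np.sqrt((2.0 * np.arange(order + 1) + 1.0) / 2.0)  # orthonormal scaling
    return Phi * norms

dim = np.arange(5, 306, 30)                 # one test day per month of lactation
Phi = legendre_basis(dim, t_min=5, t_max=305, order=2)  # second-order, as selected

# covariance between test days implied by an assumed coefficient covariance K_a
K_a = np.diag([1.0, 0.5, 0.2])
G = Phi @ K_a @ Phi.T
```

With a second-order basis, only a 3x3 coefficient covariance matrix has to be estimated, yet it induces a full covariance surface over all days in milk, which is the appeal of the random regression approach.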
Musa, Z. N.; Popescu, I.; Mynett, A.
2015-09-01
Hydrological data collection requires deployment of physical infrastructure like rain gauges, water level gauges, as well as use of expensive equipment like echo sounders. Many countries around the world have recorded a decrease in deployment of physical infrastructure for hydrological measurements; developing countries especially have less of this infrastructure and, where it exists, it is poorly maintained. Satellite remote sensing can bridge this gap, and has been applied by hydrologists over the years, with the earliest applications in water body and flood mapping. With the availability of more optical satellites with relatively low temporal resolutions globally, satellite data are commonly used for mapping of water bodies, testing of inundation models, precipitation monitoring, and mapping of flood extent. Use of satellite data to estimate hydrological parameters continues to increase due to use of better sensors, improvement in knowledge of and utilization of satellite data, and expansion of research topics. A review of applications of satellite remote sensing in surface water modelling, mapping and parameter estimation is presented, and its limitations for surface water applications are also discussed.
Pyrolysis of waste tires: A modeling and parameter estimation study using Aspen Plus®.
Ismail, Hamza Y; Abbas, Ali; Azizi, Fouad; Zeaiter, Joseph
2017-02-01
This paper presents a simulation flowsheet model of a waste tire pyrolysis process with a feed capacity of 150 kg/h. A kinetic rate-based reaction model is formulated in a form implementable in the simulation package Aspen Plus, giving the flowsheet model the capability to predict more than 110 tire pyrolysis products as reported in experiments by Laresgoiti et al. (2004) and Williams (2013) for the oil and gas products, respectively. The simulation model is successfully validated in two stages: firstly against experimental data from Olazar et al. (2008) by comparing the mass fractions of the oil products (gas, liquids (non-aromatics), aromatics, and tar) at temperatures of 425, 500, 550 and 610°C, and secondly against experimental results for the main hydrocarbon products (C7 to C15) obtained by Laresgoiti et al. (2004) at temperatures of 400, 500, 600, and 700°C. The model was then used to analyze the effect of pyrolysis process temperature, and showed that increased temperatures led the chain fractions from C10 and higher to decrease while smaller chains increased; this is attributed to the extensive cracking of the larger hydrocarbon chains at higher temperatures. The utility of the flowsheet model was highlighted through an energy analysis that targeted the power efficiency of the process, determined through production profiles of gasoline and diesel at various temperatures. This shows, through the summation of the net power gain from the plant for gasoline plus diesel, that the maximum net power lies at the lower temperatures corresponding to minimum production of gasoline and maximum production of diesel. This simulation model can thus serve as a robust tool to respond to market conditions that dictate fuel demand and prices while at the same time identifying optimum process conditions (e.g. temperature) driven by process economics. Copyright © 2016 Elsevier Ltd. All rights reserved.
Statistics of Parameter Estimates: A Concrete Example
Aguilar, Oscar
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise levels, models, or prior knowledge. But what can we say about the validity of such estimates, and the influence of these assumptions? This paper is concerned with methods to address these questions, and for didactic purposes it is written in the context of a concrete nonlinear parameter estimation problem. We will use the results of a physical experiment conducted by Allmaras et al. at Texas A&M University [M. Allmaras et al., SIAM Rev., 55 (2013), pp. 149-167] to illustrate the importance of validation procedures for statistical parameter estimation. We describe statistical methods and data analysis tools to check the choices of likelihood and prior distributions, and provide examples of how to compare Bayesian results with those obtained by non-Bayesian methods based on different types of assumptions. We explain how different statistical methods can be used in complementary ways to improve the understanding of parameter estimates and their uncertainties.
Ames, D. P.; Osorio-Murillo, C.; Over, M. W.; Rubin, Y.
2012-12-01
The Method of Anchored Distributions (MAD) is an inverse modeling technique that is well-suited for estimation of spatially varying parameter fields using limited observations and Bayesian methods. This presentation will discuss the design, development, and testing of a free software implementation of the MAD technique using the open source DotSpatial geographic information system (GIS) framework, R statistical software, and the MODFLOW groundwater model. This new tool, dubbed MAD-GIS, is built using a modular architecture that supports the integration of external analytical tools and models for key computational processes including a forward model (e.g. MODFLOW, HYDRUS) and geostatistical analysis (e.g. R, GSLIB). The GIS-based graphical user interface provides a relatively simple way for new users of the technique to prepare the spatial domain, to identify observation and anchor points, to perform the MAD analysis using a selected forward model, and to view results. MAD-GIS uses the Managed Extensibility Framework (MEF) provided by the Microsoft .NET programming platform to support integration of different modeling and analytical tools at run-time through a custom "driver." Each driver establishes a connection with external programs through a programming interface, which provides the elements for communicating with core MAD software. This presentation gives an example of adapting the MODFLOW to serve as the external forward model in MAD-GIS for inferring the distribution functions of key MODFLOW parameters. Additional drivers for other models are being developed and it is expected that the open source nature of the project will engender the development of additional model drivers by 3rd party scientists.
El Gharamti, Mohamad
2014-09-01
Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires a clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.
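A minimal stochastic EnKF analysis step, the building block that the hybrid EnKF-OI scheme modifies, can be sketched as follows. The state dimensions, observation operator, and error levels are illustrative assumptions, not the settings of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_state, n_obs, n_ens = 50, 5, 20

# Forecast ensemble of the joint state (e.g., concentrations + sorption coefficients)
Xf = rng.normal(1.0, 0.2, (n_state, n_ens))

# Observe every 10th state cell
obs_idx = np.arange(0, n_state, n_state // n_obs)
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), obs_idx] = 1.0
R = 0.01 * np.eye(n_obs)                        # observation-error covariance
d = H @ Xf.mean(axis=1) + rng.normal(0, 0.1, n_obs)  # synthetic observations

A = Xf - Xf.mean(axis=1, keepdims=True)         # ensemble anomalies
Pf = A @ A.T / (n_ens - 1)                      # sample forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain

# Perturb the observations per member, then update each member
D = d[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
Xa = Xf + K @ (D - H @ Xf)                      # analysis ensemble
```

With small ensembles, the sample covariance `Pf` is rank-deficient and noisy; blending it with a static OI background covariance, as in the hybrid scheme above, is one way to mitigate this.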
Directory of Open Access Journals (Sweden)
Liran Carmel
2010-01-01
Evolutionary binary characters are features of species or genes, indicating the absence (value zero) or presence (value one) of some property. Examples include eukaryotic gene architecture (the presence or absence of an intron in a particular locus), gene content, and morphological characters. In many studies, the acquisition of such binary characters is assumed to represent a rare evolutionary event, and consequently, their evolution is analyzed using various flavors of parsimony. However, when gain and loss of the character are not rare enough, a probabilistic analysis becomes essential. Here, we present a comprehensive probabilistic model to describe the evolution of binary characters on a bifurcating phylogenetic tree. A fast software tool, EREM, is provided, using maximum likelihood to estimate the parameters of the model and to reconstruct ancestral states (presence and absence in internal nodes) and events (gain and loss events along branches).
Carmel, Liran; Wolf, Yuri I; Rogozin, Igor B; Koonin, Eugene V
2010-01-01
Evolutionary binary characters are features of species or genes, indicating the absence (value zero) or presence (value one) of some property. Examples include eukaryotic gene architecture (the presence or absence of an intron in a particular locus), gene content, and morphological characters. In many studies, the acquisition of such binary characters is assumed to represent a rare evolutionary event, and consequently, their evolution is analyzed using various flavors of parsimony. However, when gain and loss of the character are not rare enough, a probabilistic analysis becomes essential. Here, we present a comprehensive probabilistic model to describe the evolution of binary characters on a bifurcating phylogenetic tree. A fast software tool, EREM, is provided, using maximum likelihood to estimate the parameters of the model and to reconstruct ancestral states (presence and absence in internal nodes) and events (gain and loss events along branches).
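The generic likelihood computation behind such maximum-likelihood tools can be sketched with Felsenstein's pruning algorithm for a two-state gain/loss Markov model. This is a textbook illustration, not EREM's actual implementation; the tree, branch lengths, and rates below are invented.

```python
import numpy as np

def p_matrix(t, gain, loss):
    """2x2 transition probabilities for a 0/1 character over branch length t."""
    s = gain + loss
    pi0, pi1 = loss / s, gain / s
    e = np.exp(-s * t)
    return np.array([[pi0 + pi1 * e, pi1 * (1 - e)],
                     [pi0 * (1 - e), pi1 + pi0 * e]])

def prune(node, gain, loss):
    """Return [L(absent), L(present)] partial likelihoods at a node.
    A node is ('leaf', state) or ('internal', (child, t), (child, t))."""
    if node[0] == 'leaf':
        L = np.zeros(2)
        L[node[1]] = 1.0
        return L
    L = np.ones(2)
    for child, t in node[1:]:
        L *= p_matrix(t, gain, loss) @ prune(child, gain, loss)
    return L

# ((A:0.1, B:0.1):0.2, C:0.3) with the character present in A and B, absent in C
tree = ('internal',
        (('internal', (('leaf', 1), 0.1), (('leaf', 1), 0.1)), 0.2),
        (('leaf', 0), 0.3))
gain, loss = 0.5, 1.0
root = prune(tree, gain, loss)
pi = np.array([loss, gain]) / (gain + loss)   # stationary root distribution
likelihood = pi @ root
```

Maximizing this likelihood over `gain` and `loss` across many characters is the core of the parameter-estimation step; ancestral states follow from the same partial likelihoods.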
Uncertainty in Model Parameter Estimates and Impacts on Risk and Decision Making in the Subsurface
Enzenhöfer, R.; Helmig, R.; Nowak, W.; Binning, P. J.
2010-12-01
Advection-based well-head protection zones are commonly used to manage the risk of contamination to drinking water wells. Current Water Safety Plans recommend that catchment managers and stakeholders control and monitor all possible hazards within catchments. In order to do this it is important to not only map the protection zones, but also to characterize their uncertainty. Here the four intrinsic well vulnerability criteria of Frind et al. (2006) are cast in a probabilistic framework. The criteria employ advective-dispersive transport models to determine the: (1) Peak arrival time at the well, (2) peak concentration level, (3) arrival time of threshold concentrations and (4) time of exposure. Our probabilistic framework yields catchment-wide maps of the probability of exceeding each of these criteria. We separate the uncertainty of plume location and actual dilution by resolving heterogeneity with high-resolution Monte-Carlo simulations. To keep computational costs low, we adopt a reverse transport formulation, and combine it with the temporal moment approach for model reduction. We recover the time-dependent breakthrough curves and well vulnerability criteria from the temporal moments by Maximum Entropy reconstruction in log-time. Our method is independent of dimensionality, boundary conditions and other sources of uncertainty. It can be coupled with any method for conditioning on available data. For simplicity, we demonstrate the concept on a 2D example that involves synthetic data. Our approach delivers indispensable information on exposure risk and improves the basis for risk-informed well head management.
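The temporal-moment model reduction mentioned above can be illustrated on a synthetic breakthrough curve: instead of the full time series, only a few integral moments are propagated. The curve shape and time units below are assumptions.

```python
import numpy as np

t = np.linspace(0.0, 100.0, 2001)        # time, days (illustrative)
c = (t / 20.0) * np.exp(-t / 20.0)       # a hypothetical breakthrough curve

dt = t[1] - t[0]
m0 = np.sum(c) * dt                      # zeroth moment: total mass under the curve
m1 = np.sum(t * c) * dt                  # first moment
mean_arrival = m1 / m0                   # mean arrival time
spread = np.sum((t - mean_arrival) ** 2 * c) * dt / m0  # central second moment
```

From such moments, the full breakthrough curve (and hence peak arrival time, peak level, and exposure time) can be approximately reconstructed, e.g. by maximum-entropy reconstruction in log-time as done in the study.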
Energy Technology Data Exchange (ETDEWEB)
Mahowald, Natalie [Cornell Univ., Ithaca, NY (United States)
2016-11-29
Soils in natural and managed ecosystems and wetlands are well known sources of methane, nitrous oxides, and reactive nitrogen gases, but the magnitudes of gas flux to the atmosphere are still poorly constrained. Thus, the reasons for the large increases in atmospheric concentrations of methane and nitrous oxide since the preindustrial time period are not well understood. The low atmospheric concentrations of methane and nitrous oxide, despite being more potent greenhouse gases than carbon dioxide, complicate empirical studies to provide explanations. In addition to climate concerns, the emissions of reactive nitrogen gases from soils are important to the changing nitrogen balance in the earth system, subject to human management, and may change substantially in the future. Thus improved modeling of the emission fluxes of these species from the land surface is important. Currently, there are emission modules for methane and some nitrogen species in the Community Earth System Model’s Community Land Model (CLM-ME/N); however, there are large uncertainties and problems in the simulations, resulting in coarse estimates. In this proposal, we seek to improve these emission modules by combining state-of-the-art process modules for emissions, available data, and new optimization methods. In earth science problems, we often have substantial data and knowledge of processes in disparate systems, and thus we need to combine data and a general process level understanding into a model for projections of future climate that are as accurate as possible. The best methodologies for optimization of parameters in earth system models are still being developed. In this proposal we will develop and apply surrogate algorithms that a) were especially developed for computationally expensive simulations like CLM-ME/N models; b) were (in the earlier surrogate optimization Stochastic RBF) demonstrated to perform very well on computationally expensive complex partial differential equations in
Khayet, Mohamed; Fernández, Victoria
2012-11-14
Most aerial plant parts are covered with a hydrophobic lipid-rich cuticle, which is the interface between the plant organs and the surrounding environment. Plant surfaces may have a high degree of hydrophobicity because of the combined effects of surface chemistry and roughness. The physical and chemical complexity of the plant cuticle limits the development of models that explain its internal structure and interactions with surface-applied agrochemicals. In this article we introduce a thermodynamic method for estimating the solubilities of model plant surface constituents and relating them to the effects of agrochemicals. Following the van Krevelen and Hoftyzer method, we calculated the solubility parameters of three model plant species and eight compounds that differ in hydrophobicity and polarity. In addition, intact tissues were examined by scanning electron microscopy and the surface free energy, polarity, solubility parameter and work of adhesion of each were calculated from contact angle measurements of three liquids with different polarities. By comparing the affinities between plant surface constituents and agrochemicals derived from (a) theoretical calculations and (b) contact angle measurements we were able to distinguish the physical effect of surface roughness from the effect of the chemical nature of the epicuticular waxes. A solubility parameter model for plant surfaces is proposed on the basis of an increasing gradient from the cuticular surface towards the underlying cell wall. The procedure enabled us to predict the interactions among agrochemicals, plant surfaces, and cuticular and cell wall components, and promises to be a useful tool for improving our understanding of biological surface interactions.
2012-01-01
Background Most aerial plant parts are covered with a hydrophobic lipid-rich cuticle, which is the interface between the plant organs and the surrounding environment. Plant surfaces may have a high degree of hydrophobicity because of the combined effects of surface chemistry and roughness. The physical and chemical complexity of the plant cuticle limits the development of models that explain its internal structure and interactions with surface-applied agrochemicals. In this article we introduce a thermodynamic method for estimating the solubilities of model plant surface constituents and relating them to the effects of agrochemicals. Results Following the van Krevelen and Hoftyzer method, we calculated the solubility parameters of three model plant species and eight compounds that differ in hydrophobicity and polarity. In addition, intact tissues were examined by scanning electron microscopy and the surface free energy, polarity, solubility parameter and work of adhesion of each were calculated from contact angle measurements of three liquids with different polarities. By comparing the affinities between plant surface constituents and agrochemicals derived from (a) theoretical calculations and (b) contact angle measurements we were able to distinguish the physical effect of surface roughness from the effect of the chemical nature of the epicuticular waxes. A solubility parameter model for plant surfaces is proposed on the basis of an increasing gradient from the cuticular surface towards the underlying cell wall. Conclusions The procedure enabled us to predict the interactions among agrochemicals, plant surfaces, and cuticular and cell wall components, and promises to be a useful tool for improving our understanding of biological surface interactions. PMID:23151272
Superconducting High Current Magnetic Circuit: Design and Parameter Estimation of a Simulation Model
Kiefer, Alexander; Reich, Werner Dr
The Large Hadron Collider (LHC) utilizes superconducting main dipole magnets that bend the trajectory of the particle beams. In order to adjust the not completely homogeneous magnetic field of the main dipole magnets, amongst others, sextupole corrector magnets are used. In one of the 16 corrector magnet circuits placed in the LHC, 154 of these sextupole corrector magnets (MCS) are connected in series. This circuit extends over a 3.35 km tunnel section of the LHC. In 2015, a fault was detected in one of the 16 circuits. Simulation of this circuit is helpful for finding the fault by applying alternating current at different frequencies. Within this thesis, a PSpice model for the simulation of the superconducting corrector magnet circuit was designed. The physical properties of the circuit and its elements were analyzed and implemented. For the magnets and bus-bars, sub-circuits were created which reflect the parasitic effects of electrodynamics and electrostatics. The inductance values and capacitance values ...
Estimates of source parameters of M4.9 Kharsali earthquake using waveform modelling
Paul, Ajay; Kumar, Naresh
2010-10-01
This paper presents the computation of time series of the 22 July 2007 M 4.9 Kharsali earthquake. It occurred close to the Main Central Thrust (MCT), where a seismic gap exists. The main shock and 17 aftershocks were located by eleven closely spaced seismograph stations in a network that involved VSAT-based real-time seismic monitoring. The largest aftershock of M 3.5 and the other aftershocks occurred within a small volume of 4 × 4 km horizontal extent and between depths of 10 and 14 km. The values of seismic moment (M0), determined using P-wave spectra and Brune's model based on the f^2 spectral shape, range from 10^18 to 10^23 dyne-cm. The initial aftershocks occurred at greater depth compared to the later aftershocks. The time series of ground motion have been computed for the recording sites using geometric ray theory and a Green's function approach. The method for computing the time series consists of integrating the far-field contributions of the Green's function for a number of distributed point sources. The generated waveforms have been compared with the observed ones. It has been inferred that the Kharsali earthquake occurred due to a northerly dipping low-angle thrust fault at a depth of 14 km, taking strike N279°E, dip 14° and rake 117°. There are two regions on the fault surface which have larger slip amplitudes (asperities), and the rupture, considered circular in nature, initiated from the deeper asperity and shifted gradually upwards. The two asperities cover only 10% of the total area of the causative fault plane. However, detailed seismic imaging of these two asperities can be corroborated with structural heterogeneities associated with the causative fault to understand how seismogenesis is influenced by strong or weak structural barriers in the region.
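Brune's omega-squared model, used above to determine the seismic moments, describes the displacement spectrum as a flat low-frequency level Ω0 (proportional to M0) with an f^-2 fall-off beyond the corner frequency fc. A sketch of fitting that shape to a synthetic spectrum (the level and corner frequency below are invented, not the Kharsali values):

```python
import numpy as np
from scipy.optimize import curve_fit

def brune(f, omega0, fc):
    """Brune source spectrum: flat level omega0 with f^-2 high-frequency fall-off."""
    return omega0 / (1.0 + (f / fc) ** 2)

f = np.logspace(-1, 1.5, 80)                 # 0.1 to ~31.6 Hz
rng = np.random.default_rng(2)
# Synthetic P-wave displacement spectrum with multiplicative scatter
spec = brune(f, 3.0e20, 2.5) * rng.lognormal(0.0, 0.1, f.size)

(omega0, fc), _ = curve_fit(brune, f, spec, p0=(1.0e20, 1.0))
```

The fitted `omega0` is then converted to seismic moment via the standard relation involving density, shear-wave velocity, hypocentral distance, and radiation pattern coefficients.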
Belykh, Evgenii; Krutko, Alexander V; Baykov, Evgenii S; Giers, Morgan B; Preul, Mark C; Byvaltsev, Vadim A
2017-03-01
Recurrence of lumbar disc herniation (rLDH) is one of the unfavorable outcomes after microdiscectomy. Prediction of the patient population with increased risk of rLDH is important because these patients may benefit from preventive measures or other surgical options. The study assessed preoperative factors associated with rLDH after microdiscectomy and created a mathematical model for estimating the chances of rLDH. This is a retrospective case-control study of patients who underwent microdiscectomy for LDH. Lumbar disc herniation recurrence was determined using magnetic resonance imaging. The study included 350 patients with LDH and a minimum of 3 years of follow-up. Patients underwent microdiscectomy for LDH at the L4-L5 and L5-S1 levels from 2008 to 2012. Patients were divided into two groups to identify predictors of recurrence: those who developed rLDH (n=50) within 3 years and those who did not develop rLDH (n=300) within the same follow-up period. Multivariate analysis was performed using patient baseline clinical and radiography data. Non-linear, multivariate, logistic regression analysis was used to build a predictive model. Recurrence of LDH occurred within 1 to 48 months after microdiscectomy. Preoperatively, patients who developed rLDH were more often smokers (70% vs. 27%). Non-linear modeling allowed for more accurate prediction of rLDH (90% correct prediction of rLDH; 99% correct prediction of no rLDH) than other univariate logit models. Preoperative radiographic parameters in patients with LDH can be used to assess the risk of recurrence after microdiscectomy. The multifactorial non-linear model provided more accurate rLDH probability estimation than the univariate analyses. The software developed from this model may be implemented during patient counseling or decision making when choosing the type of primary surgery for LDH. Copyright © 2016 Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Ibsen, Lars Bo; Liingaard, M.
2006-12-15
A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. Following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)
Wang, Daosheng; Zhang, Jicai; He, Xianqiang; Chu, Dongdong; Lv, Xianqing; Wang, Ya Ping; Yang, Yang; Fan, Daidu; Gao, Shu
2018-01-01
Model parameters in suspended cohesive sediment transport models are critical for the accurate simulation of suspended sediment concentrations (SSCs). Difficulties in estimating the model parameters still prevent numerical modeling of sediment transport from achieving a high level of predictability. Based on a three-dimensional cohesive sediment transport model and its adjoint model, satellite remote sensing data of SSCs during both spring tide and neap tide, retrieved from the Geostationary Ocean Color Imager (GOCI), are assimilated to synchronously estimate four spatially and temporally varying parameters in the Hangzhou Bay in China: settling velocity, resuspension rate, inflow open boundary conditions, and initial conditions. After data assimilation, the model performance is significantly improved. Through several sensitivity experiments, the spatial and temporal variation tendencies of the estimated model parameters are verified to be robust and not affected by model settings. The pattern of the variations of the estimated parameters is analyzed and summarized. The temporal variations and spatial distributions of the estimated settling velocity are negatively correlated with current speed, which can be explained by the combination of the flocculation process and Stokes' law. The temporal variations and spatial distributions of the estimated resuspension rate are also negatively correlated with current speed, which is related to the grain size of the seabed sediments under different current velocities. Besides, the estimated inflow open boundary conditions reach their local maximum values near the low water slack conditions, and the estimated initial conditions are negatively correlated with water depth, which is consistent with the general understanding. The relationships between the estimated parameters and the hydrodynamic fields can be suggestive for improving the parameterization in cohesive sediment transport models.
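Stokes' law, invoked above to explain the settling-velocity pattern, gives the still-water settling velocity of a small particle as w_s = g (ρ_s − ρ_w) d² / (18 μ). A minimal sketch with assumed fluid and grain properties (illustrative values, not the estimated Hangzhou Bay parameters):

```python
# Stokes settling velocity for a fine particle or floc (laminar regime)
g = 9.81          # gravitational acceleration, m/s^2
rho_s = 2650.0    # sediment grain density, kg/m^3
rho_w = 1025.0    # seawater density, kg/m^3
mu = 1.07e-3      # dynamic viscosity of water, Pa*s
d = 20e-6         # particle (or floc) diameter, m

w_s = g * (rho_s - rho_w) * d**2 / (18.0 * mu)   # settling velocity, m/s
```

Because w_s grows with the square of the diameter, flocculation at low current speeds (larger flocs) raises the effective settling velocity, consistent with the negative correlation with current speed reported above.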
Directory of Open Access Journals (Sweden)
Nita Suhartini
2010-06-01
A study of soil erosion rates was carried out on a gentle, long slope of cultivated area in Ciawi-Bogor, using the 137Cs technique. The objective of the present study was to evaluate the applicability of the 137Cs technique for obtaining spatially distributed information on soil redistribution in a small catchment. This paper reports the results of the choice of conversion model for erosion rate estimates and the sensitivity to changes in the model parameters. For this purpose, a small site was selected, namely landuse I (LU-I). The top of a slope was chosen as the reference site. The erosion/deposition rate at individual sampling points was estimated using three conversion models, namely the Proportional Model (PM), Mass Balance Model 1 (MBM1), and Mass Balance Model 2 (MBM2). A comparison of the conversion models showed that the lowest value is obtained by the PM. MBM1 gave values close to those of MBM2, but MBM2 gave the more reliable values. In this study, a sensitivity analysis suggests that the conversion models are sensitive to changes in parameters that depend on the site conditions, but insensitive to changes in parameters related to the onset of the 137Cs fallout input. Keywords: soil erosion, environmental radioisotope, cesium
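The Proportional Model conversion can be sketched with the common Walling-style formulation Y = 10 d B X / (100 T), where X is the percentage 137Cs inventory loss relative to the reference site. The inventories, bulk density, plough depth, and time span below are illustrative assumptions, not the Ciawi measurements.

```python
def proportional_model(A, A_ref, bulk_density, plough_depth, years):
    """Net erosion rate in t ha^-1 yr^-1 from 137Cs inventories (Bq m^-2).

    A           : inventory at the sampling point
    A_ref       : inventory at the undisturbed reference site
    bulk_density: soil bulk density, kg/m^3
    plough_depth: depth of the plough (cultivation) layer, m
    years       : time elapsed since the onset of 137Cs fallout accumulation
    """
    X = 100.0 * (A_ref - A) / A_ref          # percentage 137Cs loss
    return 10.0 * plough_depth * bulk_density * X / (100.0 * years)

# Hypothetical sampling point vs. the reference site at the top of the slope
Y = proportional_model(A=1600.0, A_ref=2000.0,   # Bq/m^2
                       bulk_density=1300.0,      # kg/m^3
                       plough_depth=0.25,        # m
                       years=45)
```

A negative X (inventory above the reference) would indicate deposition rather than erosion; the mass balance models refine this conversion by accounting for fallout timing and tillage mixing.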
Directory of Open Access Journals (Sweden)
Dario Constantinescu
2016-12-01
Drought stress is a major abiotic stress threatening plant and crop productivity. In the case of fleshy fruits, understanding the mechanisms governing water and carbon accumulation and identifying the genes, QTLs and phenotypes that will enable trade-offs between fruit growth and quality under Water Deficit (WD) conditions is a crucial challenge for breeders and growers. In the present work, 117 recombinant inbred lines of a population of Solanum lycopersicum were phenotyped under control and WD conditions. Plant water status, fruit growth and composition were measured, and the data were used to calibrate a process-based model describing water and carbon fluxes in a growing fruit as a function of plant and environment. Eight genotype-dependent model parameters were estimated using a multiobjective evolutionary algorithm in order to minimize the prediction errors of fruit dry and fresh mass throughout fruit development. WD increased the fruit dry matter content (up to 85%) and decreased its fresh weight (up to 60%), big fruit size genotypes being the most sensitive. The mean normalized root mean squared errors of the predictions ranged between 16-18% in the population. Variability in the genotypic model parameters allowed us to explore diverse genetic strategies in response to WD. An interesting group of genotypes could be discriminated in which (i) the low loss of fresh mass under WD was associated with high active uptake of sugars and a low value of the maximum cell wall extensibility, and (ii) the high dry matter content in the control treatment (C) was associated with a slow decrease of mass flow. Using 501 SNP markers genotyped across the genome, a QTL analysis of the model parameters detected three main QTLs related to xylem and phloem conductivities, on chromosomes 2, 4 and 8. The model was then applied to design ideotypes with high dry matter
Directory of Open Access Journals (Sweden)
Peter S. Ojiambo
2017-06-01
Empirical and mechanistic modeling indicate that pathogens transmitted via aerially dispersed inoculum follow a power law, resulting in dispersive epidemic waves. The spread parameter (b) of the power law model, which is an indicator of the distance of the epidemic wave front from an initial focus per unit time, has been found to be approximately 2 for several animal and plant diseases over a wide range of spatial scales under conditions favorable for disease spread. Although disease spread and epidemic expansion can be influenced by several factors, the stability of the parameter b over multiple epidemic years has not been determined. Additionally, the size of the initial epidemic area is expected to be strongly related to the final epidemic extent, but the stability of this relationship is also not well established. Here, empirical data on cucurbit downy mildew epidemics collected from 2008 to 2014 were analyzed using a spatio-temporal model of disease spread that incorporates logistic growth in time with a power law function for dispersal. Final epidemic extent ranged from 4.16 × 10^8 km^2 in 2012 to 6.44 × 10^8 km^2 in 2009. Current epidemic extent became significantly associated (P < 0.0332; 0.56 < R^2 < 0.99) with final epidemic area beginning near the end of April, with the association increasing monotonically to 1.0 by the end of the epidemic season in July. The position of the epidemic wave front became exponentially more distant with time, and epidemic velocity increased linearly with distance. Slopes from the temporal and spatial regression models varied over about a 2.5-fold range across epidemic years. Estimates of b varied substantially, ranging from 1.51 to 4.16 across epidemic years. We observed a significant b × time (or distance) interaction (P < 0.05) for epidemic years where the data were well described by the power law model. These results suggest that the spread parameter b may not be stable over multiple epidemic years.
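Estimating the spread parameter b of a power-law disease gradient reduces to a log-log regression of disease intensity on distance from the focus. The distances and intensities below are synthetic, generated with a known b, not the cucurbit downy mildew data.

```python
import numpy as np

rng = np.random.default_rng(3)
dist = np.logspace(1, 3, 25)                   # km from the initial focus
b_true = 2.0
# Power-law gradient y ~ dist^-b with multiplicative scatter
y = dist ** (-b_true) * rng.lognormal(0.0, 0.15, dist.size)

# Slope of log(y) on log(distance) estimates -b
slope, intercept = np.polyfit(np.log(dist), np.log(y), 1)
b_hat = -slope
```

For such fat-tailed (b near 2) dispersal kernels, the wave front accelerates, advancing exponentially in time, which is consistent with the front behavior reported above.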
Directory of Open Access Journals (Sweden)
Niancheng Zhou
2014-08-01
The influence of electric vehicle charging stations on power grid harmonics is becoming increasingly significant as their presence continues to grow. This paper studies the operational principles of the charging current in the continuous and discontinuous modes for a three-phase uncontrolled rectification charger with a passive power factor correction link, which is affected by the charging power. A parameter estimation method is proposed for the equivalent circuit of the charger, using measured characteristic AC (Alternating Current) voltage and current data combined with the charging circuit constraints in the conduction process; this method is verified on an experimental platform. The sensitivity of the current harmonics to changes in the parameters is analyzed. An analytical harmonic model of the charging station is created by separating the chargers into groups by type. Then, the harmonic current amplification caused by the shunt active power filter is investigated, and the analytical formula for the overload factor is derived to further correct the capacity of the shunt active power filter. Finally, the method is validated through a field test of a charging station.
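Harmonic content and total harmonic distortion (THD) of a charger input current can be extracted via FFT, as sketched below on a synthetic waveform (a fundamental plus 5th and 7th harmonics, characteristic of three-phase uncontrolled rectifiers); the amplitudes are invented, not measured data.

```python
import numpy as np

f0, fs = 50.0, 10000.0                  # fundamental and sampling frequency, Hz
t = np.arange(0, 0.2, 1.0 / fs)         # 10 fundamental cycles
i = (10.0 * np.sin(2 * np.pi * f0 * t)
     + 2.0 * np.sin(2 * np.pi * 5 * f0 * t)
     + 1.0 * np.sin(2 * np.pi * 7 * f0 * t))

spec = np.abs(np.fft.rfft(i)) * 2.0 / t.size   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

def amp(h):
    """Amplitude of the h-th harmonic (bins here fall exactly on multiples of 5 Hz)."""
    return spec[np.argmin(np.abs(freqs - h * f0))]

# THD: RMS of harmonics 2..39 relative to the fundamental
thd = np.sqrt(sum(amp(h) ** 2 for h in range(2, 40))) / amp(1)
```

Using an integer number of fundamental cycles keeps each harmonic on an exact FFT bin and avoids spectral leakage; with windowed, non-synchronous measurements the per-harmonic amplitudes would need interpolation across bins.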
Directory of Open Access Journals (Sweden)
majid montaseri
2017-03-01
Full Text Available Introduction: Total dissolved solids (TDS) is an important indicator for water quality assessment. Since the composition of mineral salts and discharge affect the TDS of water, it is important to understand the relationships of mineral salt composition with TDS. Materials and Methods: In this study, artificial neural networks with five different training algorithms, Levenberg-Marquardt (LM), Scaled Conjugate Gradient (SCG), Fletcher Conjugate Gradient (CGF), One Step Secant (OSS) and Gradient Descent with Adaptive learning rate backpropagation (GDA), and an adaptive neuro-fuzzy inference system (ANFIS) based on subtractive clustering were used to model water quality properties of the Zarrineh River Basin for total dissolved solids prediction. The ANN and ANFIS program code was written in the MATLAB language. Here, an ANN with one hidden layer was used and the number of hidden nodes was determined by trial and error. Different activation functions (logarithm sigmoid, tangent sigmoid and linear) were tried for the hidden and output nodes. Water quality data from seven hydrometric stations were used over the statistical period of 18 years (1993-2010). The study period was divided into dry-flow and wet-flow periods, and in a preliminary statistical analysis the main parameters affecting the estimation of TDS were determined and used for modeling. 75% of the data were used for training and 25% for evaluation of the model, selected randomly. Three statistical evaluation criteria, the correlation coefficient (R), the root mean square error (RMSE) and the mean absolute error (MAE), were used to assess model performance. Results and Discussion: By applying the correlation coefficient method between the water quality parameters and discharge and total dissolved solids for the wet and dry periods, the significant (at the 95% level) variables entered into the model were Q, HCO3, Cl, SO4, Ca
Aihara, ShinIchi; Bagchi, Arunabha; Saha, S.
We consider the problem of estimating stochastic volatility from stock data. The estimation of the volatility process of the Heston model is not in the usual framework of the filtering theory. Discretizing the continuous Heston model to the discrete-time one, we can derive the exact volatility
Reichle, Rolf; Koster, Randal; DeLannoy, Gabrielle; Forman, Barton; Liu, Qing; Mahanama, Sarith; Toure, Ally
2011-01-01
The Modern-Era Retrospective analysis for Research and Applications (MERRA) is a state-of-the-art reanalysis that provides, in addition to atmospheric fields, global estimates of soil moisture, latent heat flux, snow, and runoff for 1979-present. This study introduces a supplemental and improved set of land surface hydrological fields ('MERRA-Land') generated by replaying a revised version of the land component of the MERRA system. Specifically, the MERRA-Land estimates benefit from corrections to the precipitation forcing with the Global Precipitation Climatology Project pentad product (version 2.1) and from revised parameters in the rainfall interception model, changes that effectively correct for known limitations in the MERRA land surface meteorological forcings. The skill (defined as the correlation coefficient of the anomaly time series) in land surface hydrological fields from MERRA and MERRA-Land is assessed here against observations and compared to the skill of the state-of-the-art ERA-Interim reanalysis. MERRA-Land and ERA-Interim root zone soil moisture skills (against in situ observations at 85 US stations) are comparable and significantly greater than that of MERRA. Throughout the northern hemisphere, MERRA and MERRA-Land agree reasonably well with in situ snow depth measurements (from 583 stations) and with snow water equivalent from an independent analysis. Runoff skill (against naturalized stream flow observations from 15 basins in the western US) of MERRA and MERRA-Land is typically higher than that of ERA-Interim. With a few exceptions, the MERRA-Land data appear more accurate than the original MERRA estimates and are thus recommended for those interested in using MERRA output for land surface hydrological studies.
Directory of Open Access Journals (Sweden)
Jianping Gao
2015-08-01
Full Text Available Accurate state of charge (SoC) estimation of batteries plays an important role in promoting the commercialization of electric vehicles. The main work to be done in accurately determining battery SoC can be summarized in three parts. (1) In view of the model-based SoC estimation flow diagram, the n-order resistance-capacitance (RC) battery model is proposed and expected to accurately simulate the battery's major time-variable, nonlinear characteristics. Then, the mathematical equations for model parameter identification and SoC estimation of this model are constructed. (2) The Akaike information criterion is used to determine an optimal tradeoff between battery model complexity and prediction precision for the n-order RC battery model. Results from a comparative analysis show that the first-order RC battery model is found to be the best based on the Akaike information criterion (AIC) values. (3) The real-time joint estimator for the model parameters and SoC is constructed, and application to two battery types indicates that the proposed SoC estimator is a closed-loop identification system in which the model parameter identification and SoC estimation are corrected mutually, adaptively and simultaneously according to the observed values. The maximum SoC estimation error is less than 1% for both battery types, even with an inaccurate initial SoC.
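The AIC-based model-order selection in point (2) above can be sketched in a few lines. The residuals and parameter counts below are synthetic stand-ins, not data from the paper:

```python
import numpy as np

def aic(residuals, n_params):
    """Akaike information criterion for a least-squares fit:
    AIC = n*ln(RSS/n) + 2k, where RSS is the residual sum of squares."""
    r = np.asarray(residuals, dtype=float)
    n = r.size
    rss = float(np.sum(r ** 2))
    return n * np.log(rss / n) + 2 * n_params

# Hypothetical voltage residuals of two candidate RC models on the same data:
# the higher-order model fits slightly better but pays a 2-per-parameter penalty.
rng = np.random.default_rng(0)
resid_1rc = rng.normal(0.0, 0.010, 500)   # first-order RC: 3 parameters
resid_2rc = rng.normal(0.0, 0.009, 500)   # second-order RC: 5 parameters
aic_1, aic_2 = aic(resid_1rc, 3), aic(resid_2rc, 5)
best_order = 1 if aic_1 < aic_2 else 2
```

The model with the lowest AIC is selected, trading goodness of fit against complexity exactly as the abstract describes.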
Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi
2017-06-01
The electrochemistry-based battery model can provide physics-meaningful knowledge about the lithium-ion battery system, but with an extensive computational burden. To motivate the development of reduced-order battery models, three major contributions are made throughout this paper: (1) A transfer-function type of simplified electrochemical model is proposed to address the current-voltage relationship, using the Padé approximation method and modified boundary conditions for the electrolyte diffusion equations. The model performance has been verified under pulse charge/discharge and dynamic stress test (DST) profiles, with a standard deviation of less than 0.021 V and a runtime 50 times faster. (2) The parametric relationship between the equivalent circuit model and the simplified electrochemical model is established, which enhances the comprehension of the two models with more in-depth physical significance and provides new methods for electrochemical model parameter estimation. (3) Four simplified electrochemical model parameters: the equivalent resistance Req, the effective diffusion coefficient in the electrolyte phase Deeff, the electrolyte phase volume fraction ε and the open circuit voltage (OCV), are identified by the recursive least squares (RLS) algorithm with modified DST profiles at 45, 25 and 0 °C. The simulation results indicate that the proposed model coupled with the RLS algorithm can achieve high accuracy for electrochemical parameter identification in dynamic scenarios.
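Point (3) above relies on recursive least squares for online parameter identification. A generic textbook RLS sketch follows; the regressor and the two "true" parameters are illustrative, not the electrochemical quantities of the paper:

```python
import numpy as np

class RecursiveLeastSquares:
    """Standard RLS with forgetting factor lam for y = phi^T theta + noise."""
    def __init__(self, n_params, lam=0.99, p0=1e3):
        self.theta = np.zeros(n_params)       # parameter estimate
        self.P = np.eye(n_params) * p0        # covariance-like matrix
        self.lam = lam

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)             # Kalman-like gain
        self.theta = self.theta + gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

# Identify a 2-parameter linear model y = a*x + b from noisy samples.
rng = np.random.default_rng(1)
true_theta = np.array([0.05, 3.3])            # hypothetical slope and offset
rls = RecursiveLeastSquares(2)
for _ in range(2000):
    phi = np.array([rng.uniform(-5.0, 5.0), 1.0])
    y = phi @ true_theta + rng.normal(0.0, 0.01)
    rls.update(phi, y)
```

The forgetting factor below 1 lets the estimator track slowly varying parameters, which is why RLS suits dynamic (e.g. DST) profiles.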
Varella, H.-V.
2009-04-01
Dynamic crop models are very useful for predicting the behavior of crops in their environment and are widely used in agro-environmental work. These models have many parameters, and their spatial application requires good knowledge of these parameters, especially the soil parameters. The soil parameters can be estimated from soil analysis at different points, but this is very costly and requires a lot of experimental work. Nevertheless, observations on crops provided by new techniques like remote sensing or yield monitoring offer a possibility for estimating soil parameters through the inversion of crop models. In this work, the STICS crop model, which includes more than 200 parameters, is studied for wheat and sugar beet. After previous work based on a large experimental database to calibrate parameters related to the characteristics of the crop, a global sensitivity analysis of the observed variables (leaf area index (LAI) and absorbed nitrogen (QN) provided by remote sensing data, and yield at harvest provided by yield monitoring) to the soil parameters is carried out, in order to determine which of them have to be estimated. This study was conducted under different climatic and agronomic conditions and reveals that 7 soil parameters (4 related to water and 3 related to nitrogen) have a clear influence on the variance of the observed variables and therefore have to be estimated. For estimating these 7 soil parameters, a Bayesian data assimilation method named Importance Sampling is chosen (because of available prior information on these parameters), using observations on wheat and sugar beet crops of LAI and QN at various dates and yield at harvest, acquired under different climatic and agronomic conditions. The quality of parameter estimation is then determined by comparing the result of parameter estimation with only prior information and the result with the posterior information provided by the Bayesian data assimilation method. The result of the
Directory of Open Access Journals (Sweden)
Fei Feng
2015-04-01
Full Text Available This study describes an online estimation of the model parameters and state of charge (SOC) of lithium iron phosphate batteries in electric vehicles. A widely used SOC estimator is based on a dynamic battery model with predetermined parameters. However, model parameter variations that follow from varying operating temperatures can result in errors in estimating battery SOC. To address this problem, a battery online parameter estimator is presented based on an equivalent circuit model using an adaptive joint extended Kalman filter algorithm. Simulations based on actual data are established to verify accuracy and stability in the regression of model parameters. Experiments are also performed to prove that the proposed estimator exhibits good reliability and adaptability under different loading profiles at various temperatures. In addition, open-circuit voltage (OCV) is used to estimate SOC in the proposed algorithm. However, the OCV based on the proposed online identification includes a part of the concentration polarization and hysteresis, and is defined as the parametric identification-based OCV (OCVPI). Considering the temperature factor, a novel OCV-SOC relationship map is established by using OCVPI at various temperatures. Finally, a validating experiment is conducted based on consecutive loading profiles. Results indicate that our method is effective and adaptable when a battery operates at different ambient temperatures.
Crépet, Amélie; Stahl, Valérie; Carlin, Frédéric
2009-05-31
The optimal growth rate mu(opt) of Listeria monocytogenes in minimally processed (MP) fresh leafy salads was estimated with a hierarchical Bayesian model at (mean+/-standard deviation) 0.33+/-0.16 h(-1). This mu(opt) value was much lower on average than that in nutrient broth, liquid dairy, meat and seafood products (0.7-1.3 h(-1)), and of the same order of magnitude as in cheese. Cardinal temperatures T(min), T(opt) and T(max) were determined at -4.5+/-1.3 degrees C, 37.1+/-1.3 degrees C and 45.4+/-1.2 degrees C respectively. These parameters were determined from 206 growth curves of L. monocytogenes in MP fresh leafy salads (lettuce including iceberg lettuce, broad leaf endive, curly leaf endive, lamb's lettuce, and mixtures of them) selected in the scientific literature and in technical reports. The adequacy of the model was evaluated by comparing observed data (bacterial concentrations at each experimental time for the completion of the 206 growth curves, mean log(10) increase at selected times and temperatures, L. monocytogenes concentrations in naturally contaminated MP iceberg lettuce) with the distribution of the predicted data generated by the model. The sensitivity of the model to assumptions about the prior values also was tested. The observed values mostly fell into the 95% credible interval of the distribution of predicted values. The mu(opt) and its uncertainty determined in this work could be used in quantitative microbial risk assessment for L. monocytogenes in minimally processed fresh leafy salads.
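The cardinal-parameter description of growth rate above lends itself to a compact secondary model. Below is a sketch using the Rosso cardinal temperature model (CTMI) with the point estimates quoted in the abstract; the CTMI functional form is a common choice in predictive microbiology, assumed here rather than taken from the paper:

```python
def growth_rate(T, mu_opt=0.33, t_min=-4.5, t_opt=37.1, t_max=45.4):
    """Rosso cardinal temperature model (CTMI): mu(T) = mu_opt * gamma(T),
    with gamma(t_opt) = 1 and mu = 0 outside (t_min, t_max).
    Parameter values are the point estimates quoted in the abstract."""
    if T <= t_min or T >= t_max:
        return 0.0
    num = (T - t_max) * (T - t_min) ** 2
    den = (t_opt - t_min) * ((t_opt - t_min) * (T - t_opt)
                             - (t_opt - t_max) * (t_opt + t_min - 2.0 * T))
    return mu_opt * num / den

# Growth rate of L. monocytogenes at refrigeration temperature (h^-1):
mu_at_8C = growth_rate(8.0)
```

By construction the model returns the optimal rate mu_opt at T = t_opt and zero growth outside the cardinal temperature range.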
Gu, Fengshou; Ball, Andrew; Rao, K K
1996-01-01
Part 2 of this paper presents the experimental and analytical procedures used in the estimation of injection parameters from monitored vibration. The mechanical and flow‐induced sources of vibration in a fuel injector are detailed and the features of the resulting vibration response of the injector body are discussed. Experimental engine test and data acquisition procedures are described, and the use of an out‐of‐the‐engine test facility to confirm injection dependent vibration response is ou...
Nemeth, Christopher; Fearnhead, Paul; Mihaylova, Lyudmila
2013-01-01
Poyiadjis et al. (2011) show how particle methods can be used to estimate both the score and the observed information matrix for state space models. These methods either suffer from a computational cost that is quadratic in the number of particles, or produce estimates whose variance increases quadratically with the amount of data. This paper introduces an alternative approach for estimating these terms at a computational cost that is linear in the number of particles. The method is derived u...
Cosmological parameter estimation using particle swarm optimization
Prasad, Jayanti; Souradeep, Tarun
2012-06-01
Constraining theoretical models, which are represented by a set of parameters, using observational data is an important exercise in cosmology. In the Bayesian framework this is done by finding the probability distribution of parameters which best fits the observational data using sampling-based methods like Markov chain Monte Carlo (MCMC). It has been argued that MCMC may not be the best option for certain problems in which the target function (likelihood) possesses local maxima or has very high dimensionality. Apart from this, there may be examples in which we are mainly interested in finding the point in the parameter space at which the probability distribution has the largest value. In this situation the problem of parameter estimation becomes an optimization problem. In the present work we show that particle swarm optimization (PSO), which is an artificial-intelligence-inspired population-based search procedure, can also be used for cosmological parameter estimation. Using PSO we were able to recover the best-fit Λ cold dark matter (LCDM) model parameters from the WMAP seven-year data without using any prior guess value or any other property of the probability distribution of parameters, like the standard deviation, as is common in MCMC. We also report the results of an exercise in which we consider a binned primordial power spectrum (to increase the dimensionality of the problem) and find that a power spectrum with features gives a lower chi-square than the standard power law. Since PSO does not sample the likelihood surface in a fair way, we follow a fitting procedure to find the spread of the likelihood function around the best-fit point.
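A minimal PSO sketch illustrates the idea: particles explore the parameter space and converge on the minimum chi-square (maximum probability) point. The toy quadratic surface and all tuning constants below are illustrative assumptions, not the WMAP likelihood:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer; returns (best position, best value)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()             # global best
    g_val = pbest_val.min()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        if pbest_val.min() < g_val:
            g_val = pbest_val.min()
            g = pbest[np.argmin(pbest_val)].copy()
    return g, g_val

# Toy "likelihood": a quadratic chi-square surface in two parameters.
chi2 = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 70.0) ** 2 / 100.0
best, val = pso_minimize(chi2, [(0.0, 1.0), (40.0, 100.0)])
```

Note that, unlike MCMC, no prior guess or proposal width is supplied; only box bounds on each parameter are needed.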
Parameter Estimation in Continuous Time Domain
Directory of Open Access Journals (Sweden)
Gabriela M. ATANASIU
2016-12-01
Full Text Available This paper presents the application of a continuous-time parameter estimation method for estimating the structural parameters of a real bridge structure. To illustrate the method, two case studies of a bridge pile located in an area of high seismic risk are considered, for which the structural parameters for mass, damping and stiffness are estimated. The estimation process is followed by validation of the analytical results and comparison with the measurement data. Further benefits and applications of the continuous-time parameter estimation method in civil engineering are presented in the final part of this paper.
International Nuclear Information System (INIS)
Zhang, Song; Rajamani, Rajesh
2016-01-01
This paper develops analytical sensing principles for estimation of circumferential size of a cylindrical surface using magnetic sensors. An electromagnet and magnetic sensors are used on a wearable band for measurement of leg size. In order to enable robust size estimation during rough real-world use of the wearable band, three estimation algorithms are developed based on models of the magnetic field variation over a cylindrical surface. The magnetic field models developed include those for a dipole and for a uniformly magnetized cylinder. The estimation algorithms used include a linear regression equation, an extended Kalman filter and an unscented Kalman filter. Experimental laboratory tests show that the size sensor in general performs accurately, yielding sub-millimeter estimation errors. The unscented Kalman filter yields the best performance that is robust to bias and misalignment errors. The size sensor developed herein can be used for monitoring swelling due to fluid accumulation in the lower leg and a number of other biomedical applications. (paper)
Parameter estimation of an aeroelastic aircraft using neural networks
Indian Academy of Sciences (India)
https://www.ias.ac.in/article/fulltext/sadh/025/02/0181-0191. Keywords. Parameter estimation; modelling; aeroelastic aircraft; neural networks; system identification. Abstract. Application of neural networks to the problem of aerodynamic modelling and parameter estimation for aeroelastic aircraft is addressed. A neural model ...
Multi-objective optimization in quantum parameter estimation
Gong, BeiLi; Cui, Wei
2018-04-01
We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.
Shafei, Babak; Schmid, Martin; Müller, Beat; Chwalek, Thomas
2014-05-01
Sediment diagenesis can significantly impact lake water quality by depleting hypolimnetic oxygen and acting as a sink or source of nutrients and contaminants. In this study, we apply MATsedLAB, a sediment diagenesis module developed in MATLAB [1, 2], to quantify benthic oxygen consumption and the biogeochemical cycling of phosphate (P) in the lacustrine sediments of Lake Baldegg, located in central Switzerland. MATsedLAB provides access to the advanced computational and visualization capabilities of the interactive MATLAB programming environment. It allows a flexible definition of non-steady-state boundary conditions at the sediment-water interface (SWI), of the model parameters, and of the transport and biogeochemical reactions. The model has been extended to facilitate model-independent parameter estimation and uncertainty analysis using the software package PEST. Lake Baldegg represents an interesting case where sediment-water interactions control P loading in a eutrophic lake. It has a surface area of 5.2 km2 and has been artificially aerated since 1982. Between 1960 and 1980, low oxygen concentrations and meromictic conditions were established as a result of high productivity. Here, we calibrate the developed model [3] using cores collected by Torres et al. (2013) from the deepest location (66 m) in April and June 2012 for measurements of anions and cations, respectively. Depth profiles of thirty-three species were simulated by including thirty mixed kinetic-equilibrium biogeochemical processes and by imposing the fluxes of organic and inorganic matter, along with solute concentrations at the SWI, as dynamic boundary conditions. Diffusive transport in the boundary layer (DBL) above the SWI was included, as the supply of O2 to the sediment surface can be diffusion-limited, and applying a constant O2 concentration at the sediment surface may overestimate O2 consumption. Benthic oxygen consumption was calculated as a function of
DEFF Research Database (Denmark)
Vang, Jakob Rabjerg; Zhou, Fan; Andreasen, Søren Juhl
2015-01-01
A high temperature PEM (HTPEM) fuel cell model capable of simulating both steady state and dynamic operation is presented. The purpose is to enable extraction of unknown parameters from sets of impedance spectra and polarisation curves. The model is fitted to two polarisation curves and four...
Kang, Ling; Zhang, Song
2016-01-01
Heuristic search algorithms, which are characterized by faster convergence rates and can obtain better solutions than traditional mathematical methods, are extensively used in engineering optimization. In this paper, a newly developed elitist-mutated particle swarm optimization (EMPSO) technique and an improved gravitational search algorithm (IGSA) are successively applied to parameter estimation problems of Muskingum flood routing models. First, the global optimization performance of the EMPSO and IGSA is validated on nine standard benchmark functions. Then, to further analyse the applicability of the EMPSO and IGSA for various forms of Muskingum models, three typical structures are considered: the basic two-parameter linear Muskingum model (LMM), a three-parameter nonlinear Muskingum model (NLMM) and a four-parameter nonlinear Muskingum model which incorporates the lateral flow (NLMM-L). The problems are formulated as optimization procedures to minimize the sum of the squared deviations (SSQ) or the sum of the absolute deviations (SAD) between the observed and the estimated outflows. Comparative results of the selected numerical cases (Cases 1-3) show that the EMPSO and IGSA not only converge rapidly but also obtain the same best optimal parameter vector in every run. The EMPSO and IGSA exhibit superior robustness and provide two efficient alternative approaches that can be confidently employed to estimate the parameters of both linear and nonlinear Muskingum models in engineering applications.
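The linear Muskingum formulation and the SSQ objective that EMPSO/IGSA minimize can be sketched directly. Below, a synthetic flood wave is routed with known (K, X) and the parameters are recovered with a plain grid search standing in for the heuristic optimizers; the inflow series and parameter values are made up for illustration:

```python
import numpy as np

def muskingum_route(inflow, K, X, dt, O0):
    """Linear Muskingum routing: O_{t+1} = C0*I_{t+1} + C1*I_t + C2*O_t."""
    denom = 2.0 * K * (1.0 - X) + dt
    C0 = (dt - 2.0 * K * X) / denom
    C1 = (dt + 2.0 * K * X) / denom
    C2 = (2.0 * K * (1.0 - X) - dt) / denom
    out = [O0]
    for t in range(len(inflow) - 1):
        out.append(C0 * inflow[t + 1] + C1 * inflow[t] + C2 * out[t])
    return np.array(out)

def ssq(params, inflow, observed, dt):
    """Sum of squared deviations between observed and routed outflows."""
    K, X = params
    return float(np.sum((muskingum_route(inflow, K, X, dt, observed[0]) - observed) ** 2))

# Synthetic check: generate "observed" outflow with K = 11, X = 0.13,
# then recover (K, X) by minimizing SSQ over a coarse grid.
dt = 6.0
inflow = np.array([22, 23, 35, 71, 103, 111, 109, 100, 86, 71,
                   59, 47, 39, 32, 28, 24], dtype=float)
observed = muskingum_route(inflow, K=11.0, X=0.13, dt=dt, O0=inflow[0])
grid = [(K, X) for K in np.arange(8.0, 14.01, 0.5)
               for X in np.arange(0.05, 0.301, 0.01)]
K_hat, X_hat = min(grid, key=lambda p: ssq(p, inflow, observed, dt))
```

In practice the grid search would be replaced by EMPSO or IGSA, which scale far better when the nonlinear model variants add parameters.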
Samsudin, Hayati; Auras, Rafael; Mishra, Dharmendra; Dolan, Kirk; Burgess, Gary; Rubino, Maria; Selke, Susan; Soto-Valdez, Herlinda
2018-01-01
Migration studies of chemicals from contact materials have been widely conducted due to their importance in determining the safety and shelf life of a food product in its package. The US Food and Drug Administration (FDA) and the European Food Safety Authority (EFSA) require this safety assessment for food contact materials. So, migration experiments are theoretically designed and experimentally conducted to obtain data that can be used to assess the kinetics of chemical release. In this work, a parameter estimation approach was used to review and to determine the mass transfer partition and diffusion coefficients governing the migration process of eight antioxidants from poly(lactic acid), PLA, based films into water/ethanol solutions at temperatures between 20 and 50°C. Scaled sensitivity coefficients were calculated to assess the simultaneous estimation of a number of mass transfer parameters. An optimal experimental design approach was applied to show the importance of properly designing a migration experiment. Additional parameters also provide better insights on the migration of the antioxidants. For example, the partition coefficients could be better estimated using data from the early part of the experiment instead of at the end. Experiments could be conducted for shorter periods of time, saving time and resources. Diffusion coefficients of the eight antioxidants from PLA films were between 0.2 × 10^-14 and 19 × 10^-14 m^2/s at ~40°C. The use of the parameter estimation approach provided additional and useful insights about the migration of antioxidants from PLA films. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Nele Goeyvaerts
2015-12-01
Full Text Available Dynamic transmission models are essential to design and evaluate control strategies for airborne infections. Our objective was to develop a dynamic transmission model for seasonal influenza that allows evaluating the impact of vaccinating specific age groups on the incidence of infection, disease and mortality. Projections based on such models rely heavily on assumed 'input' parameter values. In previous seasonal influenza models, these parameter values were commonly chosen ad hoc, ignoring between-season variability and without formal model validation or sensitivity analyses. We propose to directly estimate the parameters by fitting the model to age-specific influenza-like illness (ILI) incidence data over multiple influenza seasons. We used a weighted least squares (WLS) criterion to assess model fit and applied our method to Belgian ILI data over six influenza seasons. After exploring parameter importance using symbolic regression, we evaluated a set of candidate models of differing complexity according to the number of season-specific parameters. The transmission parameters (average R0, seasonal amplitude and timing of the seasonal peak, and waning rates) and the scale factor used for WLS optimization influenced the fit to the observed ILI incidence the most. Our results demonstrate the importance of between-season variability in influenza transmission, and our estimates are in line with the classification of influenza seasons according to intensity and vaccine matching.
Sanz, E.; Voss, C.I.
2006-01-01
Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only
Bjerklie, D. M.
2014-12-01
As part of a U.S. Geological Survey effort to (1) estimate river discharge in ungaged basins, (2) understand runoff quantity and timing for watersheds between gaging stations, and (3) estimate potential future streamflow, a national-scale precipitation runoff model is in development. The effort uses the USGS Precipitation Runoff Modeling System (PRMS) model. The model development strategy includes methods to assign hydrologic routing coefficients a priori from national-scale GIS databases. Once developed, the model can serve as an initial baseline for more detailed and locally/regionally calibrated models designed for specific projects and purposes. One of the key hydrologic routing coefficients is the groundwater coefficient (gw_coef). This study estimates the gw_coef from continental US GIS data, including geology, drainage density, aquifer type, vegetation type, and baseflow index information. The gw_coef is applied in regional PRMS models and is estimated using two methods. The first method uses a statistical model to predict the gw_coef from weighted average values of surficial geologic materials, dominant aquifer type, baseflow index, vegetation type, and the drainage density. The second method computes the gw_coef directly from the physical conditions in the watershed, including the percentage of geologic material and the drainage density. The two methods are compared against the gw_coef derived from streamflow records, and tested for selected rivers in different regions of the country. To address the often weak correlation between geology and baseflow, the existence of groundwater sinks, and the complexities of groundwater flow paths, the spatial characteristics of the gw_coef prediction error were evaluated, and a correction factor developed from the spatial error distribution. This provides a consistent and improved method to estimate the gw_coef for regional PRMS models that is derived from available GIS data and physical information for watersheds.
Directory of Open Access Journals (Sweden)
Adrian Nocoń
2015-09-01
Full Text Available This paper presents an analysis of the influence of uncertainty of power system mathematical model parameters on optimised parameters of PSS2A system stabilizers. Optimisation of power system stabilizer parameters was based on polyoptimisation (multi-criteria optimisation). Optimisation criteria were determined for disturbances occurring in a multi-machine power system, when taking into account transient waveforms associated with electromechanical swings (instantaneous power, angular speed and terminal voltage waveforms of generators). A genetic algorithm with floating-point encoding, tournament selection, mean crossover and perturbative mutations, modified for the needs of the investigations, was used for optimisation. The impact of uncertainties on the quality of operation of power system stabilizers with optimised parameters has been evaluated using various deformation factors.
Sequential parameter estimation for stochastic systems
Directory of Open Access Journals (Sweden)
G. A. Kivman
2003-01-01
Full Text Available The quality of the prediction of dynamical system evolution is determined by the accuracy to which initial conditions and forcing are known. The availability of future observations permits reducing the effects of errors in assessing the external model parameters by means of a filtering algorithm. Usually, uncertainties in specifying internal model parameters describing the inner system dynamics are neglected. Since they are characterized by strongly non-Gaussian distributions (parameters are positive, as a rule), traditional Kalman filtering schemes are badly suited to reducing the contribution of this type of uncertainty to the forecast errors. An extension of the Sequential Importance Resampling filter (SIR) is proposed to this aim. The filter is verified against the Ensemble Kalman filter (EnKF) in application to the stochastic Lorenz system. It is shown that the SIR is capable of estimating the system parameters and of predicting the evolution of the system with remarkably better accuracy than the EnKF. This highlights a severe drawback of any Kalman filtering scheme: because it utilizes only the first two statistical moments in the analysis step, it is unable to deal with probability density functions badly approximated by the normal distribution.
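The SIR idea, augmenting the state with a positive, non-Gaussian parameter and letting resampling concentrate the parameter cloud, can be sketched on a scalar toy model. The AR(1) system and all noise levels below are illustrative assumptions, much simpler than the stochastic Lorenz system used in the paper:

```python
import numpy as np

def sir_filter(ys, n_particles=5000, seed=0):
    """SIR particle filter with the state augmented by an unknown positive
    parameter a of the model x_{t+1} = a*x_t + w_t,  y_t = x_t + v_t."""
    rng = np.random.default_rng(seed)
    # Positive, non-Gaussian (log-normal) prior on the parameter.
    a = rng.lognormal(mean=np.log(0.8), sigma=0.3, size=n_particles)
    x = rng.normal(0.0, 1.0, n_particles)
    for y in ys:
        x = a * x + rng.normal(0.0, 0.1, n_particles)         # propagate states
        w = np.exp(-0.5 * ((y - x) / 0.1) ** 2) + 1e-300      # importance weights
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)  # resample
        x, a = x[idx], a[idx]                                 # parameters survive with their states
    return float(a.mean())

# Simulate observations from the model with a = 0.9, then recover the parameter.
rng = np.random.default_rng(42)
x_true, ys = 1.0, []
for _ in range(100):
    x_true = 0.9 * x_true + rng.normal(0.0, 0.1)
    ys.append(x_true + rng.normal(0.0, 0.1))
a_hat = sir_filter(ys)
```

Because the parameter particles are never forced through a Gaussian update, the skewed log-normal prior is handled naturally, which is exactly where Kalman-type schemes struggle.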
Waszak, Martin R.; Fung, Jimmy
1998-01-01
This report describes the development of transfer function models for the trailing-edge and upper and lower spoiler actuators of the Benchmark Active Control Technology (BACT) wind tunnel model for application to control system analysis and design. A simple nonlinear least-squares parameter estimation approach is applied to determine transfer function parameters from frequency response data: unconstrained quasi-Newton minimization of a weighted frequency response error was employed to estimate the transfer function parameters. An analysis of actuator behavior over time, using the transfer function models to assess the effects of wear and aerodynamic load, is also presented. The frequency responses indicate consistent actuator behavior throughout the wind tunnel test and only slight degradation in effectiveness due to aerodynamic hinge loading. The resulting actuator models have been used in the design, analysis, and simulation of controllers for the BACT to successfully suppress flutter over a wide range of conditions.
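The fitting step can be sketched as quasi-Newton (BFGS) minimization of a squared frequency-response error. The second-order model structure, numerical values, and unit weighting below are assumptions for illustration, not the BACT actuator models themselves.

```python
import numpy as np
from scipy.optimize import minimize

def tf_response(p, w):
    """Hypothetical second-order model K*wn^2 / (wn^2 - w^2 + 2j*zeta*wn*w)."""
    K, wn, zeta = p
    return K * wn**2 / (wn**2 - w**2 + 2j * zeta * wn * w)

# Synthetic "measured" frequency response from known parameters plus noise
w = np.linspace(1.0, 100.0, 60)                  # rad/s
rng = np.random.default_rng(1)
measured = tf_response([1.2, 40.0, 0.6], w)
measured = measured + 0.01 * (rng.normal(size=60) + 1j * rng.normal(size=60))

def cost(p):
    err = tf_response(p, w) - measured           # complex frequency-response error
    return np.sum(np.abs(err) ** 2)              # unit weights for simplicity

fit = minimize(cost, x0=[1.0, 30.0, 0.5], method="BFGS")
```

With a reasonable starting guess, the minimizer recovers the gain, natural frequency, and damping ratio from the frequency response data.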
D'Agnese, F. A.; Faunt, C.C.; Hill, M.C.; Turner, A.K.
1999-01-01
A regional-scale, steady-state, saturated-zone ground-water flow model was constructed to evaluate potential regional ground-water flow in the vicinity of Yucca Mountain, Nevada. The model was limited to three layers in an effort to evaluate the characteristics governing large-scale subsurface flow. Geoscientific information systems (GSIS) were used to characterize the complex surface and subsurface hydrogeologic conditions of the area, and this characterization was used to construct likely conceptual models of the flow system. Subsurface properties in this system vary dramatically, producing high contrasts and abrupt contacts. This characteristic, combined with the large scale of the model, makes zonation the logical choice for representing the hydraulic-conductivity distribution. Different conceptual models were evaluated using sensitivity analysis and were tested by using nonlinear regression to determine parameter values that are optimal, in that they provide the best match between the measured and simulated heads and flows. The different conceptual models were judged both on the fit achieved to measured heads and spring flows and on the plausibility of the optimal parameter values. One of the conceptual models considered appears to represent the system most realistically. Any apparent model error is probably caused by the coarse vertical and horizontal discretization.
Parameter estimation applied to physiological systems
Rideout, V.C.; Beneken, J.E.W.
Parameter estimation techniques are of ever-increasing interest in the fields of medicine and biology, as greater efforts are currently being made to describe physiological systems in explicit quantitative form. Although some of the techniques of parameter estimation as developed for use in other
Composite likelihood estimation of demographic parameters
Directory of Open Access Journals (Sweden)
Garrigan Daniel
2009-11-01
Full Text Available Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequencing data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable
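The composite-likelihood idea, treating genomic regions as independent and taking the product of their marginal likelihoods, can be illustrated with a deliberately simple toy. The per-locus counts and the single shared parameter p below are hypothetical and far simpler than the two-population divergence model of the paper.

```python
import math

# Composite likelihood: sum per-locus log-likelihoods as if loci were
# independent. Toy example: estimate a shared derived-allele probability p
# from per-locus counts (k derived out of n sampled), scanning a grid.
loci = [(3, 10), (5, 10), (2, 10), (4, 10)]   # (k, n) per locus, hypothetical

def composite_loglik(p):
    return sum(k * math.log(p) + (n - k) * math.log(1.0 - p) for k, n in loci)

p_hat = max((i / 100 for i in range(1, 100)), key=composite_loglik)
print(p_hat)   # → 0.35  (14 derived alleles out of 40)
```

Because the loci are multiplied as if independent, the composite maximum coincides with the pooled-count estimate; the cost of the approximation is understated uncertainty, not a biased point estimate, in this simple case.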
Directory of Open Access Journals (Sweden)
Velislava Lubenova
2009-03-01
Full Text Available New control inputs are introduced in the 5th order mass-balance non-linear model of the anaerobic digestion, which reflects the addition of stimulating substances (acetate and glucose. Laboratory experiments have been done with step-wise and pulse changes of these new inputs. On the basis of the step responses of the measured variables (biogas flow rate and acetate concentration in the bioreactor and iterative methodology, involving non-linear optimisation and simulations, the model coefficients have been estimated. The model validity has been proved by another set of experiments. The observation part is built on a two-step structure. One estimator and two observers are designed on the basis of this process model. Their stability has been proved and their performances have been investigated with experimental data and simulations.
Toward unbiased estimations of the statefinder parameters
Aviles, Alejandro; Klapp, Jaime; Luongo, Orlando
2017-09-01
With the use of simulated supernova catalogs, we show that the statefinder parameters turn out to be poorly estimated and biased by standard cosmography. To this end, we compute their standard deviations and several bias statistics on cosmologies near the concordance model, demonstrating that these are very large and make standard cosmography unsuitable for future and wider compilations of data. To overcome this issue, we propose a new method that consists in introducing the series of the Hubble function into the luminosity distance, instead of considering the usual direct Taylor expansions of the luminosity distance. Moreover, in order to speed up the numerical computations, we estimate the coefficients of our expansions in a hierarchical manner, in which the order of the expansion depends on the redshift of every single piece of data. In addition, we propose two hybrid methods that incorporate standard cosmography at low redshifts. The methods presented here perform better than the standard approach of cosmography in both the errors and the bias of the estimated statefinders. We further propose a one-parameter diagnostic to reject non-viable methods in cosmography.
Kittisuwan, Pichid
2015-03-01
The application of image processing in industry has shown remarkable success over the last decade, for example, in security and telecommunication systems. The denoising of natural images corrupted by Gaussian noise is a classical problem in image processing, and denoising is an indispensable step in many processing pipelines. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of Bayesian image denoising algorithms is estimating the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance, with a generalized Gamma density prior for the local observed variance and a Laplacian or Gaussian distribution for the noisy wavelet coefficients. Our selection of prior distribution is motivated by the efficient and flexible properties of the generalized Gamma density. The experimental results show that the proposed method yields good denoising results.
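As a building block, the MAP estimate of a wavelet coefficient under a Laplacian signal prior and additive Gaussian noise reduces to soft thresholding. The sketch below shows only that classical component, not the paper's generalized-Gamma local-variance model; the parameter names are illustrative.

```python
import numpy as np

def map_shrink(y, noise_var, signal_scale):
    """MAP estimate of a coefficient y under a Laplacian(0, signal_scale) prior
    and Gaussian noise: soft thresholding at t = noise_var / signal_scale."""
    t = noise_var / signal_scale
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

# Small coefficients (mostly noise) are zeroed, large ones shrunk toward zero
denoised = map_shrink(np.array([3.0, -0.5, 1.5]), noise_var=1.0, signal_scale=1.0)
```

Estimating `noise_var` and the prior's scale locally from the observed coefficients is exactly the parameter-estimation step the abstract identifies as the crux of such algorithms.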
Parameter estimation and prediction of nonlinear biological systems: some examples
Doeswijk, T.G.; Keesman, K.J.
2006-01-01
Rearranging and reparameterizing a discrete-time nonlinear model with polynomial quotient structure in input, output and parameters (xk = f(Z, p)) leads to a model linear in its (new) parameters. As a result, the parameter estimation problem becomes a so-called errors-in-variables problem for which
Directory of Open Access Journals (Sweden)
Joan Guàrdia-Olmos
2018-02-01
Full Text Available Structural Equation Modeling (SEM) is among the most extensively applied statistical techniques in the study of human behavior in the fields of Neuroscience and Cognitive Neuroscience. This paper reviews the application of SEM to estimate functional and effective connectivity models in work published since 2001. The articles analyzed were compiled from Journal Citation Reports, PsycInfo, Pubmed, and Scopus, after searching with the following keywords: fMRI, SEM, and connectivity. Results: 100 papers were found, of which 25 were rejected due to a lack of sufficient data on basic aspects of the construction of the SEM. The other 75 were included and contained a total of 160 models to analyze, since most papers included more than one model. The analysis of the explained variance (R2) of each model shows an effect of the type of design used, the type of population studied, the type of study, the existence of recursive effects in the model, and the number of paths defined in the model. Along with these comments, a series of recommendations is included for the use of SEM to estimate functional and effective connectivity models.
Energy Technology Data Exchange (ETDEWEB)
Meyer, Philip D.; Ye, Ming; Rockhold, Mark L.; Neuman, Shlomo P.; Cantrell, Kirk J.
2007-07-30
This report to the Nuclear Regulatory Commission (NRC) describes the development and application of a methodology to systematically and quantitatively assess predictive uncertainty in groundwater flow and transport modeling that considers the combined impact of hydrogeologic uncertainties associated with the conceptual-mathematical basis of a model, model parameters, and the scenario to which the model is applied. The methodology is based on an extension of a Maximum Likelihood implementation of Bayesian Model Averaging. Model uncertainty is represented by postulating a discrete set of alternative conceptual models for a site with associated prior model probabilities that reflect a belief about the relative plausibility of each model based on its apparent consistency with available knowledge and data. Posterior model probabilities are computed and parameter uncertainty is estimated by calibrating each model to observed system behavior; prior parameter estimates are optionally included. Scenario uncertainty is represented as a discrete set of alternative future conditions affecting boundary conditions, source/sink terms, or other aspects of the models, with associated prior scenario probabilities. A joint assessment of uncertainty results from combining model predictions computed under each scenario using as weights the posterior model and prior scenario probabilities. The uncertainty methodology was applied to modeling of groundwater flow and uranium transport at the Hanford Site 300 Area. Eight alternative models representing uncertainty in the hydrogeologic and geochemical properties as well as the temporal variability were considered. Two scenarios representing alternative future behavior of the Columbia River adjacent to the site were considered. The scenario alternatives were implemented in the models through the boundary conditions. Results demonstrate the feasibility of applying a comprehensive uncertainty assessment to large-scale, detailed groundwater flow
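The weighting arithmetic behind such a joint assessment can be sketched in a few lines: posterior model probabilities from calibrated likelihoods and model priors, then a prediction averaged over both models and scenarios. All numbers below are hypothetical, and a real Maximum Likelihood BMA implementation would use an information-criterion approximation to the model likelihoods rather than raw values.

```python
import numpy as np

# Posterior model probabilities: p(Mk | D) ∝ p(D | Mk) * p(Mk)
log_lik = np.array([-120.3, -118.7, -125.0])     # from calibrating each model
prior_model = np.array([0.5, 0.3, 0.2])          # prior plausibility of each model
lp = log_lik + np.log(prior_model)
post_model = np.exp(lp - lp.max())               # subtract max for stability
post_model /= post_model.sum()

# Joint prediction: weight model-by-scenario predictions with posterior
# model probabilities and prior scenario probabilities.
pred = np.array([[1.0, 1.2],                     # rows: models, cols: scenarios
                 [0.8, 1.1],
                 [1.5, 1.9]])
prior_scenario = np.array([0.7, 0.3])
bma_pred = post_model @ pred @ prior_scenario
```

The spread of `pred` around `bma_pred`, weighted the same way, is what quantifies the combined model-plus-scenario contribution to predictive uncertainty.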
Xiao, D.; Shi, Y.; Li, L.
2015-12-01
Parameter estimation is generally required for land surface models (LSMs) and hydrologic models to reproduce observed water and energy fluxes in different watersheds. Using soil moisture observations for parameter estimation, in addition to discharge and land surface temperature observations, can improve the prediction of land surface and subsurface processes. Because of their limited spatial representativeness, point measurements cannot capture watershed-scale soil moisture conditions and may lead to notable bias in watershed soil moisture predictions if used for model calibration. The intermediate-scale cosmic-ray soil moisture observing system (COSMOS) provides an average soil water content measurement over a footprint of about 0.34 km2 and depths of up to 50 cm, and may provide better calibration data for low-order watersheds. In this study, we will test using COSMOS observations for Flux-PIHM parameter and state estimation via the ensemble Kalman filter (EnKF). Flux-PIHM is a physically based land surface hydrologic model that couples the Penn State Integrated Hydrologic Model (PIHM) with the Noah land surface model. Synthetic data experiments will be performed at the Shale Hills watershed (area: 0.08 km2, smaller than the COSMOS footprint) and the Garner Run watershed (1.34 km2, larger than the COSMOS footprint) in the Shale Hills Susquehanna Critical Zone Observatory in central Pennsylvania. COSMOS observations will be assimilated into Flux-PIHM using the EnKF, in addition to discharge and land surface temperature (LST) observations. The accuracy of the EnKF-estimated parameters and of the water and energy flux predictions will be evaluated. In addition, the results will be compared with assimilating point soil moisture measurements (in addition to discharge and LST), to assess the effects of using different scales of soil moisture observations for parameter estimation. The results at Shale Hills and Garner Run will be compared to test whether the performance of COSMOS data assimilation is affected by the size of
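A sketch of the stochastic EnKF analysis step with state augmentation, the mechanism by which an observation of one quantity (e.g. soil moisture) can correct an unobserved model parameter through the sampled ensemble correlations. The linear observation operator and toy numbers are assumptions for illustration, not Flux-PIHM specifics.

```python
import numpy as np

def enkf_update(ens, obs, H, obs_var, rng):
    """Stochastic EnKF analysis step. ens: (n_ens, n_state); appending model
    parameters to the state vector lets the same update estimate them."""
    n = ens.shape[0]
    Hx = ens @ H.T                                 # predicted observations
    X = ens - ens.mean(axis=0)
    Y = Hx - Hx.mean(axis=0)
    Pxy = X.T @ Y / (n - 1)                        # state-obs cross covariance
    Pyy = Y.T @ Y / (n - 1) + obs_var * np.eye(len(obs))
    K = Pxy @ np.linalg.inv(Pyy)                   # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=(n, len(obs)))
    return ens + (perturbed - Hx) @ K.T

# Observe only the first component; the second (an augmented "parameter")
# is corrected through its correlation with the first.
rng = np.random.default_rng(7)
ens = rng.normal(0.0, 1.0, (500, 2))
ens[:, 1] = 0.9 * ens[:, 0] + 0.1 * rng.normal(size=500)
post = enkf_update(ens, np.array([2.0]), np.array([[1.0, 0.0]]), 0.1, rng)
```

After the update, the mean of the unobserved second component moves with the observed first one, which is exactly how discharge, LST, or COSMOS data can constrain model parameters in the augmented filter.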
Pham, B. H.; Brancherie, D.; Davenne, L.; Ibrahimbegovic, A.
2013-03-01
In this work, we present a new finite element formulation of the (geometrically linear) Timoshenko beam model for ultimate load computation of reinforced concrete frames. The proposed model combines descriptions of the diffuse plastic failure in the beam-column, followed by the creation of plastic hinges due to the failure or collapse of the concrete and of the re-bars. A modified multi-scale analysis is performed in order to identify the parameters for the stress-resultant-based macro model, which is used to describe the behavior of the Timoshenko beam element. For clarity, we focus on micro-scale models using multi-fiber elements with embedded displacement discontinuities in mode I, which would typically be triggered by the bending failure mode. A more general micro-scale model capable of describing shear failure is described in Ibrahimbegovic et al. (Int J Numer Methods Eng 83(4):452-481, 2010).
A Novel Nonlinear Parameter Estimation Method of Soft Tissues
Directory of Open Access Journals (Sweden)
Qianqian Tong
2017-12-01
Full Text Available The elastic parameters of soft tissues are important for medical diagnosis and virtual surgery simulation. In this study, we propose a novel nonlinear parameter estimation method for soft tissues. Firstly, an in-house data acquisition platform was used to obtain external forces and their corresponding deformation values. To provide highly precise data for estimating nonlinear parameters, the measured forces were corrected using the constructed weighted combination forecasting model based on a support vector machine (WCFM_SVM. Secondly, a tetrahedral finite element parameter estimation model was established to describe the physical characteristics of soft tissues, using the substitution parameters of Young’s modulus and Poisson’s ratio to avoid solving complicated nonlinear problems. To improve the robustness of our model and avoid poor local minima, the initial parameters solved by a linear finite element model were introduced into the parameter estimation model. Finally, a self-adapting Levenberg–Marquardt (LM algorithm was presented, which is capable of adaptively adjusting iterative parameters to solve the established parameter estimation model. The maximum absolute error of our WCFM_SVM model was less than 0.03 Newton, resulting in more accurate forces in comparison with other correction models tested. The maximum absolute error between the calculated and measured nodal displacements was less than 1.5 mm, demonstrating that our nonlinear parameters are precise.
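The self-adapting damping idea can be sketched as a minimal Levenberg-Marquardt loop: halve the damping factor after an accepted step, double it after a rejected one. The toy exponential fit below is an illustration under assumed data, not the paper's finite element parameter estimation model.

```python
import numpy as np

def levenberg_marquardt(resid, jac, p0, n_iter=60, lam=1e-2):
    """Minimal LM loop with adaptive damping: trust the Gauss-Newton model
    more (smaller lam) on success, damp harder on failure."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r, J = resid(p), jac(p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), -J.T @ r)
        if np.sum(resid(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5          # accept step
        else:
            lam *= 2.0                            # reject step
    return p

# Toy nonlinear fit: y = a * exp(b * x) with known synthetic data
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)
resid = lambda p: p[0] * np.exp(p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(p[1] * x), p[0] * x * np.exp(p[1] * x)])
p_hat = levenberg_marquardt(resid, jac, [1.0, -1.0])
```

In the paper's setting, `resid` would compare finite-element-predicted nodal displacements with measured ones, and a linear-model solution would supply the initial guess `p0`.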
Flores, E B; van der Werf, J
2015-08-01
Heritabilities and genetic correlations for milk production traits were estimated from first-parity test day records on 1022 Philippine dairy buffalo cows. Traits analysed included milk (MY), fat (FY) and protein (PY) yields, and fat (Fat%) and protein (Prot%) concentrations. Varying orders of Legendre polynomials (Leg(m)) as well as the Wilmink function (Wil) were used in random regression models. These models were compared based on log likelihood, Akaike's information criterion, Bayesian information criterion and genetic variance estimates. Six residual variance classes were sufficient for MY, FY, PY and Fat%, while seven residual classes were needed for Prot%. Multivariate analysis gave higher estimates of genetic variance and heritability compared with univariate analysis for all traits. Heritability estimates ranged from 0.25 to 0.44, 0.13 to 0.31 and 0.21 to 0.36 for MY, FY and PY, respectively. Wilmink's function was the better-fitting function for additive genetic effects for all traits. It was also the preferred function for permanent environment effects for Fat% and Prot%, but for MY, FY and PY, Leg(m) was the appropriate function. Genetic correlations of MY with FY and PY were high, and they were moderately negative with Fat% and Prot%. To prevent deterioration in Fat% and Prot% and improve milk quality, more weight should be applied to milk component traits. © 2015 Blackwell Verlag GmbH.
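In a random regression test-day model, the Legendre covariates are obtained by mapping days in milk onto [-1, 1] and evaluating the polynomials up to the chosen order. The lactation range (5 to 305 days) below is a common convention and an assumption here, not a value stated in the abstract.

```python
import numpy as np

def legendre_covariates(dim, dim_min=5.0, dim_max=305.0, order=3):
    """Covariates for a random-regression model: standardize days in milk (dim)
    to t in [-1, 1], then evaluate Legendre polynomials L_0..L_order at t."""
    t = 2.0 * (np.atleast_1d(dim) - dim_min) / (dim_max - dim_min) - 1.0
    return np.polynomial.legendre.legvander(t, order)
```

Each animal's random regression coefficients multiply this row of covariates, so the fitted genetic effect varies smoothly over the lactation; the Wilmink alternative simply swaps in a different set of basis functions of days in milk.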
Gait parameter and event estimation using smartphones.
Pepa, Lucia; Verdini, Federica; Spalazzi, Luca
2017-09-01
The use of smartphones can greatly help with gait parameter estimation during daily living, but their accuracy needs a deeper evaluation against a gold standard. The objective of the paper is a step-by-step assessment of smartphone performance in heel strike, step count, step period, and step length estimation. The influence of smartphone placement and orientation on estimation performance is evaluated as well. This work relies on a smartphone app developed to acquire, process, and store inertial sensor data and rotation matrices about device position. Smartphone alignment was evaluated by expressing the acceleration vector in three reference frames. Two smartphone placements were tested. Three methods for heel strike detection were considered. On the basis of estimated heel strikes, step count is performed, step period is obtained, and the inverted pendulum model is applied for step length estimation. The Pearson correlation coefficient, absolute and relative errors, ANOVA, and Bland-Altman limits of agreement were used to compare smartphone estimates with stereophotogrammetry on eleven healthy subjects. High correlations were found between smartphone and stereophotogrammetric measures: up to 0.93 for step count, 0.99 for heel strike, 0.96 for step period, and 0.92 for step length. Error ranges are comparable to those in the literature. Smartphone placement did not affect the performance. The major influence of acceleration reference frames and heel strike detection method was found in step count. This study provides detailed information about the expected accuracy when a smartphone is used as a gait monitoring tool. The obtained results encourage real-life applications. Copyright © 2017 Elsevier B.V. All rights reserved.
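The inverted pendulum model for step length reduces to a one-line formula: with pendulum (leg) length l and vertical center-of-mass excursion h, step length is 2*sqrt(2*l*h - h^2). The numeric values below are illustrative, not from the study.

```python
import math

def step_length(leg_length_m, com_vertical_excursion_m):
    """Inverted pendulum model of gait: step length from leg length l and the
    vertical center-of-mass excursion h (both in metres, with h << l)."""
    l, h = leg_length_m, com_vertical_excursion_m
    return 2.0 * math.sqrt(2.0 * l * h - h * h)

# Typical values: 0.9 m leg, 3 cm vertical excursion -> roughly half a metre
sl = step_length(0.9, 0.03)
```

In practice h is obtained by double-integrating the vertical acceleration between heel strikes, which is why accurate heel strike detection and a correct acceleration reference frame matter so much upstream of this formula.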
Directory of Open Access Journals (Sweden)
S. Yu. Makarov
2015-01-01
Full Text Available The article dwells on the development of new non-invasive methods for measuring the optical parameters of biological tissues, which are responsible for the scattering and absorption of monochromatic radiation. It is known from the theory of radiative transfer [1] that for strongly scattering media, to which many biological tissues belong, such parameters are the parameters of the diffusion approximation, as well as the scattering coefficient and the anisotropy parameter. Based on statistical modeling, the paper examines the spread of non-directional radiation from a Lambertian light beam with natural polarization illuminating the surface of the biological tissue. The statistical modeling is based on the Monte Carlo method [2]. To obtain correct energy coefficient values for Fresnel reflection and transmission when simulating such radiation by the Monte Carlo method, the author uses his previously derived statistical representation for the incidence of model photons [3]. The paper describes in detail the principle of fixing the power transmitted by the non-directional radiation into the biological tissue [3], and the power-balance equations for this case. Further, the paper describes the diffusion approximation of radiative transfer theory, often used to simulate radiation propagation in strongly scattering media, and shows its application in the case of fixed power transmitted into the tissue. An approximating expression is used to represent the uneven power distribution under the condition of a fixed total input power. The paper reveals peculiarities in the behavior of the solution on the surface of the biological tissue, inside and outside the incident beam. It is shown that the solution in the region outside the incident beam (especially far away from it) depends essentially neither on the particular power distribution across the illuminated surface of the tissue nor on the refractive index of the biological tissue. It is determined only by
A Comparative Study of Distribution System Parameter Estimation Methods
Energy Technology Data Exchange (ETDEWEB)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
Statistical estimation of nuclear reactor dynamic parameters
International Nuclear Information System (INIS)
Cummins, J.D.
1962-02-01
This report discusses the study of noise in nuclear reactors and associated power plant. The report is divided into three distinct parts. In the first part, parameters which influence the dynamic behaviour of some reactors are specified and their effect on dynamic performance is described. Methods of estimating dynamic parameters using statistical signals are described in detail, together with descriptions of the usefulness of the results, their accuracy and related topics. Some experiments which have been, and which might be, performed on nuclear reactors are described. In the second part of the report a digital computer programme is described. The programme derives the correlation functions and the spectra of signals, and will compute the frequency response, both gain and phase, for physical items of plant for which simultaneous recordings of input and output signal variations have been made. Estimates of the accuracy of the correlation functions and the spectra may be computed using the programme, and the amplitude distribution of signals may also be computed. The programme is written in autocode for the Ferranti Mercury computer. In the third part of the report a practical example of the use of the method and the digital programme is presented. In order to eliminate difficulties of interpretation, a very simple plant model was chosen, i.e. a simple first order lag. Several interesting properties of statistical signals were measured and are discussed. (author)
Ojiambo, Peter S; Gent, David H; Mehra, Lucky K; Christie, David; Magarey, Roger
2017-01-01
Empirical and mechanistic modeling indicate that pathogens transmitted via aerially dispersed inoculum follow a power law, resulting in dispersive epidemic waves. The spread parameter (b) of the power law model, which is an indicator of the distance of the epidemic wave front from an initial focus per unit time, has been found to be approximately 2 for several animal and plant diseases over a wide range of spatial scales under conditions favorable for disease spread. Although disease spread and epidemic expansion can be influenced by several factors, the stability of the parameter b over multiple epidemic years has not been determined. Additionally, the size of the initial epidemic area is expected to be strongly related to the final epidemic extent, but the stability of this relationship is also not well established. Here, empirical data of cucurbit downy mildew epidemics collected from 2008 to 2014 were analyzed using a spatio-temporal model of disease spread that incorporates logistic growth in time with a power law function for dispersal. Final epidemic extent ranged from 4.16 × 10⁸ km² in 2012 to 6.44 × 10⁸ km² in 2009. Current epidemic extent became significantly associated (P power law model. These results suggest that the spread parameter b may not be stable over multiple epidemic years. However, b ≈ 2 may be considered the lower limit of the distance traveled by epidemic wave-fronts for aerially transmitted pathogens that follow a power law dispersal function.
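Estimating the spread parameter b of a power-law dispersal gradient is, in the simplest noiseless case, a linear regression on the log-log scale. The synthetic gradient below assumes b = 2, the reference value the text highlights; the distances and scale constant are hypothetical.

```python
import numpy as np

# Power-law dispersal gradient y = c * d^(-b): on log axes this is a line
# with slope -b, so the spread parameter falls out of a linear fit.
d = np.array([10.0, 20.0, 40.0, 80.0, 160.0])    # distance from focus (km)
y = 5000.0 * d ** -2.0                           # synthetic incidence, b = 2
slope, intercept = np.polyfit(np.log(d), np.log(y), 1)
b_hat = -slope
print(round(b_hat, 2))   # → 2.0
```

With real surveillance data the points scatter around the line, and fitting the full spatio-temporal model (logistic growth in time times the power-law kernel) replaces this two-parameter regression.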
On Carleman estimates with two large parameters
Energy Technology Data Exchange (ETDEWEB)
Le Rousseau, Jerome, E-mail: jlr@univ-orleans.fr [Université d'Orléans, Laboratoire Mathématiques et Applications, Physique Mathématique d'Orléans, CNRS UMR 6628, Fédération Denis-Poisson, FR CNRS 2964, B.P. 6759, 45067 Orléans cedex 2 (France)]
2011-04-01
We provide a general framework for the analysis and the derivation of Carleman estimates with two large parameters. For an appropriate form of weight functions strong pseudo-convexity conditions are shown to be necessary and sufficient.
International Nuclear Information System (INIS)
Maurin, D.
2001-02-01
Dark matter is present at numerous scales of the universe (galaxy, cluster of galaxies, the universe as a whole). This matter plays an important role in cosmology and cannot be totally explained by conventional physics. From a particle physics point of view, there exists an extension of the standard model, supersymmetry, which predicts under certain conditions the existence of new stable and massive particles that interact weakly with ordinary matter. Apart from direct detection in accelerators, various indirect astrophysical detections are possible. This thesis focuses on one particular signature: the disintegration of these particles could give antiprotons, which should be measurable in cosmic rays. The present study evaluates the background corresponding to this signal, i.e. antiprotons produced in the interactions between cosmic rays and interstellar matter. In particular, since the uncertainties in this background are correlated with the uncertainties in the diffusion parameters, the major part of this thesis is devoted to nuclei propagation. The first third of the thesis introduces the propagation of cosmic rays in our galaxy, emphasizing the nuclear reactions responsible for nuclei fragmentation. In the second third, different models are reviewed, and in particular the links between the leaky box model and the diffusion model are recalled (re-acceleration and convection are also discussed). This leads to a qualitative discussion of the information that one can infer from the propagation of these nuclei. In the last third, we finally present detailed solutions of the two-dimensional diffusion model, along with the constraints obtained on the propagation parameters. These constraints are applied to the antiproton background signal, which concludes the work done in this thesis. The propagation code for nuclei and antiprotons used here has proven its ability in data analysis. It would probably be of interest for the analysis of the cosmic ray data which will be taken by the AMS experiment on
Association measures and estimation of copula parameters ...
African Journals Online (AJOL)
We apply the inversion method of estimation, with several combinations of two among the four most popular association measures, to estimate the parameters of copulas in the case of bivariate distributions. We carry out a simulation study with two examples, namely Farlie-Gumbel-Morgenstern and Marshall-Olkin ...
Thai, Hoai-Thu; Mentré, France; Holford, Nicholas H G; Veyrat-Follet, Christine; Comets, Emmanuelle
2014-02-01
Bootstrap methods are used in many disciplines to estimate the uncertainty of parameters, including in multi-level or linear mixed-effects models. Residual-based bootstrap methods, which resample both random effects and residuals, are an alternative to the case bootstrap, which resamples the individuals. Most PKPD applications use the case bootstrap, for which software is available. In this study, we evaluated the performance of three bootstrap methods (case bootstrap, nonparametric residual bootstrap and parametric bootstrap) in a simulation study and compared them to an asymptotic method (Asym) for estimating the uncertainty of parameters in nonlinear mixed-effects models (NLMEM) with heteroscedastic error. The simulation used the PK model for aflibercept, an anti-angiogenic drug, as an example. As expected, we found that the bootstrap methods provided better estimates of uncertainty for parameters in NLMEM with high nonlinearity and balanced designs compared to the Asym, as implemented in MONOLIX. Overall, the parametric bootstrap performed better than the case bootstrap, as the true model and variance distribution were used. However, the case bootstrap is faster and simpler, as it makes no assumptions on the model and preserves both between-subject and residual variability in one resampling step. The performance of the nonparametric residual bootstrap was found to be limited when applied to NLMEM, due to its failure to reflate the variance before resampling in unbalanced designs, where the Asym and the parametric bootstrap performed well and better than the case bootstrap even with stratification.
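A minimal sketch of the case bootstrap, the variant the abstract notes as fastest and simplest: whole subjects are resampled with replacement, so each subject's repeated measurements stay together and both between-subject and residual variability are preserved in one step. The data and estimator below are hypothetical stand-ins for a mixed-effects fit.

```python
import numpy as np

def case_bootstrap_ci(data_by_subject, estimator, n_boot=1000, seed=0):
    """Case bootstrap: resample subjects (not observations) with replacement
    and re-apply the estimator; return a 95% percentile interval."""
    rng = np.random.default_rng(seed)
    n = len(data_by_subject)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        stats.append(estimator([data_by_subject[i] for i in idx]))
    return np.percentile(stats, [2.5, 97.5])

# Toy repeated-measures data: one list of observations per subject
data = [[4, 5], [5, 6], [6, 7], [3, 4], [7, 8], [5, 5]]
mean_of_subject_means = lambda s: np.mean([np.mean(subj) for subj in s])
lo, hi = case_bootstrap_ci(data, mean_of_subject_means, n_boot=500, seed=1)
```

For an NLMEM, `estimator` would refit the model to the resampled data set, which is why the case bootstrap is simple to describe but computationally heavy in practice.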
Estimation of Modal Parameters and their Uncertainties
DEFF Research Database (Denmark)
Andersen, P.; Brincker, Rune
1999-01-01
In this paper it is shown how to estimate the modal parameters as well as their uncertainties using the prediction error method of a dynamic system on the basis of output measurements only. The estimation scheme is assessed by means of a simulation study. As a part of the introduction, an example...... is given showing how the uncertainty estimates can be used in applications such as damage detection....
Response model parameter linking
Barrett, M.L.D.
2015-01-01
With a few exceptions, the problem of linking item response model parameters from different item calibrations has been conceptualized as an instance of the problem of equating observed scores on different test forms. This thesis argues, however, that the use of item response models does not require
Perry, Russell W.; Plumb, John M.; Huntington, Charles
2015-01-01
To estimate the parameters that govern mass- and temperature-dependent growth, we conducted a meta-analysis of existing growth data from juvenile Chinook Salmon Oncorhynchus tshawytscha that were fed an ad libitum ration of a pelleted diet. Although the growth of juvenile Chinook Salmon has been well studied, research has focused on a single population, a narrow range of fish sizes, or a narrow range of temperatures. Therefore, we incorporated the Ratkowsky model for temperature-dependent growth into an allometric growth model; this model was then fitted to growth data from 11 data sources representing nine populations of juvenile Chinook Salmon. The model fit the growth data well, explaining 98% of the variation in final mass. The estimated allometric mass exponent (b) was 0.338 (SE = 0.025), similar to estimates reported for other salmonids. This estimate of b will be particularly useful for estimating mass-standardized growth rates of juvenile Chinook Salmon. In addition, the lower thermal limit, optimal temperature, and upper thermal limit for growth were estimated to be 1.8°C (SE = 0.63°C), 19.0°C (SE = 0.27°C), and 24.9°C (SE = 0.02°C), respectively. By taking a meta-analytical approach, we were able to provide a growth model that is applicable across populations of juvenile Chinook Salmon receiving an ad libitum ration of a pelleted diet.
Wang, Ruofan; Wang, Jiang; Deng, Bin; Liu, Chen; Wei, Xile; Tsang, K. M.; Chan, W. L.
2014-03-01
A combined method composed of the unscented Kalman filter (UKF) and a synchronization-based method is proposed for estimating electrophysiological variables and parameters of a thalamocortical (TC) neuron model, which is commonly used for studying Parkinson's disease because of its relay role connecting the basal ganglia and the cortex. In this work, we consider the condition in which only a time series of action potentials with heavy noise is available. Numerical results demonstrate not only that this method can successfully estimate model parameters from the extracted action potential time series, but also that it performs much better than the UKF or the synchronization-based method alone, with higher accuracy and better robustness against noise, especially under severe noise conditions. Considering the important role of the TC neuron in normal and pathological brain function, the method for estimating these critical parameters could have important implications for the study of its nonlinear dynamics and further treatment of Parkinson's disease.
Fall, C; Rogel-Dorantes, J A; Millán-Lagunas, E L; Martínez-García, C G; Silva-Hernández, B C; Silva-Trejo, F S
2014-12-01
Long-term aerobic digestion batch tests were performed on a sludge that contained mainly two fractions, a heterotrophic biomass XH and its endogenous residues XP, which were cultivated in conditions known to favor bio-storage (XSto). The objective was to model the stabilization of the sludge and determine the parameters of the endogenous decay processes, based on simultaneous measurements of the chemical oxygen demand (COD) and oxygen uptake rates (OUR). The respirograms were shown to have a two-phase structure that was describable with activated sludge model 3 (ASM3), but not with ASM1. Comparing the information from the COD and OUR data suggested the presence of two different groups of heterotrophs (XHa and XHb), one that decays with oxygen consumption and another without using O2. A modified ASM3 model was proposed, which was able to fit the OUR and COD data from the digesters, as well as cases from the literature. Copyright © 2014 Elsevier Ltd. All rights reserved.
Bayesian Parameter Estimation for Heavy-Duty Vehicles
Energy Technology Data Exchange (ETDEWEB)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. The modeled road load is then compared to the measured load to evaluate the probability of each parameter set. Acceptance of a proposed parameter set is determined by its probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of the estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
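The accept/reject step described above is a Metropolis-style chain. A minimal sketch follows, using an illustrative road load equation F = m*a + 0.5*rho*CdA*v^2 + Crr*m*g and made-up drive-cycle data; the report's actual equation variant, data, and noise model are not reproduced here.

```python
import math
import random

random.seed(7)
g, rho = 9.81, 1.2  # gravity (m/s^2), air density (kg/m^3)

# Synthetic drive-cycle samples (speed m/s, accel m/s^2) and "measured" road
# load from assumed true parameters; all numbers are illustrative.
true = {"m": 15000.0, "CdA": 6.0, "Crr": 0.007}
cycle = [(random.uniform(5, 25), random.uniform(-1, 1)) for _ in range(200)]

def road_load(p, v, a):
    return p["m"] * a + 0.5 * rho * p["CdA"] * v ** 2 + p["Crr"] * p["m"] * g

sigma = 500.0  # assumed measurement noise (N)
meas = [road_load(true, v, a) + random.gauss(0, sigma) for v, a in cycle]

def log_prob(p):
    # Gaussian log-likelihood (up to a constant) of measured vs modeled load
    return -sum((m - road_load(p, v, a)) ** 2
                for m, (v, a) in zip(meas, cycle)) / (2 * sigma ** 2)

# Metropolis chain: propose a parameter set, accept on the probability ratio.
cur = {"m": 12000.0, "CdA": 5.0, "Crr": 0.01}
cur_lp = log_prob(cur)
chain = []
for _ in range(3000):
    prop = {"m": cur["m"] + random.gauss(0, 100),
            "CdA": cur["CdA"] + random.gauss(0, 0.1),
            "Crr": cur["Crr"] + random.gauss(0, 0.0005)}
    lp = log_prob(prop)
    if math.log(random.random()) < lp - cur_lp:
        cur, cur_lp = prop, lp
    chain.append(cur["m"])

post = chain[1000:]  # discard burn-in
m_est = sum(post) / len(post)
print(f"posterior mean mass ~ {m_est:.0f} kg")
```

The retained chain is a sample from the parameter distribution, so its spread (not just its mean) conveys the estimate quality the abstract emphasizes.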
On-Line Estimation of Allan Variance Parameters
National Research Council Canada - National Science Library
Ford, J
1999-01-01
... (Inertial Measurement Unit) gyros and accelerometers. The on-line method proposes a state space model and parameter estimators for quantities previously measured using off-line data techniques such as the Allan variance graph...
On the Nature of SEM Estimates of ARMA Parameters.
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2002-01-01
Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…
Schubert, J. E.; Sanders, B. F.
2011-12-01
and drawbacks of each method, and to recommend best practices for future studies. The Baldwin Hills dam-break flood is interesting for a couple of reasons. First, the flood caused high-velocity, rapidly varied flow through a residential neighborhood and extensive damage to dozens of residential structures. These conditions pose a challenge for many numerical models, so the test is a rigorous one. Second, previous research has shown that flood extent predictions are sensitive to topographic data and stream flow predictions are sensitive to resistance parameters. Given that the representation of buildings affects the modeling of topography and resistance, a sensitivity to the representation of buildings is expected. Lastly, the site is supported by excellent geospatial data, including validation datasets, made available through the Los Angeles County Imagery Acquisition Consortium (LAR-IAC), a joint effort of many public agencies in Los Angeles County to provide county-wide data. Hence, a broader aim of this study is to characterize the most useful aspects of the LAR-IAC data from a flood mapping perspective.
Bayesian estimation of Weibull distribution parameters
International Nuclear Information System (INIS)
Bacha, M.; Celeux, G.; Idee, E.; Lannoy, A.; Vasseur, D.
1994-11-01
In this paper, we present the SEM (Stochastic Expectation Maximization) and WLB-SIR (Weighted Likelihood Bootstrap - Sampling Importance Re-sampling) methods, which are used to estimate Weibull distribution parameters when data are heavily censored. The second method is based on Bayesian inference and allows available prior information on the parameters to be taken into account. An application of this method with real data, provided by nuclear power plant operation feedback analysis, has been carried out. (authors). 8 refs., 2 figs., 2 tabs
Robust Parameter and Signal Estimation in Induction Motors
DEFF Research Database (Denmark)
Børsting, H.
This thesis deals with theories and methods for robust parameter and signal estimation in induction motors. The project originates in industrial interests concerning sensor-less control of electrical drives. During the work, some general problems concerning estimation of signals and parameters...... in nonlinear systems have been exposed. The main objectives of this project are: - analysis and application of theories and methods for robust estimation of parameters in a model structure, obtained from knowledge of the physics of the induction motor. - analysis and application of theories and methods...... for robust estimation of the rotor speed and driving torque of the induction motor based only on measurements of stator voltages and currents. Only continuous-time models have been used, which means that physically related signals and parameters are estimated directly and not indirectly by some discrete...
MCMC for parameters estimation by bayesian approach
International Nuclear Information System (INIS)
Ait Saadi, H.; Ykhlef, F.; Guessoum, A.
2011-01-01
This article discusses parameter estimation for dynamic systems by a Bayesian approach associated with Markov chain Monte Carlo (MCMC) methods. The MCMC methods are powerful for approximating complex integrals, simulating joint distributions, and estimating marginal posterior distributions or posterior means. The Metropolis-Hastings algorithm has been widely used in Bayesian inference to approximate posterior densities. Calibrating the proposal distribution is one of the main issues of MCMC simulation, in order to accelerate convergence.
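The proposal-calibration issue mentioned in this abstract can be illustrated on a toy target: a random-walk Metropolis-Hastings chain on a standard normal stand-in posterior, run with three proposal scales. Too-small steps accept almost everything but explore slowly; too-large steps are rarely accepted; an intermediate scale balances the two.

```python
import math
import random

random.seed(0)

def log_target(x):
    # Standard normal log-density (up to a constant) as a stand-in posterior
    return -0.5 * x * x

def acceptance_rate(step, n=20000):
    """Run a random-walk Metropolis-Hastings chain; return fraction accepted."""
    x, lp, acc = 0.0, log_target(0.0), 0
    for _ in range(n):
        prop = x + random.gauss(0, step)
        lpp = log_target(prop)
        if math.log(random.random()) < lpp - lp:  # Metropolis ratio
            x, lp = prop, lpp
            acc += 1
    return acc / n

for step in (0.1, 2.4, 25.0):
    print(f"proposal sd {step:5.1f} -> acceptance rate {acceptance_rate(step):.2f}")
```

The middle scale (around 2.4 times the target's standard deviation for a 1-D Gaussian) is the classic well-mixing regime; the extremes show why calibration matters for convergence speed.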
Iterative methods for distributed parameter estimation in parabolic PDE
Energy Technology Data Exchange (ETDEWEB)
Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)
1994-12-31
The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the 'forward problem' is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.
Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei
2018-01-01
Identification of time-varying modal parameters contributes to structural health monitoring, fault detection, vibration control, etc., of operational time-varying structural systems. However, it is a challenging task because no more information is available for identifying time-varying systems than for time-invariant ones. This paper presents a modal parameter estimator for linear time-varying structural systems in the case of output-only measurements, based on a vector time-dependent autoregressive model and least squares support vector machines. To reduce the computational cost, a Wendland compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach to selecting the regularization factor is adapted for the proposed estimator to replace time-consuming n-fold cross validation. A series of numerical examples illustrates the advantages of the proposed modal parameter estimator in suppressing overestimation and in handling short data records. A laboratory experiment further validates the proposed estimator.
Directory of Open Access Journals (Sweden)
Shu Wing Ho
2011-12-01
Full Text Available The valuation of options and many other derivative instruments requires an estimation of ex-ante or forward-looking volatility. This paper adopts a Bayesian approach to estimate stock price volatility. We find evidence that, overall, Bayesian volatility estimates more closely approximate the implied volatility of stocks derived from traded call and put option prices than historical volatility estimates sourced from IVolatility.com ("IVolatility"). Our evidence suggests that use of the Bayesian approach to estimate volatility can provide a more accurate measure of ex-ante stock price volatility and will be useful in the pricing of derivative securities where the implied stock price volatility cannot be observed.
Kinetic parameter estimation from attenuated SPECT projection measurements
International Nuclear Information System (INIS)
Reutter, B.W.; Gullberg, G.T.
1998-01-01
Conventional analysis of dynamically acquired nuclear medicine data involves fitting kinetic models to time-activity curves generated from regions of interest defined on a temporal sequence of reconstructed images. However, images reconstructed from the inconsistent projections of a time-varying distribution of radiopharmaceutical, acquired by a rotating SPECT system, can contain artifacts that lead to biases in the estimated kinetic parameters. To overcome this problem, the authors investigated the estimation of kinetic parameters directly from projection data by modeling the data acquisition process. To accomplish this, it was necessary to parametrize the spatial and temporal distribution of the radiopharmaceutical within the SPECT field of view. In a simulated transverse slice, kinetic parameters were estimated for simple one-compartment models for three myocardial regions of interest, as well as for the liver. Myocardial uptake and washout parameters estimated by conventional analysis of noiseless simulated data had biases ranging from 1% to 63%. Parameters estimated directly from the noiseless projection data were unbiased, as expected, since the model used for fitting was faithful to the simulation. Predicted uncertainties (standard deviations) of the parameters obtained for 500,000 detected events ranged from 2% to 31% for the myocardial uptake parameters and from 2% to 23% for the myocardial washout parameters.
Parameter estimation methods for chaotic intercellular networks.
Directory of Open Access Journals (Sweden)
Inés P Mariño
Full Text Available We have investigated simulation-based techniques for parameter estimation in chaotic intercellular networks. The proposed methodology combines a synchronization-based framework for parameter estimation in coupled chaotic systems with state-of-the-art computational inference methods borrowed from the field of computational statistics. The first method is a stochastic optimization algorithm, known as the accelerated random search method, and the other two techniques are based on approximate Bayesian computation (ABC). The latter is a general methodology for non-parametric inference that can be applied to practically any system of interest. The first ABC-based method is a Markov chain Monte Carlo scheme that generates a series of random parameter realizations for which a low synchronization error is guaranteed. We show that accurate parameter estimates can be obtained by averaging over these realizations. The second ABC-based technique is a sequential Monte Carlo scheme. The algorithm generates a sequence of "populations", i.e., sets of randomly generated parameter values, where the members of a given population attain a synchronization error smaller than that attained by members of the previous population. Again, we show that accurate estimates can be obtained by averaging over the parameter values in the last population of the sequence. We have analysed how effective these methods are from a computational perspective. For the numerical simulations we have considered a network that consists of two modified repressilators with identical parameters, coupled by the fast diffusion of the autoinducer across the cell membranes.
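The ABC idea underlying the last two methods can be sketched with plain rejection sampling on a toy model. The repressilator network and synchronization error of the paper are not reproduced here; an exponential waiting-time model and a summary-statistic distance stand in for them, and all numbers are illustrative.

```python
import random
import statistics

random.seed(3)

# Toy setting: observed data from an exponential waiting-time model with
# unknown rate; we pretend the likelihood is intractable and use a distance
# between summary statistics instead (the role played by the synchronization
# error in the paper).
true_rate = 2.0
observed = [random.expovariate(true_rate) for _ in range(100)]
s_obs = statistics.mean(observed)

def simulate(rate):
    """Summary statistic (sample mean) of a simulated dataset at this rate."""
    return statistics.mean(random.expovariate(rate) for _ in range(100))

# ABC rejection: draw a rate from the prior, keep it if the simulated summary
# lands within tolerance eps of the observed one.
eps, accepted = 0.02, []
while len(accepted) < 300:
    rate = random.uniform(0.1, 5.0)  # flat prior on the rate
    if abs(simulate(rate) - s_obs) < eps:
        accepted.append(rate)

posterior_mean = statistics.mean(accepted)
print(f"ABC posterior mean rate ~ {posterior_mean:.2f}")
```

Averaging the accepted draws gives the point estimate, mirroring how the paper averages over realizations or over the last population of the sequence.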
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Directory of Open Access Journals (Sweden)
Daigle Bernie J
2012-05-01
Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods
Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello
2017-11-01
State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state-of-energy estimation method, in combination with a physical model parameter identification method, is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameters automatically at different aging stages, a multi-step model parameter identification method based on lexicographic optimization is designed specifically for electric vehicle operating conditions. As the battery's available energy changes with different applied load current profiles, the relationship between the remaining energy loss and the state of charge, the average current, and the average squared current is modeled. The SOE at different operating conditions and different aging stages is estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for electric vehicle online applications.
Kalman filter data assimilation: targeting observations and parameter estimation.
Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex
2014-06-01
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
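The benefit of targeting observations at the largest-variance component can be illustrated with scalar Kalman variance recursions on a toy two-state random walk; this is a stand-in for the paper's LETKF experiments, and all noise levels are illustrative.

```python
import random

random.seed(5)

# Two independent scalar states, each a random walk with its own process
# noise; one noisy observation is available per step, and we may choose
# which component to observe. Error variances follow the scalar Kalman
# recursion:  predict: P <- P + Q ;  update (if observed): P <- P*R/(P + R)
Q = (0.5, 0.05)   # process noise variances (state 0 is more volatile)
R = 0.1           # observation noise variance

def run(targeted, steps=200):
    """Average total error variance when observing the max-variance (targeted)
    or a randomly chosen component at each step."""
    P = [1.0, 1.0]
    total = 0.0
    for _ in range(steps):
        P = [p + q for p, q in zip(P, Q)]            # predict
        i = P.index(max(P)) if targeted else random.randrange(2)
        P[i] = P[i] * R / (P[i] + R)                 # Kalman update
        total += sum(P)
    return total / steps

print(f"mean total variance, targeted: {run(True):.3f}")
print(f"mean total variance, random:   {run(False):.3f}")
```

Targeting the component whose forecast variance is largest keeps the total error variance lower than random placement, which is the qualitative effect the paper demonstrates for the LETKF.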
Parameter Estimation in Active Plate Structures
DEFF Research Database (Denmark)
Araujo, A. L.; Lopes, H. M. R.; Vaz, M. A. P.
2006-01-01
In this paper two non-destructive methods for elastic and piezoelectric parameter estimation in active plate structures with surface bonded piezoelectric patches are presented. These methods rely on experimental undamped natural frequencies of free vibration. The first solves the inverse problem...
Using Digital Filtration for Hurst Parameter Estimation
Directory of Open Access Journals (Sweden)
J. Prochaska
2009-06-01
Full Text Available We present a new method to estimate the Hurst parameter. The method exploits the form of the autocorrelation function for second-order self-similar processes and is based on one-pass digital filtration. We compare the performance and properties of the new method with that of the most common methods.
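The paper's one-pass filtration estimator is not reproduced here. As a baseline illustration only, the classic aggregated-variance method shows how a Hurst exponent is read off the scaling of block-mean variances: for a second-order self-similar process, the variance of the m-aggregated series scales as m**(2H - 2). White noise (true H = 0.5) serves as the test signal.

```python
import math
import random
import statistics

random.seed(11)

x = [random.gauss(0, 1) for _ in range(2 ** 14)]   # white noise, true H = 0.5

def aggregated_variance(x, m):
    """Variance of non-overlapping block means of size m."""
    blocks = [statistics.mean(x[i:i + m]) for i in range(0, len(x) - m + 1, m)]
    return statistics.pvariance(blocks)

# Log-log regression of aggregated variance against block size m.
ms = [2 ** k for k in range(1, 8)]
pts = [(math.log(m), math.log(aggregated_variance(x, m))) for m in ms]
mx = sum(p[0] for p in pts) / len(pts)
my = sum(p[1] for p in pts) / len(pts)
slope = (sum((a - mx) * (b - my) for a, b in pts)
         / sum((a - mx) ** 2 for a, _ in pts))
H = 1 + slope / 2  # slope = 2H - 2
print(f"estimated Hurst parameter H ~ {H:.2f}")
```

For genuinely long-range-dependent traffic traces, the same regression would yield H > 0.5; the filtration method of the paper targets the same quantity via the autocorrelation structure.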
Simultaneous estimation of earthquake source parameters and ...
Indian Academy of Sciences (India)
This paper presents the simultaneous estimation of source parameters and crustal Q values for small to moderate-size aftershocks (Mw 2.1–5.1) of the Mw 7.7 2001 Bhuj earthquake. The horizontal-component S-waves of 144 well-located earthquakes (2001–2010) recorded at 3–10 broadband seismograph sites in the ...
Simultaneous estimation of earthquake source parameters and ...
Indian Academy of Sciences (India)
This paper presents the simultaneous estimation of source parameters and crustal Q values for small to moderate-size aftershocks (Mw 2.1–5.1) of the Mw 7.7 2001 Bhuj earthquake. The horizontal-component. S-waves of 144 well located earthquakes (2001–2010) recorded at 3–10 broadband seismograph sites in.
DEFF Research Database (Denmark)
Ditlevsen, Susanne; Yip, Kay-Pong; Marsh, Donald J
2006-01-01
Proximal tubular pressure shows periodic self-sustained oscillations in normotensive rats but highly irregular fluctuations in spontaneously hypertensive rats (SHR). Although we have suggested that the irregular fluctuations in SHR represent low-dimensional deterministic chaos in tubuloglomerular...... mechanisms not included explicitly in the model. In its deterministic version, the model can have chaotic dynamics arising from TGF. The model introduces random fluctuations into a parameter that determines the gain of TGF. The model shows a rich variety of dynamics ranging from low-dimensional deterministic...... oscillations and chaos to high-dimensional random fluctuations. To fit the data from normotensive rats, the model must introduce only a small variation in the feedback gain, and its estimates of that gain agree well with experimental values. These results support the use of the deterministic model of nephron...
ESTIMATION OF SHEAR STRENGTH PARAMETERS OF ...
African Journals Online (AJOL)
This research work seeks to develop models for predicting the shear strength parameters (cohesion and angle of friction) of lateritic soils in central and southern areas of Delta State using artificial neural network modeling technique. The application of these models will help reduce cost and time in acquiring geotechnical ...
Parameter Estimation of Damped Compound Pendulum Using Bat Algorithm
Directory of Open Access Journals (Sweden)
Saad Mohd Sazli
2016-01-01
Full Text Available In this study, parameter identification of the damped compound pendulum system is proposed using one of the most promising nature-inspired algorithms, the Bat Algorithm (BA). The procedure used to achieve parameter identification of the experimental system consists of input-output data collection, ARX model order selection, and parameter estimation using the bat algorithm (BA) method. A PRBS signal is used as an input signal to regulate the motor speed, whereas the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the autoregressive with exogenous input (ARX) model. The performance of the model is validated using the mean squared error (MSE) between the actual and predicted output responses of the models. Finally, a comparative study is conducted between BA and the conventional estimation method (i.e., least squares). Based on the results obtained, the MSE produced by the Bat Algorithm (BA) outperformed that of the Least Squares (LS) method.
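The conventional least-squares baseline mentioned above can be sketched for a first-order ARX model driven by a PRBS-like input. The system coefficients, noise level, and data are illustrative, not the experimental pendulum.

```python
import random

random.seed(2)

# Illustrative first-order ARX system y[k] = a*y[k-1] + b*u[k-1] + e[k],
# driven by a +/-1 pseudo-random binary input, fitted by ordinary least squares.
a_true, b_true = 0.8, 0.5
u = [random.choice((-1.0, 1.0)) for _ in range(500)]   # PRBS-like input
y = [0.0]
for k in range(1, len(u)):
    y.append(a_true * y[k - 1] + b_true * u[k - 1] + random.gauss(0, 0.05))

# Normal equations for theta = (a, b): minimize sum (y[k] - a*y[k-1] - b*u[k-1])^2
s11 = sum(y[k - 1] ** 2 for k in range(1, len(y)))
s12 = sum(y[k - 1] * u[k - 1] for k in range(1, len(y)))
s22 = sum(u[k - 1] ** 2 for k in range(1, len(y)))
t1 = sum(y[k] * y[k - 1] for k in range(1, len(y)))
t2 = sum(y[k] * u[k - 1] for k in range(1, len(y)))
det = s11 * s22 - s12 * s12
a_hat = (s22 * t1 - s12 * t2) / det
b_hat = (s11 * t2 - s12 * t1) / det
print(f"a_hat = {a_hat:.3f}, b_hat = {b_hat:.3f}")
```

A metaheuristic such as BA would instead search the (a, b) space to minimize the same MSE criterion, which is what the study's comparison hinges on.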
Application of spreadsheet to estimate infiltration parameters
Zakwan, Mohammad; Muzzammil, Mohammad; Alam, Javed
2016-01-01
Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for the estimation of effective rainfall, groundwater recharge, and the design of irrigation systems. Numerous infiltration models are in use for the estimation of infiltration rates. The conventional graphical approach ...
Sensor Placement for Modal Parameter Subset Estimation
DEFF Research Database (Denmark)
Ulriksen, Martin Dalgaard; Bernal, Dionisio; Damkilde, Lars
2016-01-01
The present paper proposes an approach for deciding on sensor placements in the context of modal parameter estimation from vibration measurements. The approach is based on placing sensors, of which the amount is determined a priori, such that the minimum Fisher information that the frequency...... responses carry on the selected modal parameter subset is, in some sense, maximized. The approach is validated in the context of a simple 10-DOF mass-spring-damper system by computing the variance of a set of identified modal parameters in a Monte Carlo setting for a set of sensor configurations, whose...... It is shown that the widely used Effective Independence (EI) method, which uses the modal amplitudes as surrogates for the parameters of interest, provides sensor configurations yielding theoretical lower bound variances whose maxima are up to 30% larger than those obtained by use of the max-min approach.
Response-Based Estimation of Sea State Parameters
DEFF Research Database (Denmark)
Nielsen, Ulrik Dam
2007-01-01
Reliable estimation of the on-site sea state parameters is essential to decision support systems for safe navigation of ships. The sea state parameters can be estimated by Bayesian Modelling which uses complex-valued frequency response functions (FRF) to estimate the wave spectrum on the basis...... of measured ship responses. It is therefore interesting to investigate how the filtering aspect, introduced by FRF, affects the final outcome of the estimation procedures. The paper contains a study based on numerical generated time series, and the study shows that filtering has an influence...
Dane, J.H.; Vrugt, J.A.; Unsal, E.
2011-01-01
Prediction of flow and transport through unsaturated porous media requires knowledge of the water retention and unsaturated hydraulic conductivity functions. In the past few decades many different laboratory procedures have been developed to estimate these hydraulic properties. Most of these
Nonparametric estimation of location and scale parameters
Potgieter, C.J.
2012-12-01
Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal assumptions regarding the form of the distribution functions of X and Y. We discuss an approach to the estimation problem that is based on asymptotic likelihood considerations. Our results enable us to provide a methodology that can be implemented easily and which yields estimators that are often near optimal when compared to fully parametric methods. We evaluate the performance of the estimators in a series of Monte Carlo simulations. © 2012 Elsevier B.V. All rights reserved.
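A simple distribution-free estimator in the same spirit, though cruder than the asymptotic-likelihood approach of the paper, matches medians and interquartile ranges between the two samples. The sampling distribution below is an arbitrary illustrative choice.

```python
import random
import statistics

random.seed(4)

def iqr(sample):
    """Interquartile range via the statistics module's quartiles."""
    q = statistics.quantiles(sample, n=4)
    return q[2] - q[0]

# X and Y = mu + sigma*X drawn from the same (non-normal) location-scale
# family: cubed standard normals, with true mu = 3 and sigma = 2.
x = [random.gauss(0, 1) ** 3 for _ in range(4000)]
y = [3.0 + 2.0 * random.gauss(0, 1) ** 3 for _ in range(4000)]

# Matching IQRs identifies sigma; matching medians then identifies mu,
# without assuming any parametric form for the common distribution.
sigma_hat = iqr(y) / iqr(x)
mu_hat = statistics.median(y) - sigma_hat * statistics.median(x)
print(f"mu_hat = {mu_hat:.2f}, sigma_hat = {sigma_hat:.2f}")
```

Quantile matching is valid for any continuous distribution in the family, which is the "minimal assumptions" setting the abstract describes; the paper's likelihood-based estimators aim for better efficiency under the same assumptions.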
Global parameter estimation methods for stochastic biochemical systems
Directory of Open Access Journals (Sweden)
Poovathingal Suresh
2010-08-01
Full Text Available Abstract Background The importance of stochasticity in cellular processes having low numbers of molecules has resulted in the development of stochastic models such as the chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for end-applications like analyzing system properties (e.g., robustness) or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited, and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible down to single-molecule levels. Thus, the purpose of this work is to develop practical and effective methods for estimating kinetic model parameters in the chemical master equation and other stochastic models from single-cell and cell-population experimental data. Results Three parameter estimation methods are proposed based on maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach, in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions. Applications to three practical case studies showed that while the maximum likelihood method can effectively handle low-replicate measurements, the density function distance methods, particularly the cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality. Conclusions The parameter
Distribution Line Parameter Estimation Under Consideration of Measurement Tolerances
DEFF Research Database (Denmark)
Prostejovsky, Alexander; Gehrke, Oliver; Kosek, Anna Magdalena
2016-01-01
State estimation and control approaches in electric distribution grids rely on precise electric models that may be inaccurate. This work presents a novel method of estimating distribution line parameters using only root mean square voltage and power measurements under consideration of measurement...
Estimating RASATI scores using acoustical parameters
International Nuclear Information System (INIS)
Agüero, P D; Tulli, J C; Moscardi, G; Gonzalez, E L; Uriz, A J
2011-01-01
Computer-based acoustical analysis of speech has developed considerably in recent years. The clinician's subjective evaluation is complemented by objective measures of relevant voice parameters. Praat, MDVP (Multi Dimensional Voice Program) and SAV (Software for Voice Analysis) are some examples of software for speech analysis. This paper describes an approach to estimate the subjective characteristics of the RASATI scale given objective acoustical parameters. Two approaches were used: linear regression with non-negativity constraints, and neural networks. The experiments show that such an approach gives correct evaluations with ±1 error in 80% of the cases.
Naserkheil, Masoumeh; Miraie-Ashtiani, Seyed Reza; Nejati-Javaremi, Ardeshir; Son, Jihyun; Lee, Deukhwan
2016-12-01
The objective of this study was to estimate the genetic parameters of milk protein yields in Iranian Holstein dairy cattle. A total of 1,112,082 test-day milk protein yield records of 167,269 first lactation Holstein cows, calved from 1990 to 2010, were analyzed. Estimates of the variance components, heritability, and genetic correlations for milk protein yields were obtained using a random regression test-day model. Milking times, herd, age of recording, year, and month of recording were included as fixed effects in the model. Additive genetic and permanent environmental random effects for the lactation curve were taken into account by applying orthogonal Legendre polynomials of the fourth order in the model. The lowest and highest additive genetic variances were estimated at the beginning and end of lactation, respectively. Permanent environmental variance was higher at both extremes. Residual variance was lowest at the middle of the lactation and contrarily, heritability increased during this period. Maximum heritability was found during the 12th lactation stage (0.213±0.007). Genetic, permanent, and phenotypic correlations among test-days decreased as the interval between consecutive test-days increased. A relatively large data set was used in this study; therefore, the estimated (co)variance components for random regression coefficients could be used for national genetic evaluation of dairy cattle in Iran.
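The fourth-order orthogonal Legendre basis used in the random regression test-day model above can be evaluated as follows; the days-in-milk range and its scaling to [-1, 1] are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

# Evaluate the orthogonal Legendre basis (order 4) used for random
# regression on days in milk (DIM), scaled to [-1, 1].
dim = np.linspace(5, 305, 7)                    # sample days in milk (assumed range)
x = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0
Phi = legendre.legvander(x, 4)                  # columns: P0..P4 at each DIM
print(Phi.shape)  # (7, 5)
```

Each cow's additive genetic and permanent environmental effects are then modelled as random coefficients on these five columns, which is what yields lactation-stage-specific variances.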
Energy Technology Data Exchange (ETDEWEB)
Lopez, Patricia M. [Instituo Nacional del Agua y del ambiente (Argentina); Seoane, Rafael S. [Consejo Nacional de Investigaciones Cientificas y Tecnicas (Argentina)
1999-08-01
An indirect method for estimating the scale parameter of the Nash model that accounts for the climatic and geomorphologic characteristics of the basin is proposed. The method uses the derived probability density function technique to integrate results on the dependence of the instantaneous unit hydrograph model parameters on Horton's laws and on basic expressions of geomorphoclimatic theory. The methodology is applied to two basins of Argentina with different climatic characteristics, and the results are compared with the discharges estimated with the direct method of moments.
Lika, K.; Kearney, M.R.; Freitas, V.; van der Veer, H.W.; van der Meer, J.; Wijsman, J.W.M.; Pecquerie, L.; Kooijman, S.A.L.M.
2011-01-01
The Dynamic Energy Budget (DEB) theory for metabolic organisation captures the processes of development, growth, maintenance, reproduction and ageing for any kind of organism throughout its life-cycle. However, the application of DEB theory is challenging because the state variables and parameters
Poggio, D; Walker, M; Nimmo, W; Ma, L; Pourkashanian, M
2016-07-01
This work proposes a novel and rigorous substrate characterisation methodology to be used with ADM1 to simulate the anaerobic digestion of solid organic waste. The proposed method uses data from both direct substrate analysis and the methane production from laboratory scale anaerobic digestion experiments and involves assessment of four substrate fractionation models. The models partition the organic matter into a mixture of particulate and soluble fractions with the decision on the most suitable model being made on quality of fit between experimental and simulated data and the uncertainty of the calibrated parameters. The method was tested using samples of domestic green and food waste and using experimental data from both short batch tests and longer semi-continuous trials. The results showed that in general an increased fractionation model complexity led to better fit but with increased uncertainty. When using batch test data the most suitable model for green waste included one particulate and one soluble fraction, whereas for food waste two particulate fractions were needed. With richer semi-continuous datasets, the parameter estimation resulted in less uncertainty therefore allowing the description of the substrate with a more complex model. The resulting substrate characterisations and fractionation models obtained from batch test data, for both waste samples, were used to validate the method using semi-continuous experimental data and showed good prediction of methane production, biogas composition, total and volatile solids, ammonia and alkalinity. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
Coppens, Marc J.; Eleveld, Douglas J.; Proost, Johannes H.; Marks, Luc A. M.; Van Bocxlaer, Jan F. P.; Vereecke, Hugo; Absalom, Anthony R.; Struys, Michel M. R. F.
Background: To study propofol pharmacodynamics in a clinical setting a pharmacokinetic model must be used to predict drug plasma concentrations. Some investigators use a population pharmacokinetic model from existing literature and minimize the pharmacodynamic objective function. The purpose of the
International Nuclear Information System (INIS)
Park, Kiwon; Lee, Hyungki
2012-01-01
The present study investigated a sensor system to effectively detect the bending angles applied to an ionic polymer metal composite (IPMC) sensor. Firstly, the amount of net charge produced by the motion of cations was correlated to the bending angle based on the geometric relationship between a flat and a bent IPMC, and the relationship was represented by linear and nonlinear polynomial equations. Secondly, several existing and modified R and C circuit models with a linear charge model were evaluated using the experimental data. Thirdly, the nonlinear charge model was applied to a selected circuit model, and the effectiveness of the linear and nonlinear charge models was compared. Finally, the sensor output signal was fed into the inverse model of the identified circuit model to reproduce the bending angles. This paper presents a simple data processing procedure using the inverse transfer function of a selected circuit model that successfully monitored various bending motions of an IPMC sensor.
2012-09-30
atmospheric models and the chaotic growth of initial-condition (IC) error. The aim of our work is to provide new methods that begin to systematically disentangle the model inadequacy signal from the initial condition error signal.
Traveltime approximations and parameter estimation for orthorhombic media
Masmoudi, Nabil
2016-05-30
Building anisotropy models is necessary for seismic modeling and imaging. However, anisotropy estimation is challenging due to the trade-off between inhomogeneity and anisotropy. Luckily, we can estimate the anisotropy parameters if we relate them analytically to traveltimes. Using perturbation theory, we have developed traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2, and Δχ in inhomogeneous background media. The parameter Δχ is related to the Tsvankin-Thomsen notation and ensures easier computation of traveltimes in the background model. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. We have used the Shanks transform to enhance the accuracy of the formulas. A homogeneous medium simplification of the traveltime expansion provided a nonhyperbolic moveout description of the traveltime that was more accurate than other derived approximations. Moreover, the formulation provides a computationally efficient tool to solve the eikonal equation of an orthorhombic medium, without any constraints on the background model complexity. Although the expansion is based on the factorized representation of the perturbation parameters, smooth variations of these parameters (represented as effective values) provide reasonable results. Thus, this formulation provides a mechanism to estimate the three effective parameters η1, η2, and Δχ. We have derived Dix-type formulas for orthorhombic media to convert the effective parameters to their interval values.
Dynamic Mode Decomposition based on Kalman Filter for Parameter Estimation
Shibata, Hisaichi; Nonomura, Taku; Takaki, Ryoji
2017-11-01
With the development of computational fluid dynamics, large-scale data can now be obtained. In order to model physical phenomena from such data, features of the flow field must be extracted. Dynamic mode decomposition (DMD) is a method that meets this need: it computes the dominant eigenmodes of the flow field by approximating the system matrix. From this point of view, DMD can be considered as parameter estimation of the system matrix. To estimate these parameters, we propose a novel method based on the Kalman filter. Our numerical experiments indicated that the proposed method can estimate the parameters more accurately than standard DMD methods. With this method, it is also possible to improve the parameter estimation accuracy if the characteristics of the noise acting on the system are given.
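The "DMD as system-matrix estimation" view described above can be sketched with standard (non-Kalman) exact DMD on synthetic snapshots; the 2x2 decaying-rotation test system is a hypothetical stand-in for flow-field data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshot data from a known linear system x_{k+1} = A x_k,
# standing in for flow-field snapshots (A is a hypothetical test matrix).
theta = 0.3
A_true = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])  # decaying rotation
x = rng.standard_normal(2)
snaps = [x]
for _ in range(50):
    x = A_true @ x
    snaps.append(x)
X = np.array(snaps).T          # state dimension x number of snapshots
X1, X2 = X[:, :-1], X[:, 1:]

# Exact DMD: approximate the system matrix from snapshot pairs via the SVD
# of X1 and extract its eigenvalues (the DMD eigenvalues).
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
eigvals = np.linalg.eigvals(A_tilde)
print(np.sort_complex(eigvals))  # ≈ 0.95 * exp(±0.3i), the true eigenvalues
```

On noise-free linear data the recovered eigenvalues match the true system exactly; the Kalman-filter variant in the abstract targets the noisy case.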
Directory of Open Access Journals (Sweden)
Mathieu Andraud
2018-01-01
Full Text Available The outputs of epidemiological models are strongly related to the structure of the model and input parameters. The latter are defined by fitting theoretical concepts to actual data derived from field or experimental studies. However, some parameters may remain difficult to estimate and are subject to uncertainty or sensitivity analyses to determine their variation range and their global impact on model outcomes. As such, the evaluation of immunity duration is often a puzzling issue requiring long-term follow-up data that are, most of the time, not available. The present analysis aims at characterizing the kinetics of antibodies against Porcine Reproductive and Respiratory Syndrome virus (PRRSv) from longitudinal data sets. The first data set consisted of the serological follow-up of 22 vaccinated gilts during 21 weeks post-vaccination (PV). The second one gathered the maternally derived antibodies (MDAs) kinetics in piglets from three different farms up to 14 weeks of age. The peak of the PV serological response against PRRSv was reached 6.9 weeks PV on average with an average duration of antibodies persistence of 26.5 weeks. In the monitored cohort of piglets, the duration of passive immunity was found to be relatively short, with an average duration of 4.8 weeks. The level of PRRSv-MDAs was found to be correlated with the dams' antibody titer at birth, and the antibody persistence was strongly related to the initial MDA titers in piglets. These results evidenced the importance of the PRRSv vaccination schedule in sows in optimizing the delivery of antibodies to suckling piglets. These estimates of the duration of active and passive immunity could be further used as input parameters of epidemiological models to analyze their impact on the persistence of PRRSv within farms.
Andraud, Mathieu; Fablet, Christelle; Renson, Patricia; Eono, Florent; Mahé, Sophie; Bourry, Olivier; Rose, Nicolas
2018-01-01
The outputs of epidemiological models are strongly related to the structure of the model and input parameters. The latter are defined by fitting theoretical concepts to actual data derived from field or experimental studies. However, some parameters may remain difficult to estimate and are subject to uncertainty or sensitivity analyses to determine their variation range and their global impact on model outcomes. As such, the evaluation of immunity duration is often a puzzling issue requiring long-term follow-up data that are, most of the time, not available. The present analysis aims at characterizing the kinetics of antibodies against Porcine Reproductive and Respiratory Syndrome virus (PRRSv) from longitudinal data sets. The first data set consisted of the serological follow-up of 22 vaccinated gilts during 21 weeks post-vaccination (PV). The second one gathered the maternally derived antibodies (MDAs) kinetics in piglets from three different farms up to 14 weeks of age. The peak of the PV serological response against PRRSv was reached 6.9 weeks PV on average with an average duration of antibodies persistence of 26.5 weeks. In the monitored cohort of piglets, the duration of passive immunity was found to be relatively short, with an average duration of 4.8 weeks. The level of PRRSv-MDAs was found to be correlated with the dams' antibody titer at birth, and the antibody persistence was strongly related to the initial MDA titers in piglets. These results evidenced the importance of the PRRSv vaccination schedule in sows in optimizing the delivery of antibodies to suckling piglets. These estimates of the duration of active and passive immunity could be further used as input parameters of epidemiological models to analyze their impact on the persistence of PRRSv within farms.
Kalman filter estimation of RLC parameters for UMP transmission line
Directory of Open Access Journals (Sweden)
Mohd Amin Siti Nur Aishah
2018-01-01
Full Text Available This paper presents the development of a Kalman filter for estimating the resistance (R), inductance (L), and capacitance (C) values of the Universiti Malaysia Pahang (UMP) short transmission line. To overcome weaknesses of the existing system, such as power losses in the transmission line, the Kalman filter can be a better solution for estimating the parameters. The aim of this paper is to estimate the RLC values using a Kalman filter, which can ultimately increase system efficiency at UMP. In this research, a MATLAB Simulink model is developed to analyse the UMP short transmission line under different noise conditions so as to represent certain unknown parameters that are difficult to predict. The data are then used for comparison between calculated and estimated values. The results illustrate that the Kalman filter estimates the RLC parameters accurately, with small error. A comparison of accuracy between the Kalman filter and the least squares method is also presented to evaluate their performance.
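The idea of estimating a line parameter with a Kalman filter can be sketched in its simplest form: a single constant resistance treated as the state, observed through noisy voltage drops at known currents. The line value, currents, and noise levels are hypothetical; this is not the UMP Simulink model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical line: estimate a constant series resistance R from noisy
# voltage-drop measurements v_k = i_k * R + noise (i_k are known currents).
R_true = 12.5
i_meas = rng.uniform(1.0, 10.0, size=200)
v_meas = i_meas * R_true + rng.normal(0.0, 0.5, size=200)

# Scalar Kalman filter with the parameter modelled as a constant state:
# x_{k+1} = x_k (plus tiny process noise), z_k = i_k * x_k + v_noise.
x, P = 0.0, 100.0        # initial guess and (deliberately large) variance
Q, Rn = 1e-8, 0.25       # process and measurement noise variances
for ik, zk in zip(i_meas, v_meas):
    P = P + Q                          # predict (parameter is constant)
    K = P * ik / (ik * P * ik + Rn)    # Kalman gain with H = ik
    x = x + K * (zk - ik * x)          # measurement update
    P = (1.0 - K * ik) * P
print(x)  # ≈ 12.5
```

For a constant parameter this recursion coincides with recursive least squares, which is why a comparison against the batch least squares method (as in the abstract) is natural.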
Methods of statistical model estimation
Hilbe, Joseph
2013-01-01
Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method. Th
Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea
Sawlan, Zaid A
2012-12-01
Tsunami concerns have increased in the world after the 2004 Indian Ocean tsunami and the 2011 Tohoku tsunami. Consequently, tsunami models have been developed rapidly in the last few years. One of the advanced tsunami models is the GeoClaw tsunami model introduced by LeVeque (2011). This model is adaptive and consistent. Because of different sources of uncertainties in the model, observations are needed to improve model prediction through a data assimilation framework. Model inputs are earthquake parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines a tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction while the smoother is used to estimate the earthquake parameters. This method reduces the error produced by uncertain inputs. In addition, a state-parameter EnKF is implemented to estimate the earthquake parameters. Although the number of observations is small, the estimated parameters generate a better tsunami prediction than the model alone. Methods and results of prediction experiments in the Red Sea are presented and the prospect of developing an operational tsunami prediction system in the Red Sea is discussed.
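The state-parameter EnKF idea can be sketched on a toy linear model, with an unknown drift playing the role of an earthquake parameter in the augmented state. The model, ensemble size, and noise levels are illustrative assumptions, not the thesis' tsunami setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy state-parameter EnKF: estimate an unknown drift a in x_{k+1} = x_k + a
# from noisy observations of x. The drift is a hypothetical stand-in for an
# earthquake parameter driving a forward model.
a_true, obs_std = 0.7, 0.2
x_true = 0.0
N = 200                                        # ensemble size
ens = np.stack([np.zeros(N),                   # state members
                rng.normal(0.0, 1.0, N)])      # parameter members (prior)
for _ in range(50):
    x_true += a_true
    y = x_true + rng.normal(0.0, obs_std)      # noisy observation of the state
    ens[0] += ens[1]                           # forecast each member
    # EnKF update of the augmented state [x, a] with observation operator H = [1, 0]
    C = np.cov(ens)                            # 2x2 ensemble covariance
    K = C[:, 0] / (C[0, 0] + obs_std**2)       # Kalman gain
    perturbed = y + rng.normal(0.0, obs_std, N)  # perturbed-observations variant
    ens += np.outer(K, perturbed - ens[0])
print(ens[1].mean())  # ≈ 0.7
```

The parameter is never observed directly; it is corrected through its ensemble correlation with the observed state, which is the mechanism that lets a small number of observations constrain source parameters.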
DEFF Research Database (Denmark)
Ottesen, Johnny T.; Mehlsen, Jesper; Olufsen, Mette
2014-01-01
Numerous mathematical models have been proposed for prediction of baroreflex regulation of heart rate, yet most of these have been designed to provide qualitative predictions... in detail on a model predicting baroreflex regulation of heart rate and applied to analysis of data from a rat and healthy humans...
Estimation of Parameters in Mean-Reverting Stochastic Systems
Directory of Open Access Journals (Sweden)
Tianhai Tian
2014-01-01
Full Text Available A stochastic differential equation (SDE) is a very important mathematical tool for describing complex systems in which noise plays an important role. SDE models have been widely used to study the dynamic properties of various nonlinear systems in biology, engineering, finance, and economics, as well as the physical sciences. Since an SDE can generate an unlimited number of trajectories, it is difficult to estimate model parameters based on experimental observations that may represent only one trajectory of the stochastic model. Although substantial research efforts have been made to develop effective methods, it is still a challenge to infer unknown parameters in SDE models from observations that may have large variations. Using an interest rate model as a test problem, in this work we use Bayesian inference and the Markov chain Monte Carlo method to estimate unknown parameters in SDE models.
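A sketch of the setting described above: an interest-rate-style mean-reverting SDE simulated by Euler-Maruyama, with parameters recovered by least squares on the discretized dynamics. The least-squares step is a simple frequentist stand-in for the Bayesian MCMC used in the paper, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a Vasicek-type mean-reverting interest rate model,
#   dX = kappa * (theta - X) dt + sigma dW,
# with the Euler-Maruyama scheme.
kappa, theta, sigma, dt = 2.0, 0.05, 0.02, 1e-3
x = np.empty(200_000)
x[0] = 0.10
for k in range(x.size - 1):
    x[k + 1] = x[k] + kappa * (theta - x[k]) * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()

# The Euler step implies x_{k+1} = a + b * x_k + noise with
# b = 1 - kappa*dt and a = kappa*theta*dt, so a linear fit recovers both.
b, a = np.polyfit(x[:-1], x[1:], 1)
kappa_hat = (1.0 - b) / dt
theta_hat = a / (kappa_hat * dt)
print(kappa_hat, theta_hat)  # ≈ 2.0 and ≈ 0.05
```

Note the estimation uses a single trajectory, mirroring the difficulty highlighted in the abstract: the mean-reversion rate is the hard parameter, since its information accrues only over many reversion times.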
International Nuclear Information System (INIS)
Williams, W.R.; Anderson, J.C.
1995-01-01
The transportation of UF6 is subject to regulations requiring the evaluation of packaging under a sequence of hypothetical accident conditions including exposure to a 30-min 800°C (1475°F) fire [10 CFR 71.73(c)(3)]. An issue of continuing interest is whether bare cylinders can withstand such a fire without rupturing. To address this issue, a lumped parameter heat transfer/stress analysis model (6FIRE) has been developed to simulate heating to the point of rupture of a cylinder containing UF6 when it is exposed to a fire. The model is described, then estimates of time to rupture are presented for various cylinder types, fire temperatures, and fill conditions. An assessment of the quantity of UF6 released from containment after rupture is also presented. Further documentation of the model is referenced.
Directory of Open Access Journals (Sweden)
Ranran Li
2015-09-01
Full Text Available An integrated approach using the inverse method and Bayesian approach, combined with a lake eutrophication water quality model, was developed for parameter estimation and water environmental capacity (WEC) analysis. The model was used to support load reduction and effective water quality management in the Taihu Lake system in eastern China. Water quality was surveyed yearly from 1987 to 2010. Total nitrogen (TN) and total phosphorus (TP) were selected as water quality model variables. Decay rates of TN and TP were estimated using the proposed approach. WECs of TN and TP in 2011 were determined based on the estimated decay rates. Results showed that the historical loading was beyond the WEC; thus, reduction of nitrogen and phosphorus input is necessary to meet water quality goals. Then WEC and allowable discharge capacity (ADC) in 2015 and 2020 were predicted. The reduction ratios of ADC during these years were also provided. All of these enable decision makers to assess the influence of each loading and visualize potential load reductions under different water quality goals, and then to formulate a reasonable water quality management strategy.
Li, Ranran; Zou, Zhihong
2015-09-29
An integrated approach using the inverse method and Bayesian approach, combined with a lake eutrophication water quality model, was developed for parameter estimation and water environmental capacity (WEC) analysis. The model was used to support load reduction and effective water quality management in the Taihu Lake system in eastern China. Water quality was surveyed yearly from 1987 to 2010. Total nitrogen (TN) and total phosphorus (TP) were selected as water quality model variables. Decay rates of TN and TP were estimated using the proposed approach. WECs of TN and TP in 2011 were determined based on the estimated decay rates. Results showed that the historical loading was beyond the WEC, thus, reduction of nitrogen and phosphorus input is necessary to meet water quality goals. Then WEC and allowable discharge capacity (ADC) in 2015 and 2020 were predicted. The reduction ratios of ADC during these years were also provided. All of these enable decision makers to assess the influence of each loading and visualize potential load reductions under different water quality goals, and then to formulate a reasonable water quality management strategy.
Ultrasonic data compression via parameter estimation.
Cardoso, Guilherme; Saniie, Jafar
2005-02-01
Ultrasonic imaging in medical and industrial applications often requires a large amount of data collection. Consequently, it is desirable to use data compression techniques to reduce data and to facilitate the analysis and remote access of ultrasonic information. The precise data representation is paramount to the accurate analysis of the shape, size, and orientation of ultrasonic reflectors, as well as to the determination of the properties of the propagation path. In this study, a successive parameter estimation algorithm based on a modified version of the continuous wavelet transform (CWT) to compress and denoise ultrasonic signals is presented. It has been shown analytically that the CWT (i.e., time × frequency representation) yields an exact solution for the time-of-arrival and a biased solution for the center frequency. Consequently, a modified CWT (MCWT) based on the Gabor-Helstrom transform is introduced as a means to exactly estimate both time-of-arrival and center frequency of ultrasonic echoes. Furthermore, the MCWT also has been used to generate a phase × bandwidth representation of the ultrasonic echo. This representation allows the exact estimation of the phase and the bandwidth. The performance of this algorithm for data compression and signal analysis is studied using simulated and experimental ultrasonic signals. The successive parameter estimation algorithm achieves a data compression ratio of (1-5N/J), where J is the number of samples and N is the number of echoes in the signal. For a signal with 10 echoes and 2048 samples, a compression ratio of 96% is achieved with a signal-to-noise ratio (SNR) improvement above 20 dB. Furthermore, this algorithm performs robustly, yields accurate echo estimation, and results in SNR enhancements ranging from 10 to 60 dB for composite signals having SNR as low as -10 dB.
Parameter estimation of an aeroelastic aircraft using neural networks
Indian Academy of Sciences (India)
Application of neural networks to the problem of aerodynamic modelling and parameter estimation for aeroelastic aircraft is addressed. A neural model capable of predicting generalized force and moment coefficients using measured motion and control variables only, without any need for conventional normal elastic ...
Simultaneous estimation of experimental and material parameters
CSIR Research Space (South Africa)
Jansen van Rensburg, GJ
2012-07-01
Full Text Available This conference contribution focusses on the invertibility of non-ideal material tests to accurately determine material parameters. This is done by attempting to model non-ideal test cases and comparing strains as well as force history...
Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.
2015-09-18
The PEST++ Version 1 object-oriented parameter estimation code is here extended to Version 3 to incorporate additional algorithms and tools to further improve support for large and complex environmental modeling problems. PEST++ Version 3 includes the Gauss-Marquardt-Levenberg (GML) algorithm for nonlinear parameter estimation, Tikhonov regularization, integrated linear-based uncertainty quantification, options of integrated TCP/IP based parallel run management or external independent run management by use of a Version 2 update of the GENIE Version 1 software code, and utilities for global sensitivity analyses. The Version 3 code design is consistent with PEST++ Version 1 and continues to be designed to lower the barriers of entry for users as well as developers while providing efficient and optimized algorithms capable of accommodating large, highly parameterized inverse problems. As such, this effort continues the original focus of (1) implementing the most popular and powerful features of the PEST software suite in a fashion that is easy for novice or experienced modelers to use and (2) developing a software framework that is easy to extend.
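The Gauss-Marquardt-Levenberg (GML) algorithm named above can be sketched for a one-parameter nonlinear model; this toy, fitting an assumed model y = exp(-k*t), is not PEST++ itself but shows the damped Gauss-Newton iteration with an adaptive Marquardt parameter.

```python
import numpy as np

# Minimal Gauss-Marquardt-Levenberg (damped Gauss-Newton) iteration for a
# one-parameter nonlinear model y = exp(-k*t). Synthetic, noise-free data
# with k_true = 0.8; the model and values are illustrative assumptions.
t = np.linspace(0.0, 4.0, 40)
y_obs = np.exp(-0.8 * t)

k, lam = 0.1, 1.0                      # poor initial guess, damping factor
for _ in range(100):
    r = y_obs - np.exp(-k * t)         # residual vector
    J = t * np.exp(-k * t)             # Jacobian of the residual w.r.t. k
    step = -(J @ r) / (J @ J + lam)    # damped Gauss-Newton step
    if np.sum((y_obs - np.exp(-(k + step) * t))**2) < np.sum(r**2):
        k += step
        lam *= 0.5                     # success: trust the Gauss-Newton step more
    else:
        lam *= 2.0                     # failure: increase damping
print(k)  # ≈ 0.8
```

Large lam makes the update a short gradient-descent-like step; small lam approaches pure Gauss-Newton, which is the trade-off the Marquardt parameter manages in codes like PEST++.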
Directory of Open Access Journals (Sweden)
Silvierio Rosa
2018-02-01
Full Text Available A statewide Human Respiratory Syncytial Virus (HRSV) surveillance system was implemented in Florida in 1999 to support clinical decision-making for prophylaxis of premature infants. The research presented in this paper addresses the problem of fitting real data collected by the Florida HRSV surveillance system by using a periodic SEIRS mathematical model. A sensitivity and cost-effectiveness analysis of the model is done and an optimal control problem is formulated and solved with treatment as the control variable.
Parameter Estimation of Damped Compound Pendulum Differential Evolution Algorithm
Directory of Open Access Journals (Sweden)
Saad Mohd Sazli
2016-01-01
Full Text Available This paper presents the parameter identification of a damped compound pendulum using the differential evolution algorithm. The procedure used to achieve parameter identification of the experimental system consisted of input-output data collection, ARX model order selection, and parameter estimation using the conventional least squares (LS) method and the differential evolution (DE) algorithm. A PRBS signal is used as the input signal to regulate the motor speed, whereas the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the ARX model. The residual error between the actual and predicted output responses of the models is validated using the mean squared error (MSE). Analysis showed that the MSE value for LS is 0.0026 and the MSE value for DE is 3.6601×10⁻⁵. Based on the results obtained, DE has a lower MSE than the LS method.
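The ARX-plus-least-squares step of the procedure above can be sketched as follows; the second-order plant coefficients, the PRBS-style input, and the noise level are hypothetical, not the pendulum rig's.

```python
import numpy as np

rng = np.random.default_rng(5)

# Second-order ARX model, y[k] = -a1*y[k-1] - a2*y[k-2] + b1*u[k-1] + e[k],
# identified by least squares from PRBS-like excitation (assumed coefficients).
a1, a2, b1 = -1.5, 0.7, 0.5
u = rng.choice([-1.0, 1.0], size=1000)          # PRBS-style input
y = np.zeros(1000)
for k in range(2, 1000):
    y[k] = -a1 * y[k - 1] - a2 * y[k - 2] + b1 * u[k - 1] \
           + rng.normal(0.0, 0.01)              # small measurement noise

# Stack the regressors and solve the linear least squares problem.
Phi = np.column_stack([-y[1:-1], -y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(theta)  # ≈ [-1.5, 0.7, 0.5]
```

A DE-based search would minimize the same prediction-error criterion (the MSE) over candidate coefficient vectors instead of solving the normal equations in closed form.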
Estimation of Physical Parameters in Linear and Nonlinear Dynamic Systems
DEFF Research Database (Denmark)
Knudsen, Morten
Estimation of physical parameters is an important subclass of system identification. The specific objective is to obtain accurate estimates of the model parameters, while the objective of other aspects of system identification might be to determine a model where other properties, such as responses for certain input in the time or frequency domain, are emphasised. Consequently, some special techniques are required, in particular for input signal design and model validation. The model structure containing physical parameters is constructed from basic physical laws (mathematical modelling). It is possible and essential to utilise this physical insight in the input design and validation procedures. This project has two objectives: 1. To develop and apply theories and techniques that are compatible with physical insight and robust to violation of assumptions and approximations, for system identification in general...
Khayet, Mohamed; Fernandez Fernandez, Victoria
2012-01-01
Background Most aerial plant parts are covered with a hydrophobic lipid-rich cuticle, which is the interface between the plant organs and the surrounding environment. Plant surfaces may have a high degree of hydrophobicity because of the combined effects of surface chemistry and roughness. The physical and chemical complexity of the plant cuticle limits the development of models that explain its internal structure and interactions with surface-applied agrochemicals. In this article we int...
Rozos, Evangelos; Akylas, Evangelos; Koussis, Antonis D.
2013-04-01
Slug tests offer a fast and inexpensive means of estimating the hydraulic parameters of a geologic formation, and are very well suited for contaminated site assessment because essentially no water is withdrawn. In the great majority of slug tests performed in wells fully penetrating confined geologic formations, and for over-damped conditions, the response data are evaluated with the transient-flow model of Cooper et al. (1967) when the radial hydraulic conductivity Kr and the coefficient of specific storage Ss are to be estimated. That particular analytical solution, however, is computationally involved and awkward to use. Thus, groundwater professionals often use a few pre-prepared type-curves to fit the data by a rough matching procedure, visually or computationally. On the other hand, the method of Hvorslev (1951), which assumes the flow to be quasi-steady, is much simpler but yields only Kr-estimates. Koussis and Akylas (2012) have derived a complete quasi-steady flow model that includes a storage balance inside the aquifer and allows estimating both Kr and Ss, through matching of the well response data to a (dimensionless) type-curve. That model approximates the model of Cooper et al. closely and has the practical advantage that its solution type-curves are generated very simply, even using an electronic spreadsheet. Thus, an optimal fit of data by a type-curve can be readily embedded in an exhaustive search. That forward procedure, however, is semi-automated; it involves repeated computation of the quasi-steady flow solution, until finding an optimal pair of Kr and Ss values, according to some formal criterion of optimality, or visually. In addition, we have developed a fully automated inverse procedure for estimating the optimal hydraulic formation parameters Kr and Ss. We test and compare these two parameter estimation methods for the slug test and discuss their strengths and weaknesses. Cooper, H. H., Jr., J. D. Bredehoeft and I. S. Papadopulos. 1967
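The forward procedure, an exhaustive search over candidate (Kr, Ss) pairs matched to a type-curve, can be sketched as below. The exponential-decay `type_curve` function is a stand-in for the Koussis-Akylas quasi-steady solution, which would be substituted in practice; all numerical values are hypothetical.

```python
import numpy as np

# Placeholder normalized head response h/h0 for candidate parameters (Kr, Ss).
# NOTE: this is an illustrative surrogate, not the actual quasi-steady solution.
def type_curve(t, Kr, Ss):
    return np.exp(-Kr * t) / (1.0 + Ss * t)

t = np.linspace(0.0, 10.0, 50)
Kr_true, Ss_true = 0.3, 0.1
data = type_curve(t, Kr_true, Ss_true)     # synthetic well response

# Exhaustive grid search: keep the (Kr, Ss) pair minimizing the sum of
# squared deviations between the type-curve and the response data.
best = min(
    ((Kr, Ss) for Kr in np.linspace(0.1, 1.0, 10)
              for Ss in np.linspace(0.05, 0.5, 10)),
    key=lambda p: np.sum((type_curve(t, *p) - data) ** 2),
)
print(best)
```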
Directory of Open Access Journals (Sweden)
Luan Yihui
2009-09-01
Full Text Available Abstract Background Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Results Recent studies have shown that the degree distribution of the nodes is not an adequate statistic in many molecular networks. We sought to extend this statistic with 2nd and 3rd order degree correlations and developed a pseudo-likelihood approach to estimate the parameters. The approach was used to analyze the MIPS and BIOGRID yeast protein interaction networks, and two yeast coexpression networks. We showed that 2nd order degree correlation information gave better predictions of gene interactions in both protein interaction and gene coexpression networks. However, in the biologically important task of predicting functionally homogeneous modules, degree correlation information performs marginally better in the case of the MIPS and BIOGRID protein interaction networks, but worse in the case of gene coexpression networks. Conclusion Our use of dK models showed that incorporation of degree correlations could increase predictive power in some contexts, albeit sometimes marginally, but, in all contexts, the use of third-order degree correlations decreased accuracy. However, it is possible that other parameter estimation methods, such as maximum likelihood, will show the usefulness of incorporating 2nd and 3rd degree correlations in predicting functionally homogeneous modules.
MCMC estimation of multidimensional IRT models
Beguin, Anton; Glas, Cornelis A.W.
1998-01-01
A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will
Statistical distributions applications and parameter estimates
Thomopoulos, Nick T
2017-01-01
This book gives a description of the group of statistical distributions that have ample application to studies in statistics and probability. Understanding statistical distributions is fundamental for researchers in almost all disciplines. The informed researcher will select the statistical distribution that best fits the data in the study at hand. Some of the distributions are well known to the general researcher and are in use in a wide variety of ways. Other useful distributions are less understood and are not in common use. The book describes when and how to apply each of the distributions in research studies, with a goal to identify the distribution that best applies to the study. The distributions are for continuous, discrete, and bivariate random variables. In most studies, the parameter values are not known a priori, and sample data is needed to estimate parameter values. In other scenarios, no sample data is available, and the researcher seeks some insight that allows the estimate of ...
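When parameter values are not known a priori, they are estimated from sample data, as the book describes. A minimal illustration: the closed-form maximum-likelihood estimates for a normal distribution (the sample mean and the biased standard-deviation estimator), compared against the generating parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
sample = rng.normal(loc=5.0, scale=2.0, size=10_000)

mu_hat = sample.mean()                                 # MLE of the mean
sigma_hat = np.sqrt(np.mean((sample - mu_hat) ** 2))   # MLE of std (biased)
print(mu_hat, sigma_hat)
```

With 10,000 observations, both estimates land close to the generating values of 5.0 and 2.0.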
Saputro, Dewi Retno Sari; Widyaningsih, Purnami
2017-08-01
In general, parameter estimation for the GWOLR model uses the maximum likelihood method, but this leads to a system of nonlinear equations that is difficult to solve exactly, so an approximate numerical solution is needed. There are two popular numerical methods: Newton's method and the Quasi-Newton (QN) method. Newton's method requires considerable computation time because it involves the Jacobian matrix (derivatives). The QN method overcomes this drawback by replacing explicit derivative computation with direct function evaluations. The QN method uses a Hessian matrix approximation based on the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a QN method that shares the DFP formula's property of maintaining a positive definite Hessian approximation. The BFGS method requires large memory when executing the program, so another algorithm is needed to decrease memory usage, namely the Limited-memory BFGS (LBFGS). The purpose of this research is to assess the efficiency of the LBFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. From the research findings, we found that the BFGS and LBFGS methods have arithmetic operation counts of O(n²) and O(nm), respectively.
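The BFGS/L-BFGS trade-off can be illustrated with SciPy's built-in implementations. The objective below is an ordinary logistic-regression negative log-likelihood, a simplified stand-in for the GWOLR likelihood (which is ordinal and geographically weighted); the data and coefficients are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 4))
beta_true = np.array([1.0, -2.0, 0.5, 0.0])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

def negloglik(beta):
    z = X @ beta
    # logaddexp(0, z) = log(1 + exp(z)), computed stably
    return np.sum(np.logaddexp(0.0, z) - y * z)

x0 = np.zeros(4)
# BFGS stores a dense n-by-n Hessian approximation (O(n^2) memory);
# L-BFGS-B keeps only m recent vector pairs (O(nm) memory).
res_bfgs = minimize(negloglik, x0, method="BFGS")
res_lbfgs = minimize(negloglik, x0, method="L-BFGS-B")
print(res_bfgs.x, res_lbfgs.x)
```

Both methods should converge to the same maximum-likelihood estimate; the memory difference only becomes decisive for high-dimensional parameter vectors.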
Parameter and state estimation in nonlinear dynamical systems
Creveling, Daniel R.
This thesis is concerned with the problem of state and parameter estimation in nonlinear systems. The need to evaluate unknown parameters in models of nonlinear physical, biophysical and engineering systems occurs throughout the development of phenomenological or reduced models of dynamics. When verifying and validating these models, it is important to incorporate information from observations in an efficient manner. Using the idea of synchronization of nonlinear dynamical systems, this thesis develops a framework for presenting data to a candidate model of a physical process in a way that makes efficient use of the measured data while allowing for estimation of the unknown parameters in the model. The approach presented here builds on existing work that uses synchronization as a tool for parameter estimation. Some critical issues of stability in that work are addressed and a practical framework is developed for overcoming these difficulties. The central issue is the choice of coupling strength between the model and data. If the coupling is too strong, the model will reproduce the measured data regardless of the adequacy of the model or correctness of the parameters. If the coupling is too weak, nonlinearities in the dynamics could lead to complex dynamics rendering any cost function comparing the model to the data inadequate for the determination of model parameters. Two methods are introduced which seek to balance the need for coupling with the desire to allow the model to evolve in its natural manner without coupling. One method, 'balanced' synchronization, adds to the synchronization cost function a requirement that the conditional Lyapunov exponents of the model system, conditioned on being driven by the data, remain negative but small in magnitude. Another method allows the coupling between the data and the model to vary in time according to a specific form of differential equation. The coupling dynamics is damped to allow for a tendency toward zero coupling
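The core idea, driving a candidate model with data through a coupling term and scanning a synchronization cost over the unknown parameter, can be sketched in one dimension. The scalar system, coupling strength k, and parameter grid below are all illustrative; the thesis is precisely about how to choose k so the model is neither slaved to the data nor free to diverge.

```python
import numpy as np

# Euler integration of dx/dt = -p*x + sin(t/2), optionally coupled to
# measured data via the synchronization term k*(x_data - x_model).
def simulate(p, x_data=None, k=0.0, dt=0.01, n=2000):
    x = np.empty(n)
    x[0] = 1.0
    for i in range(n - 1):
        drive = 0.0 if x_data is None else k * (x_data[i] - x[i])
        x[i + 1] = x[i] + dt * (-p * x[i] + np.sin(0.5 * i * dt) + drive)
    return x

p_true = 2.0
data = simulate(p_true)                     # "measured" trajectory

# Synchronization cost for each candidate parameter, with moderate coupling
costs = {p: np.mean((simulate(p, data, k=1.0) - data) ** 2)
         for p in np.arange(0.5, 4.01, 0.25)}
p_best = min(costs, key=costs.get)
print(p_best)
```

The cost is minimized at the true parameter because only there can the coupled model track the data with vanishing coupling effort.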
Rochoux, M. C.; Ricci, S.; Lucor, D.; Cuenot, B.; Trouvé, A.
2014-11-01
This paper is the first part in a series of two articles and presents a data-driven wildfire simulator for forecasting wildfire spread scenarios, at a reduced computational cost that is consistent with operational systems. The prototype simulator features the following components: an Eulerian front propagation solver FIREFLY that adopts a regional-scale modeling viewpoint, treats wildfires as surface propagating fronts, and uses a description of the local rate of fire spread (ROS) as a function of environmental conditions based on Rothermel's model; a series of airborne-like observations of the fire front positions; and a data assimilation (DA) algorithm based on an ensemble Kalman filter (EnKF) for parameter estimation. This stochastic algorithm partly accounts for the nonlinearities between the input parameters of the semi-empirical ROS model and the fire front position, and is sequentially applied to provide a spatially uniform correction to wind and biomass fuel parameters as observations become available. A wildfire spread simulator combined with an ensemble-based DA algorithm is therefore a promising approach to reduce uncertainties in the forecast position of the fire front and to introduce a paradigm-shift in the wildfire emergency response. In order to reduce the computational cost of the EnKF algorithm, a surrogate model based on a polynomial chaos (PC) expansion is used in place of the forward model FIREFLY in the resulting hybrid PC-EnKF algorithm. The performance of EnKF and PC-EnKF is assessed on synthetically generated simple configurations of fire spread to provide valuable information and insight on the benefits of the PC-EnKF approach, as well as on a controlled grassland fire experiment. The results indicate that the proposed PC-EnKF algorithm features similar performance to the standard EnKF algorithm, but at a much reduced computational cost. In particular, the re-analysis and forecast skills of DA strongly relate to the spatial and temporal
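A single EnKF parameter-update step of the kind embedded in such algorithms can be sketched as follows. The linear forward model `forward(p) = 3*p` is a toy stand-in for FIREFLY (or its polynomial-chaos surrogate), and the ensemble size, prior, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def forward(p):            # surrogate forward model: parameter -> observable
    return 3.0 * p

p_true, obs_std = 2.0, 0.1
obs = forward(p_true) + obs_std * rng.normal()   # one noisy observation

ens = rng.normal(1.0, 0.5, size=100)   # prior parameter ensemble
hx = forward(ens)                      # ensemble of predicted observations

# Kalman gain from ensemble statistics: K = C_ph / (C_hh + R)
K = np.cov(ens, hx)[0, 1] / (np.var(hx, ddof=1) + obs_std**2)

# Perturbed-observation update: nudge each member toward the observation
ens_post = ens + K * (obs + obs_std * rng.normal(size=100) - hx)
print(ens_post.mean())
```

Sequentially repeating this update as new fire-front observations arrive is what produces the spatially uniform corrections to the wind and fuel parameters described above.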
Flores, José L.; Karam, Hugo A.; Marques Filho, Edson P.; Pereira Filho, Augusto J.
2016-02-01
The main goal of this paper is to estimate a set of optimal seasonal, daily, and hourly values of atmospheric turbidity and surface radiative ...