#### Sample records for parameter estimation approach

1. MCMC for parameter estimation by Bayesian approach

International Nuclear Information System (INIS)

Ait Saadi, H.; Ykhlef, F.; Guessoum, A.

2011-01-01

This article discusses parameter estimation for dynamic systems by a Bayesian approach associated with Markov Chain Monte Carlo (MCMC) methods. MCMC methods are powerful for approximating complex integrals, simulating joint distributions, and estimating marginal posterior distributions or posterior means. The Metropolis-Hastings algorithm has been widely used in Bayesian inference to approximate posterior densities. Calibrating the proposal distribution is one of the main issues of MCMC simulation in order to accelerate convergence.
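The proposal-calibration issue mentioned in the abstract can be seen in a minimal random-walk Metropolis-Hastings sampler. The Gaussian toy posterior and all names below are illustrative assumptions, not the article's implementation:

```python
import math
import random

random.seed(0)

# Toy data: 50 draws from a Gaussian whose mean mu we want to infer.
data = [random.gauss(2.0, 1.0) for _ in range(50)]

def log_posterior(mu):
    # Flat prior on mu; Gaussian likelihood with known unit variance.
    return -0.5 * sum((x - mu) ** 2 for x in data)

def metropolis_hastings(n_iter=5000, proposal_scale=0.5):
    mu = 0.0                     # arbitrary starting state
    lp = log_posterior(mu)
    samples, accepted = [], 0
    for _ in range(n_iter):
        cand = random.gauss(mu, proposal_scale)   # symmetric random-walk proposal
        lp_cand = log_posterior(cand)
        # Accept with probability min(1, posterior ratio).
        if math.log(random.random()) < lp_cand - lp:
            mu, lp, accepted = cand, lp_cand, accepted + 1
        samples.append(mu)
    return samples, accepted / n_iter

samples, acc_rate = metropolis_hastings()
posterior_mean = sum(samples[1000:]) / len(samples[1000:])
```

An acceptance rate far outside the roughly 20-50% range signals a poorly calibrated `proposal_scale`, which is the convergence issue the abstract highlights.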

2. Parameter Estimation

DEFF Research Database (Denmark)

Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

2011-01-01

In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.

3. Estimating Soil Hydraulic Parameters using Gradient Based Approach

Science.gov (United States)

Rai, P. K.; Tripathi, S.

2017-12-01

The conventional way of estimating the parameters of a differential equation is to minimize the error between the observations and their estimates, where the estimates are produced from a forward solution (numerical or analytical) of the differential equation under an assumed set of parameters. Parameter estimation using the conventional approach requires high computational cost, the setting up of initial and boundary conditions, and the formation of difference equations when the forward solution is obtained numerically. Gaussian-process-based approaches such as Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations (ODEs) without explicitly solving them. Claims have been made that these approaches can be straightforwardly extended to partial differential equations (PDEs); however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require explicitly setting up initial and boundary conditions, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data, and the results show that it can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
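The gradient-matching idea (fit parameters to data-implied derivatives instead of forward-solving the equation) can be sketched without Gaussian processes. This toy uses central differences on noise-free observations of dy/dt = -k*y and is only an illustration of the principle, not the GPODE/AGM machinery:

```python
import math

# Synthetic noise-free observations of dy/dt = -k*y with y(0) = 1, true k = 0.7.
k_true = 0.7
ts = [0.1 * i for i in range(50)]
ys = [math.exp(-k_true * t) for t in ts]

# Data-implied derivatives via central differences
# (a GP would smooth noisy data at this step).
dys = [(ys[i + 1] - ys[i - 1]) / (ts[i + 1] - ts[i - 1])
       for i in range(1, len(ts) - 1)]
y_mid = ys[1:-1]

# Gradient matching: choose k minimizing sum_i (dy_i + k*y_i)^2.
# Setting the derivative w.r.t. k to zero gives a closed form:
k_hat = -sum(d * y for d, y in zip(dys, y_mid)) / sum(y * y for y in y_mid)
```

Note that no forward ODE solve and no initial or boundary conditions were needed, which is the feature the abstract emphasizes for the Richards equation.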

4. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

Science.gov (United States)

Bhattacharjya, Rajib Kumar

2018-05-01

The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: the infiltration parameters are obtained in the first stage, and the unit hydrograph ordinates are estimated in the second. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem into an unconstrained one. The proposed approach is designed in such a way that the model first obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using genetic algorithms. A reduction factor is used in the penalty parameter approach so that the optimal infiltration parameters already obtained are not destroyed during the subsequent generations of the genetic algorithm required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior and simple in concept, and has potential for field application.
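A minimal sketch of the penalty idea, assuming a toy quadratic objective with one equality constraint and a crude stochastic search standing in for the genetic algorithm. The reduction factor here shrinks the search radius across generations so that good values found early are not destroyed later; the paper applies its reduction factor within the penalty scheme of its GA, so this is an analogue, not the authors' method:

```python
import random

random.seed(1)

def objective(x, y):                  # quantity to minimize
    return (x - 3.0) ** 2 + (y - 2.0) ** 2

def violation(x, y):                  # equality constraint x + y = 4
    return (x + y - 4.0) ** 2

MU = 100.0                            # penalty parameter

def penalized(x, y):
    # Constrained problem converted to an unconstrained one.
    return objective(x, y) + MU * violation(x, y)

def stochastic_search(n_outer=60, n_inner=200, reduction=0.9):
    x, y, radius = 0.0, 0.0, 2.0
    best = penalized(x, y)
    for _ in range(n_outer):
        for _ in range(n_inner):
            cx = x + random.uniform(-radius, radius)
            cy = y + random.uniform(-radius, radius)
            if penalized(cx, cy) < best:
                x, y, best = cx, cy, penalized(cx, cy)
        radius *= reduction           # reduction factor between "generations"
    return x, y

x_opt, y_opt = stochastic_search()
```

The analytic constrained optimum is (2.5, 1.5); with a finite penalty weight the unconstrained search lands slightly off the constraint surface.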

5. A variational approach to parameter estimation in ordinary differential equations

Directory of Open Access Journals (Sweden)

Kaschek Daniel

2012-08-01

Background: Ordinary differential equations are widely used in systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters such as rate constants, initial conditions or steady-state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. Results: The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. Conclusions: The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular, this implies that small motifs of large reaction networks can be analysed independently of the rest. Through the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.

7. A distributed approach for parameter estimation in Systems Biology models

International Nuclear Information System (INIS)

Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

2009-01-01

Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.

8. An approach of parameter estimation for non-synchronous systems

International Nuclear Information System (INIS)

Xu Daolin; Lu Fangfang

2005-01-01

Synchronization-based parameter estimation is simple and effective but is only applicable to synchronous systems. To overcome this limitation, we propose a technique by which the parameters of an unknown physical process (possibly a non-synchronous system) can be identified from a time series via a minimization procedure based on synchronization control. The feasibility of this approach is illustrated in several chaotic systems.

9. A parameter tree approach to estimating system sensitivities to parameter sets

International Nuclear Information System (INIS)

Jarzemba, M.S.; Sagar, B.

2000-01-01

A post-processing technique for determining the relative sensitivity of a system to groups of parameters and system components is presented. It is assumed that an appropriate parametric model is used to simulate system behavior using Monte Carlo techniques and that a set of realizations of the system output(s) is available. The objective of our technique is to analyze the input vectors and the corresponding output vectors (that is, to post-process the results) to estimate the relative sensitivity of the output to the input parameters (taken singly and as a group) and thereby rank them. This technique differs from design-of-experiments techniques in that no partitioning of the parameter space is required before the simulation. A tree structure (which looks similar to an event tree) is developed to better explain the technique. Each limb of the tree represents a particular combination of parameters or a combination of system components. For convenience, and to distinguish it from the event tree, we call it the parameter tree. To construct the parameter tree, the samples of input parameter values are treated as either a '+' or a '-' based on whether the sampled parameter value is greater than or less than a specified branching criterion (e.g., the mean, median, or a percentile of the population). The corresponding system outputs are also segregated into similar bins. Partitioning the first parameter into a '+' or a '-' bin creates the first level of the tree, containing two branches. At the next level, realizations associated with each first-level branch are further partitioned into two bins using the branching criterion on the second parameter, and so on until the tree is fully populated. Relative sensitivities are then inferred from the number of samples associated with each branch of the tree. The parameter tree approach is illustrated by applying it to a number of preliminary simulations of the proposed high-level radioactive waste repository at Yucca Mountain, NV. Using a
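The branching procedure described above can be sketched as follows; the two-parameter toy model, the median branching criterion, and all names are illustrative assumptions, not the authors' code:

```python
import random
import statistics

random.seed(0)

# Monte Carlo realizations of a toy model whose output is driven
# strongly by p1 and only weakly by p2.
n = 1000
p1 = [random.random() for _ in range(n)]
p2 = [random.random() for _ in range(n)]
output = [10.0 * a + 1.0 * b for a, b in zip(p1, p2)]

def parameter_tree(params, output):
    # One tree level per parameter: '+' if the sampled value exceeds
    # the branching criterion (here the median), '-' otherwise.
    criteria = [statistics.median(p) for p in params]
    leaves = {}
    for i, y in enumerate(output):
        branch = "".join("+" if p[i] > c else "-"
                         for p, c in zip(params, criteria))
        leaves.setdefault(branch, []).append(y)
    # Report sample count and mean output for each fully populated branch.
    return {b: (len(v), statistics.mean(v)) for b, v in leaves.items()}

tree = parameter_tree([p1, p2], output)
```

Comparing branch means across the first-level split (p1 high vs. low) with those across the second-level split shows p1 dominating the output, the kind of relative ranking the paper infers from the populated branches.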

10. Parameter Estimation of Structural Equation Modeling Using Bayesian Approach

Directory of Open Access Journals (Sweden)

Dewi Kurnia Sari

2016-05-01

Leadership is a process of influencing, directing or setting an example for employees in order to achieve the objectives of the organization, and is a key element in organizational effectiveness. In addition to leadership style, the success of an organization or company in achieving its objectives can also be influenced by organizational commitment, that is, the commitment made by each individual for the betterment of the organization. The purpose of this research is to obtain a model of the effect of leadership style and organizational commitment on job satisfaction and employee performance, and to determine the factors that influence job satisfaction and employee performance, using SEM with a Bayesian approach. This research was conducted among Statistics FNI employees in Malang, with 15 respondents. The results showed that, in the measurement model, all indicators significantly measure their respective latent variables. In the structural model, Leadership Style and Organizational Commitment have a significant direct effect on Job Satisfaction, and Job Satisfaction has a significant effect on Employee Performance. The direct influence of Leadership Style and Organizational Commitment on Employee Performance was found to be insignificant.

11. A coherent structure approach for parameter estimation in Lagrangian Data Assimilation

Science.gov (United States)

Maclean, John; Santitissadeekorn, Naratip; Jones, Christopher K. R. T.

2017-12-01

We introduce a data assimilation method to estimate model parameters from observations of passive tracers by directly assimilating Lagrangian coherent structures. Our approach differs from the usual Lagrangian data assimilation approach, in which parameters are estimated from tracer trajectories. We employ the Approximate Bayesian Computation (ABC) framework to avoid computing the likelihood function of the coherent structure, which is usually unavailable. We solve the ABC problem with a Sequential Monte Carlo (SMC) method and use Principal Component Analysis (PCA) to identify the coherent patterns from tracer trajectory data. Our new method shows remarkably improved results compared to the bootstrap particle filter when the physical model exhibits chaotic advection.
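A bare-bones ABC rejection sampler on a toy rate-estimation problem illustrates the likelihood-free idea; the paper itself uses ABC-SMC with coherent-structure summaries, which this sketch does not reproduce:

```python
import random

random.seed(0)

# Observed summary statistic from a hidden process with true rate 0.3.
true_p = 0.3
n_obs = 200
observed = sum(random.random() < true_p for _ in range(n_obs))

def simulate(p):
    # Forward-simulate the process; no likelihood is ever evaluated.
    return sum(random.random() < p for _ in range(n_obs))

def abc_rejection(n_draws=20000, tol=5):
    accepted = []
    for _ in range(n_draws):
        p = random.random()                      # uniform prior on [0, 1]
        if abs(simulate(p) - observed) <= tol:   # distance on summary statistics
            accepted.append(p)
    return accepted

posterior = abc_rejection()
estimate = sum(posterior) / len(posterior)
```

The accepted draws approximate the posterior of the rate; tightening `tol` sharpens the approximation at the cost of fewer acceptances, which is what SMC schemes manage adaptively.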

12. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions.

Science.gov (United States)

Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

2015-01-01

Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the bloodstream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, they must be estimated by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation in which modeling approaches of increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming to quantify the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of the estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely

13. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

International Nuclear Information System (INIS)

Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

2015-01-01

Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)

14. Photogrammetric Resection Approach Using Straight Line Features for Estimation of Cartosat-1 Platform Parameters

Directory of Open Access Journals (Sweden)

Nita H. Shah

2008-08-01

Classical calibration, or space resection, is a fundamental task in photogrammetry. Insufficient knowledge of the interior and exterior orientation parameters leads to unreliable results in the photogrammetric process. Several methods using lines are available that address the determination of the exterior orientation parameters, with no mention of the simultaneous determination of the interior orientation parameters. Normal space resection methods solve the problem using control points whose coordinates are known in both the image and object reference systems. The non-linearity of the model and the difficulty of point location in digital images are the main drawbacks of the classical approaches. The line-based approach overcomes these problems by increasing the number of observations that can be provided, which significantly improves the overall system redundancy. This paper addresses a mathematical model relating both the image and object reference systems for solving the space resection problem, which is generally used for updating the exterior orientation parameters. In order to solve for the dynamic camera calibration parameters, a sequential estimator (Kalman filtering) is applied iteratively to the image. For the dynamic case, e.g. an image sequence of moving objects, a state prediction and a covariance matrix for the next instant are obtained using the available estimates and the system model. Filtered state estimates can then be computed from these predicted estimates using the Kalman filtering approach and a basic physical sensor model for each instant of time. The proposed approach is tested with three real data sets, and the results suggest that highly accurate space resection parameters can be obtained with or without control points, with a progressive reduction in processing time.

15. Sensitivity of Hurst parameter estimation to periodic signals in time series and filtering approaches

Science.gov (United States)

Marković, D.; Koch, M.

2005-09-01

The influence of periodic signals in time series on the Hurst parameter estimate is investigated with temporal, spectral and time-scale methods. The Hurst parameter estimates of simulated periodic time series with a white-noise background show a high sensitivity to the signal-to-noise ratio and, for some methods, also to the data length used. The analysis is then extended to extreme monthly river flows of the Elbe River (Dresden) and the Rhine River (Kaub). The effects of removing the periodic components using different filtering approaches are discussed, and it is shown that such procedures are a prerequisite for an unbiased estimation of H. In summary, our results imply that the first step in a time-series long-range correlation study should be the separation of the deterministic components from the stochastic ones; otherwise, wrong conclusions concerning possible memory effects may be drawn.
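The bias that a periodic component introduces into a Hurst estimate can be sketched with the aggregated-variance method; the seasonal toy series, the phase-mean deseasonalizing filter, and all settings below are illustrative assumptions, not the paper's data or methods:

```python
import math
import random
import statistics

random.seed(0)

PERIOD, N = 12, 2400
# "Monthly" series: strong periodic signal on a white-noise background.
series = [10.0 * math.sin(2 * math.pi * i / PERIOD) + random.gauss(0.0, 1.0)
          for i in range(N)]

def deseasonalize(x, period):
    # Subtract the mean of each phase (e.g. each calendar month).
    phase_means = [statistics.mean(x[p::period]) for p in range(period)]
    return [v - phase_means[i % period] for i, v in enumerate(x)]

def hurst_aggvar(x, block_sizes=(2, 4, 8, 16, 32)):
    # Aggregated-variance method: Var(block means) ~ m**(2H - 2),
    # so H is read off the slope of a log-log regression.
    mu = statistics.mean(x)
    xs = [v - mu for v in x]
    pts = []
    for m in block_sizes:
        k = len(xs) // m
        means = [sum(xs[j * m:(j + 1) * m]) / m for j in range(k)]
        pts.append((math.log(m), math.log(sum(b * b for b in means) / k)))
    mx = statistics.mean(a for a, _ in pts)
    my = statistics.mean(b for _, b in pts)
    slope = (sum((a - mx) * (b - my) for a, b in pts)
             / sum((a - mx) ** 2 for a, _ in pts))
    return 1.0 + slope / 2.0

h_raw = hurst_aggvar(series)                              # biased by the sine
h_filtered = hurst_aggvar(deseasonalize(series, PERIOD))  # ~0.5 for white noise
```

The raw estimate is pulled far below the white-noise value H = 0.5 by the deterministic cycle, while the deseasonalized series recovers it, matching the paper's conclusion that filtering must precede estimation.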

16. A probabilistic approach for the estimation of earthquake source parameters from spectral inversion

Science.gov (United States)

Supino, M.; Festa, G.; Zollo, A.

2017-12-01

The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune's (1970) source model, and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and then explore the joint a-posteriori probability density function associated with the cost function around this minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining deterministic minimization with a random exploration of the space (the basin-hopping technique). The joint pdf is built from the misfit function using the maximum-likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to

17. A novel optimization approach to estimating kinetic parameters of the enzymatic hydrolysis of corn stover

Directory of Open Access Journals (Sweden)

Fenglei Qi

2016-01-01

Enzymatic hydrolysis is an integral step in the conversion of lignocellulosic biomass to ethanol. The conversion of cellulose to fermentable sugars in the presence of inhibitors is a complex kinetic problem. In this study, we describe a novel approach to estimating the kinetic parameters underlying this process. The study employs experimental data measuring substrate and enzyme loadings and sugar and acid inhibitions for the production of glucose. Multiple objectives minimizing the difference between model predictions and experimental observations are developed and optimized by adopting a multi-objective particle swarm optimization method. Model reliability is assessed by exploring the likelihood profile in each parameter space. Compared to previous studies, this approach improved the prediction of sugar yields by reducing the mean squared errors by 34% for glucose and 2.7% for cellobiose, indicating improved agreement between model predictions and the experimental data. Furthermore, kinetic parameters such as K2IG2, K1IG, K2IG, K1IA, and K3IA are identified as contributors to model non-identifiability and wide parameter confidence intervals. Model reliability analysis indicates possible ways to reduce model non-identifiability and tighten parameter confidence intervals. These results could help improve the design of lignocellulosic biorefineries by providing higher fidelity predictions of fermentable sugars under inhibitory conditions.

18. Bayesian hyper-parameters' approach to joint estimation: the Hubble constant from CMB measurements

Science.gov (United States)

Lahav, O.; Bridle, S. L.; Hobson, M. P.; Lasenby, A. N.; Sodré, L.

2000-07-01

Recently several studies have jointly analysed data from different cosmological probes with the motivation of estimating cosmological parameters. Here we generalize this procedure to allow freedom in the relative weights of the various probes. This is done by including in the joint χ² function a set of 'hyper-parameters', which are dealt with using Bayesian considerations. The resulting algorithm, which assumes uniform priors on the log of the hyper-parameters, is very simple: instead of minimizing Σ_j χ_j² (where χ_j² is the chi-squared per data set j), we propose to minimize Σ_j N_j ln(χ_j²) (where N_j is the number of data points per data set j). We illustrate the method by estimating the Hubble constant H0 from different sets of recent cosmic microwave background (CMB) experiments (including Saskatoon, Python V, MSAM1, TOCO and Boomerang). The approach can be generalized for combinations of cosmic probes, and for other priors on the hyper-parameters.
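The prescription in the abstract translates directly into code; the probe names and the χ² and N values below are invented for illustration:

```python
import math

# Hypothetical per-probe fit statistics: chi^2_j and number of data points N_j.
chi2 = {"probeA": 12.0, "probeB": 40.0}
npts = {"probeA": 10, "probeB": 20}

def joint_chi2(chi2):
    # Conventional joint objective: sum_j chi^2_j (equal weights per probe).
    return sum(chi2.values())

def hyperparameter_objective(chi2, npts):
    # Hyper-parameter-marginalized objective: sum_j N_j * ln(chi^2_j),
    # which effectively lets each data set set its own relative weight.
    return sum(npts[j] * math.log(chi2[j]) for j in chi2)

standard = joint_chi2(chi2)                      # 52.0
weighted = hyperparameter_objective(chi2, npts)
```

Because the log compresses large χ² values, a poorly fitting probe is automatically down-weighted relative to the conventional sum.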

19. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

Science.gov (United States)

Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

2017-12-01

The one-dimensional analytical runup theory, in combination with near-shore synthetic waveforms, is a promising tool for tsunami rapid early warning systems. Its application in realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplified bathymetric domains that resemble realistic near-shore features, and investigate the sensitivity of the analytical runup formulae to variation of the fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that the analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates is a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.

20. Estimating the mechanical competence parameter of the trabecular bone: a neural network approach

Directory of Open Access Journals (Sweden)

Érica Regina Filletti

Introduction: The mechanical competence parameter (MCP) of the trabecular bone merges the volume fraction, connectivity, tortuosity and Young's modulus of elasticity into a single measure of the structural quality of trabecular bone. Methods: Since the MCP is estimated from 3D images and the Young's modulus simulations are quite time-consuming, this paper discusses an alternative approach to estimating the MCP based on an artificial neural network (ANN), using as the training set a group of 23 in vitro vertebrae and 12 distal radius samples obtained by micro-computed tomography (μCT), and 83 in vivo distal radius magnetic resonance imaging (MRI) samples. Results: The ANN was able to predict the MCP with very high accuracy for 29 new samples: 6 vertebrae and 3 distal radius bones imaged by μCT, and 20 distal radius bones imaged by MRI. Conclusion: There is a strong correlation (R² = 0.97) between the two techniques and, despite the small number of test samples, Bland-Altman analysis shows that the ANN is within the limits of agreement for estimating the MCP.

1. A practical approach to parameter estimation applied to model predicting heart rate regulation

DEFF Research Database (Denmark)

Olufsen, Mette; Ottesen, Johnny T.

2013-01-01

Mathematical models have long been used for prediction of dynamics in biological systems. Recently, several efforts have been made to render these models patient specific. One way to do so is to employ techniques to estimate parameters that enable model-based prediction of observed quantities. Knowledge of variation in parameters within and between groups of subjects has the potential to provide insight into biological function. Often it is not possible to estimate all parameters in a given model, in particular if the model is complex and the data are sparse. However, it may be possible to estimate a subset of model parameters, reducing the complexity of the problem. In this study, we compare three methods that allow identification of parameter subsets that can be estimated given a model and a set of data. These methods will be used to estimate patient-specific parameters in a model predicting heart rate regulation.

2. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

Science.gov (United States)

Heidari, M.; Ranjithan, S.R.

1998-01-01

In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluating the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that a single piece of prior information on the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and on the initial values used in the truncated-Newton search technique.

3. An estimation of crude oil import demand in Turkey: Evidence from time-varying parameters approach

International Nuclear Information System (INIS)

Ozturk, Ilhan; Arisoy, Ibrahim

2016-01-01

The aim of this study is to model crude oil import demand and estimate the price and income elasticities of imported crude oil in Turkey, based on a time-varying parameters (TVP) approach intended to yield accurate and more robust elasticity estimates. The study employs annual time series data of domestic oil consumption, real GDP, and oil price for the period 1966–2012. The empirical results indicate that both the income and price elasticities are in line with theoretical expectations. However, the income elasticity is statistically significant while the price elasticity is statistically insignificant. The relatively high value of the income elasticity (1.182) suggests that crude oil import in Turkey is more responsive to changes in income level. This result indicates that imported crude oil is a normal good and that rising income levels will foster higher consumption of oil-based equipment, vehicles and services by economic agents. The estimated income elasticity of 1.182 suggests that imported crude oil consumption grows at a higher rate than income, which in turn reduces oil intensity over time. Therefore, crude oil import during the estimation period is substantially driven by income. - Highlights: • We estimated the price and income elasticities of imported crude oil in Turkey. • Income elasticity is statistically significant and it is 1.182. • The price elasticity is statistically insignificant. • Crude oil import in Turkey is more responsive to changes in income level. • Crude oil import during the estimation period is substantially driven by income.
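
A common way to implement a TVP regression like the one in this record is a Kalman filter with a random-walk coefficient. The sketch below uses synthetic series (the drifting elasticity, noise levels and variances are all invented; this is not the authors' specification or their Turkish data):

```python
import numpy as np

rng = np.random.default_rng(7)
T = 300
# synthetic log-income series and a slowly drifting income elasticity
log_income = np.cumsum(0.02 + 0.05 * rng.standard_normal(T))
beta_true = 1.2 + 0.3 * np.sin(np.linspace(0.0, 3.0, T))
log_imports = beta_true * log_income + 0.1 * rng.standard_normal(T)

# TVP model: beta_t = beta_{t-1} + w_t (random walk), tracked by a Kalman filter
beta, P = 0.0, 10.0          # diffuse initial guess
Q, R = 1e-4, 0.01            # state and observation noise variances (assumed known)
path = []
for t in range(T):
    P = P + Q                                 # predict
    H = log_income[t]
    K = P * H / (H * H * P + R)               # Kalman gain
    beta = beta + K * (log_imports[t] - H * beta)
    P = (1.0 - K * H) * P
    path.append(beta)
```

The filter produces a whole path of elasticity estimates rather than a single fixed coefficient, which is the practical advantage of the TVP approach.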

4. Estimating distribution parameters of annual maximum streamflows in Johor, Malaysia using TL-moments approach

Science.gov (United States)

Mat Jan, Nur Amalina; Shabri, Ani

2017-01-01

TL-moments approach has been used in an analysis to identify the best-fitting distributions to represent the annual series of maximum streamflow data over seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1,0), t1 = 1, 2, 3, 4 methods for the LN3 and P3 distributions. The performance of TL-moments (t1,0), t1 = 1, 2, 3, 4 was compared with L-moments through Monte Carlo simulation and streamflow data over a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. From the cases in this study, the results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments [4,0]) of the LN3 distribution was the most appropriate in most of the stations of the annual maximum streamflow series in Johor, Malaysia.

5. Parameter Estimation and Prediction of a Nonlinear Storage Model: an algebraic approach

NARCIS (Netherlands)

Doeswijk, T.G.; Keesman, K.J.

2005-01-01

Generally, parameters that are nonlinear in system models are estimated by nonlinear least-squares optimization algorithms. In this paper, for a nonlinear discrete-time model with a polynomial quotient structure in input, output, and parameters, a method is proposed to re-parameterize the model such

6. A Machine Learning Approach to Estimate Riverbank Geotechnical Parameters from Sediment Particle Size Data

Science.gov (United States)

Iwashita, Fabio; Brooks, Andrew; Spencer, John; Borombovits, Daniel; Curwen, Graeme; Olley, Jon

2015-04-01

Assessing bank stability using geotechnical models traditionally involves the laborious collection of data on the bank and floodplain stratigraphy, as well as in-situ geotechnical data for each sedimentary unit within a river bank. The application of geotechnical bank stability models is therefore limited to those sites where extensive field data have been collected, and their ability to provide predictions of bank erosion at the reach scale is constrained without a very extensive and expensive field data collection program. Some challenges in the construction and application of riverbank erosion and hydraulic numerical models are their one-dimensionality, steady-state requirements, lack of calibration data, and nonuniqueness. Also, numerical models can be too rigid with respect to detecting unexpected features like the onset of trends, non-linear relations, or patterns restricted to sub-samples of a data set. These shortcomings create the need for an alternative modelling approach capable of using available data. The Self-Organizing Maps (SOM) approach is well-suited to the analysis of noisy, sparse, nonlinear, multidimensional, and scale-dependent data. It is a type of unsupervised artificial neural network with hybrid competitive-cooperative learning. In this work we present a method that uses a database of geotechnical data collected at over 100 sites throughout Queensland State, Australia, to develop a modelling approach that enables geotechnical parameters (soil effective cohesion, friction angle, soil erodibility and critical stress) to be derived from sediment particle size data (PSD). The model framework and predicted values were evaluated using two methods: splitting the dataset into training and validation sets, and a Bootstrap approach. The basis of Bootstrap cross-validation is a leave-one-out strategy. This requires leaving one data value out of the training set while creating a new SOM to estimate that missing value based on the

7. Exploring Alternative Characteristic Curve Approaches to Linking Parameter Estimates from the Generalized Partial Credit Model.

Science.gov (United States)

Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill

Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…

8. Stochastic estimation approach for the evaluation of thermal-hydraulic parameters in pressurized water reactors

International Nuclear Information System (INIS)

1986-01-01

A method based on the extended Kalman filter is developed for the estimation of the core coolant mass flow rate in pressurized water reactors. The need for flow calibration can be avoided by a direct estimation of this parameter. A reduced-order neutronic and thermal-hydraulic model is developed for the Loss-of-Fluid Test (LOFT) reactor. The neutron detector and core-exit coolant temperature signals from the LOFT reactor are used as measurements in the parameter estimation algorithm. The estimation sensitivity to model uncertainties was evaluated using the ambiguity function analysis. This also provides a lower bound on the measurement sample size necessary to achieve a certain estimation accuracy. A sequential technique was developed to minimize the computational effort needed to discretize the continuous time equations, and thus achieve faster convergence to the true parameter value. The performance of the stochastic approximation method was first evaluated using simulated random data, and then applied to the estimation of coolant flow rate using the operational data from the LOFT reactor at 100 and 65% flow rate conditions

9. A continuous wavelet transform approach for harmonic parameters estimation in the presence of impulsive noise

Science.gov (United States)

Dai, Yu; Xue, Yuan; Zhang, Jianxun

2016-01-01

Impulsive noise caused by some random events has the main character of short rise-time and wide frequency spectrum range, so it has the potential to degrade the performance and reliability of the harmonic estimation. This paper focuses on the harmonic estimation procedure based on continuous wavelet transform (CWT) when the analyzed signal is corrupted by the impulsive noise. The digital CWT of both the time-varying sinusoidal signal and the impulsive noise are analyzed, and there are two cross ridges in the time-frequency plane of CWT, which are generated by the signal and the noise separately. In consideration of the amplitude of the noise and the number of the spike event, two inequalities are derived to provide limitations on the wavelet parameters. Based on the amplitude distribution of the noise, the optimal wavelet parameters determined by solving these inequalities are used to suppress the contamination of the noise, as well as increase the amplitude of the ridge corresponding to the signal, so the parameters of each harmonic component can be estimated accurately. The proposed procedure is applied to a numerical simulation and a bone vibration signal test giving satisfactory results of stationary and time-varying harmonic parameter estimation.
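
The robustness idea in this record can be sketched without reproducing the paper's wavelet-parameter optimization: below is a hand-rolled Morlet CWT over candidate frequencies, with a median-over-time ridge statistic that is insensitive to a few large spikes. All signal and wavelet parameters here are illustrative, not taken from the paper.

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(6)
signal = np.sin(2 * np.pi * 50.0 * t)            # 50 Hz harmonic of interest
impulses = np.zeros_like(signal)
impulses[rng.choice(signal.size, 5, replace=False)] = 8.0  # sparse large spikes
x = signal + impulses

omega0 = 6.0                                      # Morlet centre frequency
freqs = np.arange(20.0, 101.0, 2.0)
ridge = []
for f in freqs:
    s = omega0 * fs / (2 * np.pi * f)             # scale matching frequency f
    n = int(5 * s)
    tt = np.arange(-n, n + 1) / s
    wavelet = np.exp(1j * omega0 * tt) * np.exp(-tt**2 / 2) / np.sqrt(s)
    coef = np.convolve(x, wavelet, mode="same")
    # the median over time ignores the few spike-dominated samples
    ridge.append(np.median(np.abs(coef)))
f_est = freqs[int(np.argmax(ridge))]
```

The harmonic contributes a near-constant coefficient magnitude at its matching scale, while each impulse only affects a localized patch of the time-frequency plane, so the median ridge still peaks at 50 Hz.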

10. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

Science.gov (United States)

Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

2012-01-01

An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

11. Optomechanical parameter estimation

International Nuclear Information System (INIS)

Ang, Shan Zheng; Tsang, Mankei; Harris, Glen I; Bowen, Warwick P

2013-01-01

We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér–Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation–maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér–Rao bound most closely. Our analytic results are envisioned to be valuable to optomechanical experiment design, while the EM algorithm, with its ability to estimate most of the system parameters, is envisioned to be useful for optomechanical sensing, atomic magnetometry and fundamental tests of quantum mechanics. (paper)
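
The comparison against the Cramér-Rao bound in this record can be illustrated on a far simpler noise-power problem than the optomechanical one: for zero-mean Gaussian data the bound on the variance of a noise-power estimate is 2*sigma^4/N, and the maximum-likelihood estimator attains it. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma2_true, N, trials = 4.0, 1000, 2000

# Cramer-Rao lower bound for the noise power of zero-mean Gaussian data
crlb = 2.0 * sigma2_true**2 / N

# maximum-likelihood estimate of the noise power, repeated over many trials
estimates = np.array([
    np.mean(rng.normal(0.0, np.sqrt(sigma2_true), N) ** 2)
    for _ in range(trials)
])
ratio = estimates.var() / crlb   # near 1 for an efficient estimator
```

An estimator whose empirical variance sits close to the bound, as the EM estimator does in the record above, is making essentially full use of the data.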

12. Combined Yamamoto approach for simultaneous estimation of adsorption isotherm and kinetic parameters in ion-exchange chromatography.

Science.gov (United States)

Rüdt, Matthias; Gillet, Florian; Heege, Stefanie; Hitzler, Julian; Kalbfuss, Bernd; Guélat, Bertrand

2015-09-25

Application of model-based design is appealing to support the development of protein chromatography in the biopharmaceutical industry. However, the required efforts for parameter estimation are frequently perceived as time-consuming and expensive. In order to speed-up this work, a new parameter estimation approach for modelling ion-exchange chromatography in linear conditions was developed. It aims at reducing the time and protein demand for the model calibration. The method combines the estimation of kinetic and thermodynamic parameters based on the simultaneous variation of the gradient slope and the residence time in a set of five linear gradient elutions. The parameters are estimated from a Yamamoto plot and a gradient-adjusted Van Deemter plot. The combined approach increases the information extracted per experiment compared to the individual methods. As a proof of concept, the combined approach was successfully applied for a monoclonal antibody on a cation-exchanger and for a Fc-fusion protein on an anion-exchange resin. The individual parameter estimations for the mAb confirmed that the new approach maintained the accuracy of the usual Yamamoto and Van Deemter plots. In the second case, offline size-exclusion chromatography was performed in order to estimate the thermodynamic parameters of an impurity (high molecular weight species) simultaneously with the main product. Finally, the parameters obtained from the combined approach were used in a lumped kinetic model to simulate the chromatography runs. The simulated chromatograms obtained for a wide range of gradient lengths and residence times showed only small deviations compared to the experimental data. Copyright © 2015 Elsevier B.V. All rights reserved.

13. Estimation of anisotropy parameters in organic-rich shale: Rock physics forward modeling approach

Energy Technology Data Exchange (ETDEWEB)

Herawati, Ida, E-mail: ida.herawati@students.itb.ac.id; Winardhi, Sonny; Priyono, Awali [Mining and Petroleum Engineering Faculty, Institut Teknologi Bandung, Bandung, 40132 (Indonesia)

2015-09-30

Anisotropy analysis is an important step in the processing and interpretation of seismic data. One of the most important tasks in anisotropy analysis is anisotropy parameter estimation, which can be performed using well data, core data or seismic data. In seismic data, anisotropy parameter calculation is generally based on velocity moveout analysis. However, the accuracy depends on data quality, available offset, and velocity moveout picking. Anisotropy estimation using seismic data is needed to obtain wide coverage of a particular layer's anisotropy. In an anisotropic reservoir, analysis of anisotropy parameters also helps us to better understand the reservoir characteristics. Anisotropy parameters, especially ε, are related to rock property and lithology determination. The current research aims to estimate anisotropy parameters from seismic data integrated with well data, with a case study in a potential shale gas reservoir. Due to the complexity of organic-rich shale reservoirs, extensive study from different disciplines is needed to understand them. Shale itself has intrinsic anisotropy caused by the lamination of its constituent minerals. In order to link rock physics with seismic response, it is necessary to build a forward model of organic-rich shale. This paper focuses on studying the relationship between reservoir properties such as clay content, porosity and total organic content and anisotropy. Organic content, which defines the prospectivity of shale gas, can be considered as solid background or solid inclusion or both. From the forward modeling results, it is shown that the presence of organic matter increases anisotropy in shale. The relationships between total organic content and other seismic properties such as acoustic impedance and Vp/Vs are also presented.

14. Estimation of anisotropy parameters in organic-rich shale: Rock physics forward modeling approach

International Nuclear Information System (INIS)

Herawati, Ida; Winardhi, Sonny; Priyono, Awali

2015-01-01

Anisotropy analysis is an important step in the processing and interpretation of seismic data. One of the most important tasks in anisotropy analysis is anisotropy parameter estimation, which can be performed using well data, core data or seismic data. In seismic data, anisotropy parameter calculation is generally based on velocity moveout analysis. However, the accuracy depends on data quality, available offset, and velocity moveout picking. Anisotropy estimation using seismic data is needed to obtain wide coverage of a particular layer's anisotropy. In an anisotropic reservoir, analysis of anisotropy parameters also helps us to better understand the reservoir characteristics. Anisotropy parameters, especially ε, are related to rock property and lithology determination. The current research aims to estimate anisotropy parameters from seismic data integrated with well data, with a case study in a potential shale gas reservoir. Due to the complexity of organic-rich shale reservoirs, extensive study from different disciplines is needed to understand them. Shale itself has intrinsic anisotropy caused by the lamination of its constituent minerals. In order to link rock physics with seismic response, it is necessary to build a forward model of organic-rich shale. This paper focuses on studying the relationship between reservoir properties such as clay content, porosity and total organic content and anisotropy. Organic content, which defines the prospectivity of shale gas, can be considered as solid background or solid inclusion or both. From the forward modeling results, it is shown that the presence of organic matter increases anisotropy in shale. The relationships between total organic content and other seismic properties such as acoustic impedance and Vp/Vs are also presented.

15. A study of two estimation approaches for parameters of Weibull distribution based on WPP

International Nuclear Information System (INIS)

Zhang, L.F.; Xie, M.; Tang, L.C.

2007-01-01

Least-squares estimation (LSE) based on Weibull probability plot (WPP) is the most basic method for estimating the Weibull parameters. The common procedure of this method is using the least-squares regression of Y on X, i.e. minimizing the sum of squares of the vertical residuals, to fit a straight line to the data points on WPP and then calculate the LS estimators. This method is known to be biased. In the existing literature the least-squares regression of X on Y, i.e. minimizing the sum of squares of the horizontal residuals, has been used by the Weibull researchers. This motivated us to carry out this comparison between the estimators of the two LS regression methods using intensive Monte Carlo simulations. Both complete and censored data are examined. Surprisingly, the result shows that LS Y on X performs better for small, complete samples, while the LS X on Y performs better in other cases in view of bias of the estimators. The two methods are also compared in terms of other model statistics. In general, when the shape parameter is less than one, LS Y on X provides a better model; otherwise, LS X on Y tends to be better
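
The two regressions compared in this record are easy to reproduce. The sketch below builds a Weibull probability plot with median-rank plotting positions (one common convention; the paper's exact plotting positions may differ) and extracts the shape and scale from both the Y-on-X and X-on-Y least-squares lines:

```python
import numpy as np

rng = np.random.default_rng(0)
shape_true, scale_true = 2.0, 10.0
x = np.sort(scale_true * rng.weibull(shape_true, 200))

# Weibull probability plot coordinates with median-rank plotting positions
i = np.arange(1, x.size + 1)
F = (i - 0.3) / (x.size + 0.4)
X = np.log(x)
Y = np.log(-np.log(1.0 - F))      # linear in X: Y = shape * (X - ln(scale))

# LS regression of Y on X (vertical residuals)
b_yx, a_yx = np.polyfit(X, Y, 1)
shape_yx, scale_yx = b_yx, np.exp(-a_yx / b_yx)

# LS regression of X on Y (horizontal residuals)
b_xy, a_xy = np.polyfit(Y, X, 1)
shape_xy, scale_xy = 1.0 / b_xy, np.exp(a_xy)
```

Running both estimators over many simulated samples, as the authors do, is what reveals the bias differences summarized in the abstract.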

16. Estimation of the value-at-risk parameter: Econometric analysis and the extreme value theory approach

Directory of Open Access Journals (Sweden)

2006-01-01

In this paper different aspects of value-at-risk estimation are considered. Daily returns of CISCO, INTEL and NASDAQ stock indices are analyzed for the period September 1996 - September 2006. Methods that incorporate the time-varying variability and heavy tails of the empirical distributions of returns are implemented. The main finding of the paper is that standard econometric methods underestimate the value-at-risk parameter if the heavy tails of the empirical distribution are not explicitly taken into account.
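
The paper's central point, that Gaussian assumptions understate tail risk, can be reproduced on synthetic heavy-tailed returns (Student-t draws standing in for the stock data; the level and sample size are illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
# heavy-tailed synthetic daily returns (Student-t, 3 degrees of freedom)
returns = 0.01 * rng.standard_t(df=3, size=100_000)

alpha = 0.001   # 99.9% confidence level, where the tails matter most
# variance-covariance VaR: assumes normally distributed returns
var_normal = -(returns.mean() + returns.std() * norm.ppf(alpha))
# historical-simulation VaR: empirical quantile, keeps the heavy tails
var_hist = -np.quantile(returns, alpha)
```

With genuinely heavy tails the empirical quantile sits well beyond the Gaussian one, which is exactly the underestimation the abstract warns about.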

17. Ranking as parameter estimation

Czech Academy of Sciences Publication Activity Database

Kárný, Miroslav; Guy, Tatiana Valentine

2009-01-01

Roč. 4, č. 2 (2009), s. 142-158 ISSN 1745-7645 R&D Projects: GA MŠk 2C06001; GA AV ČR 1ET100750401; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : ranking * Bayesian estimation * negotiation * modelling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/AS/karny- ranking as parameter estimation.pdf

18. An Inequality Constrained Least-Squares Approach as an Alternative Estimation Procedure for Atmospheric Parameters from VLBI Observations

Science.gov (United States)

Halsig, Sebastian; Artz, Thomas; Iddink, Andreas; Nothnagel, Axel

2016-12-01

On its way through the atmosphere, radio signals are delayed and affected by bending and attenuation effects relative to a theoretical path in vacuum. In particular, the neutral part of the atmosphere contributes considerably to the error budget of space-geodetic observations. At the same time, space-geodetic techniques become more and more important in the understanding of the Earth's atmosphere, because atmospheric parameters can be linked to the water vapor content in the atmosphere. The tropospheric delay is usually taken into account by applying an adequate model for the hydrostatic component and by additionally estimating zenith wet delays for the highly variable wet component. Sometimes, the Ordinary Least Squares (OLS) approach leads to negative estimates, which would be equivalent to negative water vapor in the atmosphere and does, of course, not reflect meteorological and physical conditions in a plausible way. To cope with this phenomenon, we introduce an Inequality Constrained Least Squares (ICLS) method from the field of convex optimization and use inequality constraints to force the tropospheric parameters to be non-negative allowing for a more realistic tropospheric parameter estimation in a meteorological sense. Because deficiencies in the a priori hydrostatic modeling are almost fully compensated by the tropospheric estimates, the ICLS approach urgently requires suitable a priori hydrostatic delays. In this paper, we briefly describe the ICLS method and validate its impact with regard to station positions.
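
The OLS-versus-ICLS contrast in this record maps directly onto bounded linear least squares. The toy sketch below is purely illustrative (the design matrix and zenith wet delay values are invented, not a real VLBI geometry); the constrained solve uses `scipy.optimize.lsq_linear` with a non-negativity bound:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(2)
# toy design matrix: wet-delay partials for 4 ZWD parameters (purely illustrative)
A = rng.uniform(1.0, 10.0, size=(40, 4))
zwd_true = np.array([0.05, 0.0, 0.12, 0.08])      # metres; one dry epoch at zero
y = A @ zwd_true + 0.01 * rng.standard_normal(40)

# ordinary least squares: may produce negative (non-physical) wet delays
zwd_ols, *_ = np.linalg.lstsq(A, y, rcond=None)
# ICLS: the same problem with the inequality constraint ZWD >= 0
zwd_icls = lsq_linear(A, y, bounds=(0.0, np.inf)).x
```

When a true parameter sits at zero, noise pushes the OLS estimate negative about half the time; the constrained solution clips it to the physically meaningful region, which is the behaviour the abstract motivates.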

19. Migration of antioxidants from polylactic acid films: A parameter estimation approach and an overview of the current mass transfer models.

Science.gov (United States)

Samsudin, Hayati; Auras, Rafael; Mishra, Dharmendra; Dolan, Kirk; Burgess, Gary; Rubino, Maria; Selke, Susan; Soto-Valdez, Herlinda

2018-01-01

Migration studies of chemicals from contact materials have been widely conducted due to their importance in determining the safety and shelf life of a food product in its package. The US Food and Drug Administration (FDA) and the European Food Safety Authority (EFSA) require this safety assessment for food contact materials. Migration experiments are therefore theoretically designed and experimentally conducted to obtain data that can be used to assess the kinetics of chemical release. In this work, a parameter estimation approach was used to review and to determine the mass transfer partition and diffusion coefficients governing the migration of eight antioxidants from poly(lactic acid), PLA, based films into water/ethanol solutions at temperatures between 20 and 50 °C. Scaled sensitivity coefficients were calculated to assess the simultaneous estimation of a number of mass transfer parameters. An optimal experimental design approach was used to show the importance of properly designing a migration experiment. The additional parameters also provide better insights into the migration of the antioxidants. For example, the partition coefficients could be better estimated using data from the early part of the experiment instead of at the end, and experiments could be conducted for shorter periods of time, saving time and resources. Diffusion coefficients of the eight antioxidants from PLA films were between 0.2 and 19 × 10⁻¹⁴ m²/s at ~40 °C. The parameter estimation approach provided additional and useful insights about the migration of antioxidants from PLA films. Copyright © 2017 Elsevier Ltd. All rights reserved.
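
The core estimation step, fitting a diffusion coefficient to migrated-fraction data, can be sketched with Crank's plane-sheet series. The film thickness, sampling schedule and noise level below are hypothetical, and only D is estimated (the papers also estimate partition and convective coefficients):

```python
import numpy as np
from scipy.optimize import curve_fit

L = 50e-6   # film thickness in m (hypothetical, both faces exposed)

def migrated_fraction(t, D, n_terms=50):
    # Crank's plane-sheet series: M_t / M_inf for migration into a stirred bath
    n = np.arange(n_terms)[:, None]
    q = (2 * n + 1) ** 2
    series = (8.0 / (q * np.pi**2)) * np.exp(-q * np.pi**2 * D * t / L**2)
    return 1.0 - series.sum(axis=0)

rng = np.random.default_rng(8)
D_true = 5e-14                       # m^2/s, within the range reported above
t = np.linspace(0.0, 4 * 3600.0, 40)
y = migrated_fraction(t, D_true) + 0.01 * rng.standard_normal(t.size)

(D_fit,), _ = curve_fit(migrated_fraction, t, y, p0=[1e-14])
```

Because the early part of the curve carries most of the information about D, sampling density at short times matters more than long plateaus, consistent with the experimental-design point in the abstract.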

20. Migration of antioxidants from polylactic acid films, a parameter estimation approach: Part I - A model including convective mass transfer coefficient.

Science.gov (United States)

Samsudin, Hayati; Auras, Rafael; Burgess, Gary; Dolan, Kirk; Soto-Valdez, Herlinda

2018-03-01

A two-step solution based on the boundary conditions of Crank's equations for mass transfer in a film was developed. Three driving factors, the diffusion (D), partition (K_p,f) and convective mass transfer (h) coefficients, govern the sorption and/or desorption kinetics of migrants from polymer films. These three parameters were estimated simultaneously and provide in-depth insight into the physics of the migration process. The first step was used to find the combination of D, K_p,f and h that minimized the sum of squared errors (SSE) between the predicted and actual results. In step 2, an ordinary least squares (OLS) estimation was performed using the proposed analytical solution containing D, K_p,f and h. Three selected migration studies of PLA/antioxidant-based films were used to demonstrate the use of this two-step solution. Additional parameter estimation approaches, such as sequential and bootstrap estimation, were also performed to acquire better knowledge about the kinetics of migration. The proposed model successfully provided the initial guesses for D, K_p,f and h. The h value was determined without performing a specific experiment for it. By determining h together with D, under- or overestimation issues pertaining to a migration process can be avoided, since these two parameters are correlated. Copyright © 2017 Elsevier Ltd. All rights reserved.

1. A New Approach to Estimate Forest Parameters Using Dual-Baseline Pol-InSAR Data

Science.gov (United States)

Bai, L.; Hong, W.; Cao, F.; Zhou, Y.

2009-04-01

In POL-InSAR applications using the ESPRIT technique, it is assumed that there exist stable scattering centres in the forest. However, observations of forest severely suffer from volume and temporal decorrelation, so the forest scatterers are not as stable as assumed and the obtained interferometric information is not as accurate as expected. Besides, the ESPRIT technique cannot identify which interferometric phases correspond to the ground and the canopy, and it provides multiple estimates of the height between two scattering centres due to phase unwrapping. Therefore, estimation errors are introduced into the forest height results. To suppress the two types of errors, we use dual-baseline POL-InSAR data to estimate forest height. Dual-baseline coherence optimization is applied to obtain interferometric information of stable scattering centres in the forest. From the interferometric phases for different baselines, estimation errors caused by phase unwrapping are resolved, and other estimation errors can be suppressed as well. Experiments are performed on E-SAR L-band POL-InSAR data. Experimental results show that the proposed method provides more accurate forest height than the ESPRIT technique.

2. An extended Kalman filter approach to non-stationary Bayesian estimation of reduced-order vocal fold model parameters.

Science.gov (United States)

Hadwin, Paul J; Peterson, Sean D

2017-04-01

The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique at dramatically lower computational cost. Based upon the test cases explored, the EKF is comparable in terms of accuracy to the particle filter technique when greater than 6000 particles are employed; if fewer particles are employed, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by 2 orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.
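
The joint state-and-parameter EKF idea behind this record can be sketched on a surrogate far simpler than a vocal fold model: a first-order system dx/dt = -theta*x + u with the unknown decay rate theta appended to the state. All dynamics, noise levels and forcing below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, theta_true, T = 0.01, 2.0, 1000

# surrogate plant: dx/dt = -theta*x + u, with an on/off forcing for excitation
u = 2.0 * (np.sin(0.05 * np.arange(T)) > 0)
x = np.empty(T)
x[0] = 0.0
for k in range(T - 1):
    x[k + 1] = x[k] + dt * (-theta_true * x[k] + u[k])
y = x + 0.03 * rng.standard_normal(T)

# EKF on the augmented state z = [x, theta]
z = np.array([0.0, 0.5])                 # deliberately poor initial theta
P = np.diag([0.1, 1.0])
Q = np.diag([1e-6, 1e-6])
R = 0.03 ** 2
H = np.array([[1.0, 0.0]])
for k in range(T):
    # measurement update
    S = (H @ P @ H.T).item() + R
    K = (P @ H.T) / S
    z = z + K[:, 0] * (y[k] - z[0])
    P = (np.eye(2) - K @ H) @ P
    # time update through the dynamics, linearised at the current estimate
    F = np.array([[1.0 - dt * z[1], -dt * z[0]],
                  [0.0, 1.0]])
    z = np.array([z[0] + dt * (-z[1] * z[0] + u[k]), z[1]])
    P = F @ P @ F.T + Q

theta_hat = z[1]
```

The parameter is estimated through its cross-covariance with the observed state, at the cost of one small matrix update per sample, which is why the EKF is so much cheaper than propagating thousands of particles.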

3. Markov chain Monte Carlo approach to parameter estimation in the FitzHugh-Nagumo model

DEFF Research Database (Denmark)

Jensen, Anders Christian; Ditlevsen, Susanne; Kessler, Mathieu

2012-01-01

Excitability is observed in a variety of natural systems, such as neuronal dynamics, cardiovascular tissues, or climate dynamics. The stochastic FitzHugh-Nagumo model is a prominent example representing an excitable system. To validate the practical use of a model, the first step is to estimate...

4. ESTIMATION OF PARAMETERS AND RELIABILITY FUNCTION OF EXPONENTIATED EXPONENTIAL DISTRIBUTION: BAYESIAN APPROACH UNDER GENERAL ENTROPY LOSS FUNCTION

Directory of Open Access Journals (Sweden)

Sanjay Kumar Singh

2011-06-01

In this paper we propose Bayes estimators of the parameters of the Exponentiated Exponential distribution and its reliability function under the General Entropy loss function for Type II censored samples. The proposed estimators have been compared with the corresponding Bayes estimators obtained under the Squared Error loss function, and with maximum likelihood estimators, in terms of their simulated risks (average loss over the sample space).
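
Given posterior draws, the General Entropy (GE) loss Bayes estimator has the closed form (E[theta^-c])^(-1/c), which reduces to the posterior mean (the Squared Error estimator) at c = -1. The sketch below uses an invented gamma "posterior" as a stand-in for the paper's Exponentiated Exponential posterior:

```python
import numpy as np

rng = np.random.default_rng(4)
# stand-in posterior draws for a positive parameter (e.g. from an MCMC run)
theta = rng.gamma(shape=5.0, scale=0.4, size=200_000)   # posterior mean 2.0

def bayes_general_entropy(draws, c):
    # Bayes estimator under general entropy loss: (E[theta^-c])^(-1/c)
    return np.mean(draws ** (-c)) ** (-1.0 / c)

delta_sel = bayes_general_entropy(theta, -1.0)  # c = -1 recovers the posterior mean
delta_ge = bayes_general_entropy(theta, 1.0)    # c > 0 penalises overestimation more
```

Positive c shades the estimate below the posterior mean, which is the asymmetry that distinguishes GE loss from Squared Error loss in the comparison above.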

5. Improved Estimates of Thermodynamic Parameters

Science.gov (United States)

Lawson, D. D.

1982-01-01

Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.

6. Inflation and cosmological parameter estimation

Energy Technology Data Exchange (ETDEWEB)

Hamann, J.

2007-05-15

In this work, we focus on two aspects of cosmological data analysis: inference of parameter values and the search for new effects in the inflationary sector. Constraints on cosmological parameters are commonly derived under the assumption of a minimal model. We point out that this procedure systematically underestimates errors and possibly biases estimates, due to overly restrictive assumptions. In a more conservative approach, we analyse cosmological data using a more general eleven-parameter model. We find that regions of the parameter space that were previously thought ruled out are still compatible with the data; the bounds on individual parameters are relaxed by up to a factor of two, compared to the results for the minimal six-parameter model. Moreover, we analyse a class of inflation models, in which the slow roll conditions are briefly violated, due to a step in the potential. We show that the presence of a step generically leads to an oscillating spectrum and perform a fit to CMB and galaxy clustering data. We do not find conclusive evidence for a step in the potential and derive strong bounds on quantities that parameterise the step. (orig.)

7. Estimating Risk Parameters

OpenAIRE

Aswath Damodaran

1999-01-01

Over the last three decades, the capital asset pricing model has occupied a central and often controversial place in most corporate finance analysts’ tool chests. The model requires three inputs to compute expected returns – a riskfree rate, a beta for an asset and an expected risk premium for the market portfolio (over and above the riskfree rate). Betas are estimated, by most practitioners, by regressing returns on an asset against a stock index, with the slope of the regression being the b...

8. Adaptive approach for on-board impedance parameters and voltage estimation of lithium-ion batteries in electric vehicles

Science.gov (United States)

Farmann, Alexander; Waag, Wladislaw; Sauer, Dirk Uwe

2015-12-01

Robust algorithms using reduced-order equivalent circuit models (ECMs) for accurate and reliable estimation of battery states in various applications are becoming more popular. In this study, a novel adaptive, self-learning heuristic algorithm for on-board impedance parameter and voltage estimation of lithium-ion batteries (LIBs) in electric vehicles is introduced. The presented approach is verified using LIBs with different chemistries (NMC/C, NMC/LTO, LFP/C) at different aging states. An impedance-based reduced-order ECM incorporating an ohmic resistance and a combination of a constant phase element and a resistance (a so-called ZARC element) is employed. Existing algorithms in vehicles are much more limited in the complexity of the ECMs. The algorithm is validated using seven days of real vehicle data with high temperature variation, including very low temperatures (from -20 °C to +30 °C), at different Depths-of-Discharge (DoDs). Two possibilities to approximate both ZARC elements with a finite number of RC elements on-board are shown and the results of the voltage estimation are compared. Moreover, the current dependence of the charge-transfer resistance is considered by employing the Butler-Volmer equation. The achieved results indicate that both models yield almost the same grade of accuracy.
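A toy illustration of the reduced-order ECM idea: an ohmic resistance plus a single RC pair standing in for a ZARC element (the full study uses several RC elements and a Butler-Volmer correction; all parameter values below are hypothetical):

```python
import math

def simulate_terminal_voltage(current, dt, ocv, r0, r1, c1):
    """Terminal voltage of a reduced-order ECM: ohmic resistance R0 in
    series with one RC pair (a single-RC stand-in for a ZARC element).
    Positive current means discharge; OCV is held constant here."""
    a = math.exp(-dt / (r1 * c1))  # exact discretization of the RC pair
    v_rc, voltages = 0.0, []
    for i in current:
        v_rc = a * v_rc + r1 * (1.0 - a) * i  # RC polarization voltage
        voltages.append(ocv - r0 * i - v_rc)
    return voltages

# Hypothetical cell: OCV 3.7 V, R0 = 10 mOhm, R1 = 20 mOhm, C1 = 1000 F,
# constant 1 A discharge for 300 s
v = simulate_terminal_voltage([1.0] * 300, dt=1.0, ocv=3.7,
                              r0=0.01, r1=0.02, c1=1000.0)
print(round(v[-1], 4))  # settles near ocv - (r0 + r1) * I = 3.67
```

An on-board estimator fits r0, r1, c1 so that simulated and measured voltages agree; the sketch shows only the forward model being fitted.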

9. Parameter estimation in plasmonic QED

Science.gov (United States)

Jahromi, H. Rangani

2018-03-01

We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), and therefore the vanishing of the QFI, which measures the precision of the estimation, is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. Besides, one-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe at any arbitrary time considerably enhances the precision of estimation in comparison with one-qubit estimation.

10. Application of spreadsheet to estimate infiltration parameters

Directory of Open Access Journals (Sweden)

2016-09-01

Full Text Available Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for the estimation of effective rainfall, groundwater recharge, and the design of irrigation systems. Numerous infiltration models are in use for the estimation of infiltration rates. The conventional graphical approach for estimation of infiltration parameters often fails to estimate the infiltration parameters precisely. The generalised reduced gradient (GRG) solver is reported to be a powerful tool for estimating parameters of nonlinear equations and it has, therefore, been implemented to estimate the infiltration parameters in the present paper. Field data of infiltration rates available in the literature for sandy loam soils of Umuahia, Nigeria were used to evaluate the performance of the GRG solver. A comparative study of the graphical method and the GRG solver shows that the performance of the GRG solver is better than that of the conventional graphical method for estimation of infiltration rates. Further, the performance of the Kostiakov model has been found to be better than that of the Horton and Philip models in most of the cases, based on both approaches of parameter estimation.
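For readers without a spreadsheet solver at hand, the Kostiakov model I(t) = a·t^b can also be fitted in closed form after a log transform. This log-linear fit is an illustrative alternative to the GRG solver used in the paper (the data and parameter values below are synthetic):

```python
import math

def fit_kostiakov(t, infiltration):
    """Fit the Kostiakov model I(t) = a * t**b by ordinary least
    squares on log-transformed data: log I = log a + b * log t.
    (The paper uses Excel's GRG nonlinear solver; this closed-form
    log-linear fit is a simple stand-in.)"""
    x = [math.log(ti) for ti in t]
    y = [math.log(Ii) for Ii in infiltration]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic check with known parameters a = 5.0, b = 0.6
t = [1, 2, 5, 10, 30, 60]
I = [5.0 * ti ** 0.6 for ti in t]
a, b = fit_kostiakov(t, I)
print(round(a, 3), round(b, 3))  # → 5.0 0.6
```

Note that least squares in log space weights relative rather than absolute errors, which is one reason a nonlinear solver such as GRG can give slightly different estimates on noisy field data.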

11. Parameter Estimation of Partial Differential Equation Models.

Science.gov (United States)

Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

2013-01-01

Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for the data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
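The core motivation, estimating PDE parameters without repeatedly solving the PDE, can be conveyed with a far simpler scheme than the paper's cascading or Bayesian machinery: differentiate gridded data numerically and regress one derivative on the other. A sketch for the heat equation u_t = θ·u_xx, on synthetic noise-free data (this is not the paper's method, only the underlying idea):

```python
import math

# Estimate theta in u_t = theta * u_xx directly from gridded data,
# with no forward PDE solves: finite-difference both derivatives and
# take the least-squares slope of u_t against u_xx.
theta_true = 0.1
nx, nt, dx, dt = 21, 21, 0.05, 0.01
u = [[math.exp(-theta_true * math.pi ** 2 * k * dt)
      * math.sin(math.pi * i * dx) for k in range(nt)] for i in range(nx)]

num = den = 0.0
for i in range(1, nx - 1):          # interior grid points only
    for k in range(1, nt - 1):
        u_t = (u[i][k + 1] - u[i][k - 1]) / (2 * dt)
        u_xx = (u[i + 1][k] - 2 * u[i][k] + u[i - 1][k]) / dx ** 2
        num += u_t * u_xx
        den += u_xx ** 2
theta_hat = num / den               # least-squares estimate of theta
print(round(theta_hat, 4))          # → 0.1002 (true value 0.1)
```

With noisy data, direct differencing amplifies noise badly, which is exactly why the paper smooths the data with a basis function expansion before differentiating.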

12. Sensor Placement for Modal Parameter Subset Estimation

DEFF Research Database (Denmark)

Ulriksen, Martin Dalgaard; Bernal, Dionisio; Damkilde, Lars

2016-01-01

The present paper proposes an approach for deciding on sensor placements in the context of modal parameter estimation from vibration measurements. The approach is based on placing sensors, of which the amount is determined a priori, such that the minimum Fisher information that the frequency resp...

13. Load Estimation from Modal Parameters

DEFF Research Database (Denmark)

Aenlle, Manuel López; Brincker, Rune; Fernández, Pelayo Fernández

2007-01-01

In Natural Input Modal Analysis the modal parameters are estimated just from the responses while the loading is not recorded. However, engineers are sometimes interested in knowing some features of the loading acting on a structure. In this paper, a procedure to determine the loading from a FRF m...

14. Parameter estimation and inverse problems

CERN Document Server

Aster, Richard C; Thurber, Clifford H

2005-01-01

Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web to facilitate use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...

15. Output-Only Modal Parameter Recursive Estimation of Time-Varying Structures via a Kernel Ridge Regression FS-TARMA Approach

Directory of Open Access Journals (Sweden)

Zhi-Sai Ma

2017-01-01

Full Text Available Modal parameter estimation plays an important role in vibration-based damage detection and is worth more attention and investigation, as changes in modal parameters are usually being used as damage indicators. This paper focuses on the problem of output-only modal parameter recursive estimation of time-varying structures based upon parameterized representations of the time-dependent autoregressive moving average (TARMA. A kernel ridge regression functional series TARMA (FS-TARMA recursive identification scheme is proposed and subsequently employed for the modal parameter estimation of a numerical three-degree-of-freedom time-varying structural system and a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudolinear regression FS-TARMA approach via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics in a recursive manner.
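The regression engine named here, kernel ridge regression, solves (K + λI)α = y for an RBF Gram matrix K and predicts with a kernel expansion. A bare-bones sketch (illustrative only; the full FS-TARMA identification scheme is far richer, and the λ, γ values below are arbitrary choices):

```python
import math

def kernel_ridge_fit(x, y, lam=1e-3, gamma=10.0):
    """Kernel ridge regression with an RBF kernel: solve
    (K + lam * I) alpha = y, predict f(x*) = sum_j alpha_j k(x*, x_j).
    Naive Gaussian elimination is fine for tiny training sets."""
    n = len(x)
    K = [[math.exp(-gamma * (xi - xj) ** 2) for xj in x] for xi in x]
    for i in range(n):
        K[i][i] += lam
    A = [row[:] + [yi] for row, yi in zip(K, y)]  # augmented system
    for col in range(n):                          # forward elimination
        p = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]               # partial pivoting
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    alpha = [0.0] * n
    for r in range(n - 1, -1, -1):                # back substitution
        s = A[r][n] - sum(A[r][c] * alpha[c] for c in range(r + 1, n))
        alpha[r] = s / A[r][r]
    def predict(xs):
        return sum(a * math.exp(-gamma * (xs - xj) ** 2)
                   for a, xj in zip(alpha, x))
    return predict

# Example: fit five noise-free samples of sin(2*pi*x)
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [math.sin(2.0 * math.pi * v) for v in xs]
predict = kernel_ridge_fit(xs, ys)
```

The ridge term λ trades interpolation accuracy for robustness to noise, which is what makes the kernelized regression usable for recursive identification of noisy vibration data.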

16. A least-squares minimization approach for model parameters estimate by using a new magnetic anomaly formula

Science.gov (United States)

Abo-Ezz, E. R.; Essa, K. S.

2016-04-01

A new linear least-squares approach is proposed to interpret magnetic anomalies of buried structures by using a new magnetic anomaly formula. This approach depends on solving different sets of algebraic linear equations in order to invert the depth (z), amplitude coefficient (K), and magnetization angle (θ) of buried structures using magnetic data. The utility and validity of the proposed approach have been demonstrated through various reliable synthetic data sets with and without noise. In addition, the method has been applied to field data sets from the USA and India. The best-fit anomaly has been delineated by estimating the root-mean-square (rms) error. The approach is judged satisfactory by comparing the obtained results with other available geological or geophysical information.

17. Treatment simulation approaches for the estimation of the distributions of treatment quality parameters generated by geometrical uncertainties

International Nuclear Information System (INIS)

Baum, C; Alber, M; Birkner, M; Nuesslin, F

2004-01-01

Geometrical uncertainties arise during treatment planning and treatment delivery, and mean that dose-dependent parameters such as EUD are random variables with a patient-specific probability distribution. Treatment planning with highly conformal treatment techniques such as intensity modulated radiation therapy requires new evaluation tools which allow us to estimate the influence of geometrical uncertainties on the probable treatment dose for a planned dose distribution. Monte Carlo simulations of treatment courses with recalculation of the dose according to the daily geometric errors are a gold standard for such an evaluation. Distribution histograms, which show the relative frequency of a treatment quality parameter in the treatment simulations, can be used to evaluate the potential risks and chances of a planned dose distribution. As treatment simulations with dose recalculation are very time consuming for sufficient statistical accuracy, it is proposed to do treatment simulations in the dose parameter space, where the result is mainly determined by the systematic and random components of the geometrical uncertainties. Comparison of the parameter-space simulation method with the gold standard for prostate cases and a head and neck case shows good agreement as long as the number of fractions is high enough and the influence of tissue inhomogeneities and surface curvature on the dose is small.

18. A statistical approach to the estimation of mechanical unfolding parameters from the unfolding patterns of protein heteropolymers

International Nuclear Information System (INIS)

Beddard, G S; Brockwell, D J

2010-01-01

A statistical calculation is described with which the saw-tooth-like unfolding patterns of concatenated heteropolymeric proteins can be used to estimate the forced unfolding parameters of a previously uncharacterized protein. The chance of observing the various sequences of unfolding events, such as ABAABBB or BBAAABB etc., for two proteins of types A and B is calculated using proteins with various ratios of A and B and at different values of the effective unfolding rate constants. If the experimental rate constant for forced unfolding, k0, and the distance to the transition state, xu, are known for one protein, then the calculation allows an estimation of the values for the other. The predictions are compared with Monte Carlo simulations and experimental data. (communication)

19. A new approach to the joined estimation of the heat generated by a semicontinuous emulsion polymerization Qr and the overall heat exchange parameter UA

Directory of Open Access Journals (Sweden)

Freire F. B.

2004-01-01

Full Text Available This work is concerned with the coupled estimation of the heat generated by the reaction (Qr) and the overall heat transfer parameter (UA) during the terpolymerization of styrene, butyl acrylate and methyl methacrylate from temperature measurements and the reactor heat balance. By making specific assumptions about the dynamics of the evolution of UA and Qr, we propose a cascade of observers to successively estimate these two parameters without the need for additional measurements of on-line samples. One further aspect of our approach is that only the energy balance around the reactor was employed. This means that the flow rate of the cooling jacket fluid was not required.

20. Estimation of fundamental kinetic parameters of polyhydroxybutyrate fermentation process of Azohydromonas australica using statistical approach of media optimization.

Science.gov (United States)

Gahlawat, Geeta; Srivastava, Ashok K

2012-11-01

Polyhydroxybutyrate (PHB) is a biodegradable and biocompatible thermoplastic with many interesting applications in medicine, food packaging, and tissue engineering materials. The present study deals with the enhanced production of PHB by Azohydromonas australica using sucrose and the estimation of fundamental kinetic parameters of the PHB fermentation process. Preliminary culture growth inhibition studies were followed by statistical optimization of the medium recipe using response surface methodology to increase PHB production. Later on, batch cultivation in a 7-L bioreactor was attempted using the optimum concentrations of medium components (process variables) obtained from the statistical design to identify the batch growth and product kinetics parameters of PHB fermentation. A. australica exhibited a maximum biomass and PHB concentration of 8.71 and 6.24 g/L, respectively, in the bioreactor, with an overall PHB production rate of 0.75 g/h. Bioreactor cultivation studies demonstrated that the specific biomass and PHB yields on sucrose were 0.37 and 0.29 g/g, respectively. The kinetic parameters obtained in the present investigation would be used in the development of a batch kinetic mathematical model for PHB production, which will serve as a launching pad for further process optimization studies, e.g., the design of several bioreactor cultivation strategies to further enhance biopolymer production.

1. Estimating RASATI scores using acoustical parameters

International Nuclear Information System (INIS)

Agüero, P D; Tulli, J C; Moscardi, G; Gonzalez, E L; Uriz, A J

2011-01-01

Acoustical analysis of speech using computers has reached an important level of development in recent years. The subjective evaluation of a clinician is complemented with an objective measure of relevant parameters of the voice. Praat, MDVP (Multi Dimensional Voice Program) and SAV (Software for Voice Analysis) are some examples of software for speech analysis. This paper describes an approach to estimate the subjective characteristics of the RASATI scale given objective acoustical parameters. Two approaches were used: linear regression with non-negativity constraints, and neural networks. The experiments show that such an approach gives correct evaluations with ±1 error in 80% of the cases.
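Linear regression with non-negativity constraints (the first approach mentioned) is ordinarily solved with a dedicated NNLS routine; a simple projected-gradient stand-in, applied to a hypothetical toy system, conveys the idea:

```python
def nnls_projected_gradient(A, b, iters=5000, lr=None):
    """Least squares with non-negativity constraints via projected
    gradient descent: x <- max(0, x - lr * A^T (A x - b)). This is an
    illustrative stand-in for a dedicated NNLS routine (e.g. the
    Lawson-Hanson active-set algorithm)."""
    m, n = len(A), len(A[0])
    if lr is None:
        # conservative step size, below 2 / ||A^T A||
        lr = 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, xj - lr * gj) for xj, gj in zip(x, g)]
    return x

# Toy system with known non-negative solution x = [2.0, 0.5]
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [2.0, 0.5, 2.5]
x = nnls_projected_gradient(A, b)
print([round(v, 4) for v in x])  # → [2.0, 0.5]
```

The non-negativity constraint is natural here because the RASATI ratings being predicted cannot contribute negatively.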

2. Parameter Estimation of Partial Differential Equation Models

KAUST Repository

Xun, Xiaolei

2013-09-01

Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.

3. A self-organizing state-space-model approach for parameter estimation in hodgkin-huxley-type models of single neurons.

Directory of Open Access Journals (Sweden)

Dimitrios V Vavoulis

Full Text Available Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm, often in combination with a local search method (such as gradient descent in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied on a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents, measurement and intrinsic noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy, we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function and it was straightforward to apply on compartmental models and multiple data sets. Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a

4. Bayesian parameter estimation in probabilistic risk assessment

International Nuclear Information System (INIS)

Siu, Nathan O.; Kelly, Dana L.

1998-01-01

Bayesian statistical methods are widely used in probabilistic risk assessment (PRA) because of their ability to provide useful estimates of model parameters when data are sparse and because the subjective probability framework, from which these methods are derived, is a natural framework to address the decision problems motivating PRA. This paper presents a tutorial on Bayesian parameter estimation especially relevant to PRA. It summarizes the philosophy behind these methods, approaches for constructing likelihood functions and prior distributions, some simple but realistic examples, and a variety of cautions and lessons regarding practical applications. References are also provided for more in-depth coverage of various topics
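A canonical example of the kind of calculation such a tutorial covers is the conjugate gamma-Poisson update for a failure rate; the prior and data below are illustrative, not taken from the paper:

```python
def gamma_poisson_update(alpha0, beta0, events, exposure_time):
    """Bayesian update for a failure rate lambda: Gamma(alpha0, beta0)
    prior, Poisson likelihood with `events` failures observed over
    `exposure_time`. The posterior is Gamma(alpha0 + events,
    beta0 + exposure_time); its mean is a point estimate of lambda."""
    alpha = alpha0 + events
    beta = beta0 + exposure_time
    return alpha, beta, alpha / beta

# Jeffreys-style vague prior Gamma(0.5, 0); 2 failures in 10,000 hours
a, b, lam = gamma_poisson_update(0.5, 0.0, 2, 10000.0)
print(lam)  # → 0.00025
```

Conjugacy makes the sparse-data case tractable in closed form, which is precisely why such updates are popular in PRA.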

5. Multi-Parameter Estimation for Orthorhombic Media

KAUST Repository

Masmoudi, Nabil; Alkhalifah, Tariq Ali

2015-01-01

Building reliable anisotropy models is crucial in seismic modeling, imaging and full waveform inversion. However, estimating anisotropy parameters is often hampered by the trade-off between inhomogeneity and anisotropy. For instance, one way to estimate the anisotropy parameters is to relate them analytically to traveltimes, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2 and a parameter Δγ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. This approach has two main advantages: on one hand, it provides a computationally efficient tool to solve the orthorhombic eikonal equation; on the other hand, it provides a mechanism to scan for the best-fitting anisotropy parameters without the need for repetitive modeling of traveltimes, because the coefficients of the traveltime expansion are independent of the perturbed parameters. Furthermore, the coefficients of the traveltime expansion provide insights on the sensitivity of the traveltime with respect to the perturbed parameters. We show the accuracy of the traveltime approximations as well as an approach for multi-parameter scanning in orthorhombic media.

7. Applied parameter estimation for chemical engineers

CERN Document Server

Englezos, Peter

2000-01-01

Formulation of the parameter estimation problem; computation of parameters in linear models-linear regression; Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynam

8. Data Handling and Parameter Estimation

DEFF Research Database (Denmark)

Sin, Gürkan; Gernaey, Krist

2016-01-01

Modelling is one of the key tools at the disposal of modern wastewater treatment professionals, researchers and engineers. It enables them to study and understand complex phenomena underlying the physical, chemical and biological performance of wastewater treatment plants at different temporal... The chapter presents an overview of the most commonly used methods in the estimation of parameters from experimental batch data, namely: (i) data handling and validation, (ii)... For the models selected to interpret the experimental data, this chapter uses available models from the literature that are mostly based on the Activated Sludge Model (ASM) framework and their appropriate extensions (Henze et al., 2000). It is also expected that the chapter will be useful both for graduate teaching and as a stepping stone for academic researchers who wish to expand their theoretical interest in the subject.

9. Precision Parameter Estimation and Machine Learning

Science.gov (United States)

Wandelt, Benjamin D.

2008-12-01

I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe), which can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov-Chain Monte Carlo techniques to efficiently explore a likelihood function, posterior distribution or χ²-surface. This strategy is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data, using PICo; and the detailed calculation of cosmological helium and hydrogen recombination, with RICO. Since the APPLe approach is designed to be able to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.

10. Methodical Approach to Estimation of Energy Efficiency Parameters of the Economy Under the Structural Changes in the Fuel and Energy Balance (on the Example of Baikal Region)

Directory of Open Access Journals (Sweden)

Boris Grigorievich Saneev

2013-12-01

Full Text Available The authors consider a methodical approach which allows estimating energy efficiency parameters of the region's economy using a fuel and energy balance (FEB). This approach was tested on the specific case of Baikal region. During the testing process the authors developed ex ante and ex post FEBs and estimated energy efficiency parameters such as the energy, electricity and heat intensity of GRP, the coefficients of useful utilization of fuel and energy resources, and a monetary version of the FEB. Forecast estimations are based on the assumptions and limitations of a technologically-intensive development scenario of the region. The authors show that the main factor of structural changes in the fuel and energy balance will be the large-scale development of hydrocarbon resources in Baikal region. It will cause structural changes in the composition of both the debit and credit of the FEB (namely the structure of export and final consumption of fuel and energy resources). The authors assume that the forecast structural changes of the region's FEB will significantly improve the energy efficiency parameters of the economy: the energy intensity of GRP will decrease by 1.5 times over 2010-2030, and the electricity and heat intensity by 1.9 times; the coefficients of useful utilization of fuel and energy resources will increase by 3-5 percentage points. This will save about 20 million tons of fuel equivalent (about 210 billion rubles in 2011 prices) until 2030.

11. Estimating physiological skin parameters from hyperspectral signatures

Science.gov (United States)

Vyas, Saurabh; Banerjee, Amit; Burlina, Philippe

2013-05-01

We describe an approach for estimating human skin parameters, such as melanosome concentration, collagen concentration, oxygen saturation, and blood volume, using hyperspectral radiometric measurements (signatures) obtained from in vivo skin. We use a computational model based on Kubelka-Munk theory and the Fresnel equations. This model forward maps the skin parameters to a corresponding multiband reflectance spectra. Machine-learning-based regression is used to generate the inverse map, and hence estimate skin parameters from hyperspectral signatures. We test our methods using synthetic and in vivo skin signatures obtained in the visible through the short wave infrared domains from 24 patients of both genders and Caucasian, Asian, and African American ethnicities. Performance validation shows promising results: good agreement with the ground truth and well-established physiological precepts. These methods have potential use in the characterization of skin abnormalities and in minimally-invasive prescreening of malignant skin cancers.
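The forward-model-plus-regression pattern described here can be sketched with a toy forward map (the function below is invented purely for illustration and has nothing to do with the actual Kubelka-Munk/Fresnel model or its parameters):

```python
def forward(p):
    """Hypothetical forward model: one skin parameter -> 3-band spectrum."""
    return [1.0 / (1.0 + p), 0.8 * (1.0 - p) ** 2, 0.5 + 0.3 * p]

# Learn the inverse map from forward-simulated (parameter, spectrum) pairs
train = [(i / 200, forward(i / 200)) for i in range(201)]

def estimate_parameter(spectrum):
    """Nearest-neighbour regression in spectral space."""
    return min(train, key=lambda pair: sum((s - t) ** 2
               for s, t in zip(spectrum, pair[1])))[0]

measured = forward(0.42)  # stand-in for a measured hyperspectral signature
print(estimate_parameter(measured))  # → 0.42
```

The paper uses a trained machine-learning regressor rather than a lookup, and estimates several parameters jointly, but the inversion principle is the same: simulate forward, learn backward.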

12. Robust estimation of hydrological model parameters

Directory of Open Access Journals (Sweden)

A. Bárdossy

2008-11-01

Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing capacity of computational power, several complex optimization algorithms have emerged, but none of the algorithms gives a unique and very best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured, as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used in this study) for each parameter vector. Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.

13. Effect of primary and secondary parameters on analytical estimation of effective thermal conductivity of two phase materials using unit cell approach

Science.gov (United States)

S, Chidambara Raja; P, Karthikeyan; Kumaraswamidhas, L. A.; M, Ramu

2018-05-01

Most thermal design systems involve two-phase materials, and analysis of such systems requires detailed understanding of the thermal characteristics of the two-phase material. This article develops a geometry-dependent unit cell approach model by considering the effects of all primary parameters (conductivity ratio and concentration) and secondary parameters (geometry, contact resistance, natural convection, Knudsen and radiation) for the estimation of the effective thermal conductivity of two-phase materials. The analytical equations have been formulated based on the isotherm approach for 2-D and 3-D spatially periodic media. The developed models are validated against standard models and are suited to all kinds of operating conditions. The results show substantial improvement over the existing models and are in good agreement with the experimental data.
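The classical series and parallel (Wiener) mixing bounds give a quick sanity check for any unit-cell estimate of effective conductivity; the values below are illustrative, not taken from the article:

```python
def k_parallel(phi, k1, k2):
    """Upper (parallel/arithmetic) bound on effective thermal conductivity
    for volume fraction phi of phase 1."""
    return phi * k1 + (1 - phi) * k2

def k_series(phi, k1, k2):
    """Lower (series/harmonic) bound on effective thermal conductivity."""
    return 1.0 / (phi / k1 + (1 - phi) / k2)

# Example: 40% of phase 1 (k = 10 W/mK) dispersed in phase 2 (k = 1 W/mK)
lo = k_series(0.4, 10.0, 1.0)
hi = k_parallel(0.4, 10.0, 1.0)
print(lo, hi)   # any physically admissible k_eff lies between these bounds
```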

14. Nonparametric estimation of location and scale parameters

KAUST Repository

Potgieter, C.J.

2012-12-01

Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal assumptions regarding the form of the distribution functions of X and Y. We discuss an approach to the estimation problem that is based on asymptotic likelihood considerations. Our results enable us to provide a methodology that can be implemented easily and which yields estimators that are often near optimal when compared to fully parametric methods. We evaluate the performance of the estimators in a series of Monte Carlo simulations. © 2012 Elsevier B.V. All rights reserved.
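A minimal distribution-free sketch of the location-scale problem matches medians and interquartile ranges. This is one simple choice of estimator, not the asymptotic-likelihood method developed in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def location_scale_fit(x, y):
    """Distribution-free estimate of (mu, sigma) in Y =d mu + sigma * X,
    matching medians and interquartile ranges."""
    iqr = lambda z: np.subtract(*np.percentile(z, [75, 25]))
    sigma = iqr(y) / iqr(x)
    mu = np.median(y) - sigma * np.median(x)
    return mu, sigma

# Samples from X and from Y = 3 + 2 * X' (same underlying distribution)
x = rng.standard_normal(20000)
y = 3.0 + 2.0 * rng.standard_normal(20000)
mu, sigma = location_scale_fit(x, y)
print(mu, sigma)   # roughly (3, 2)
```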

15. Parameter estimation of a three-axis spacecraft simulator using recursive least-squares approach with tracking differentiator and Extended Kalman Filter

Science.gov (United States)

Xu, Zheyao; Qi, Naiming; Chen, Yukun

2015-12-01

Spacecraft simulators are widely used to study the dynamics, guidance, navigation, and control of a spacecraft on the ground. A spacecraft simulator can have three rotational degrees of freedom by using a spherical air bearing to simulate a frictionless, micro-gravity space environment. The moment of inertia and center of mass are essential for control system design of ground-based three-axis spacecraft simulators. Unfortunately, they cannot be known precisely. This paper presents two approaches to estimate the inertia parameters: a recursive least-squares (RLS) approach with tracking differentiator (TD), and an Extended Kalman Filter (EKF) method. The tracking differentiator filters the noise coupled with the measured signals and generates derivatives of the measured signals. A combination of two TD filters in series yields the angular accelerations required by RLS (TD-TD-RLS). Another method, which does not need to estimate the angular accelerations, uses the integrated form of the dynamics equation; an extended TD (ETD) filter, which can also generate the integral of a function of the signals, is presented for RLS (denoted as ETD-RLS). States and inertia parameters are estimated simultaneously using the EKF, and the observability is analyzed. All proposed methods are illustrated by simulations and experiments.
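The RLS building block used in the TD-TD-RLS scheme can be sketched as the standard recursive least-squares update for a linear-in-parameters measurement model. The regressors below are random stand-ins for the filtered signals, and the two "inertia-like" parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares step for y = phi^T theta + noise,
    with forgetting factor lam."""
    k = P @ phi / (lam + phi @ P @ phi)       # gain
    theta = theta + k * (y - phi @ theta)     # innovation correction
    P = (P - np.outer(k, phi @ P)) / lam      # covariance update
    return theta, P

# Identify two parameters from noisy linear measurements
theta_true = np.array([5.0, -2.0])
theta = np.zeros(2)
P = np.eye(2) * 1e3
for _ in range(500):
    phi = rng.standard_normal(2)              # regressor vector
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta)   # ≈ [5.0, -2.0]
```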

16. An inverse modeling approach to estimate groundwater flow and transport model parameters at a research site at Vandenberg AFB, CA

Science.gov (United States)

Rasa, E.; Foglia, L.; Mackay, D. M.; Ginn, T. R.; Scow, K. M.

2009-12-01

A numerical groundwater fate and transport model was developed for analyses of data from field experiments evaluating the impacts of ethanol on the natural attenuation of benzene, toluene, ethylbenzene, and xylenes (BTEX) and methyl tert-butyl ether (MTBE) at Vandenberg Air Force Base, Site 60. We used the U.S. Geological Survey (USGS) groundwater flow (MODFLOW2000) and transport (MT3DMS) models in conjunction with the USGS universal inverse modeling code (UCODE) to jointly determine flow and transport parameters using bromide tracer data from multiple experiments in the same location. The key flow and transport parameters include hydraulic conductivity of the aquifer and aquitard layers, porosity, and transverse and longitudinal dispersivity. Aquifer and aquitard layers were assumed homogeneous in this study; therefore, the calibration parameters were not spatially variable within each layer. A total of 162 monitoring wells in seven transects perpendicular to the mean flow direction were monitored over the course of ten months, resulting in 1,766 bromide concentration data points and 149 head values used as observations for the inverse modeling. The results showed the significance of the concentration observation data in predicting the flow model parameters and indicated the sensitivity of the hydraulic conductivity of different zones in the aquifer, including the excavated former contaminant zone. The model has already been used to evaluate alternative designs for further experiments on in situ bioremediation of the tert-butyl alcohol (TBA) plume remaining at the site. We describe the recent applications of the model and future work, including adding reaction submodels to the calibrated flow model.

17. Estimates for the parameters of the heavy quark expansion

Energy Technology Data Exchange (ETDEWEB)

Heinonen, Johannes; Mannel, Thomas [Universitaet Siegen (Germany)

2015-07-01

We give improved estimates for the non-perturbative parameters appearing in the heavy quark expansion for inclusive decays. While the parameters appearing in low orders of this expansion can be extracted from data, the number of parameters in higher orders proliferates strongly, making a determination of these parameters from data impossible. Thus, one has to rely on theoretical estimates which may be obtained from an insertion of intermediate states. We refine this method and attempt to estimate the uncertainties of this approach.

18. Novel Method for 5G Systems NLOS Channels Parameter Estimation

Directory of Open Access Journals (Sweden)

2017-01-01

Full Text Available For the development of new 5G systems to operate in mm-wave bands, there is a need for accurate radio propagation modelling at these bands. In this paper a novel approach for NLOS channel parameter estimation is presented. Estimation is performed based on the level crossing rate (LCR) performance measure, which enables us to estimate propagation parameters in real time and to avoid the weaknesses of ML and moment-method estimation approaches.

19. Online State Space Model Parameter Estimation in Synchronous Machines

Directory of Open Access Journals (Sweden)

Z. Gallehdari

2014-06-01

The suggested approach is evaluated for a sample synchronous machine model. Estimated parameters are tested for different inputs at different operating conditions. The effect of noise is also considered in this study. Simulation results show that the proposed approach provides good accuracy for parameter estimation.

20. Parameter Estimation in Continuous Time Domain

Directory of Open Access Journals (Sweden)

Gabriela M. ATANASIU

2016-12-01

Full Text Available This paper presents the application of a continuous-time parameter estimation method for estimating the structural parameters of a real bridge structure. To illustrate the method, two case studies of a bridge pile located in a highly seismic risk area are considered, for which the structural parameters of mass, damping and stiffness are estimated. The estimation process is followed by validation of the analytical results and comparison with the measurement data. Further benefits and applications of the continuous-time parameter estimation method in civil engineering are presented in the final part of this paper.
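The abstract does not detail the specific continuous-time method, but a common building block for estimating mass, damping and stiffness is a least-squares fit of the equation of motion to measured response signals. A sketch under that assumption, with illustrative single-degree-of-freedom values rather than the bridge pile data:

```python
import numpy as np

# Illustrative SDOF structural parameters: mass m, damping c, stiffness k
m, c, k = 2.0, 0.3, 50.0

# Simulated free-vibration displacement record
t = np.linspace(0, 10, 2000)
wn = np.sqrt(k / m)
zeta = c / (2.0 * np.sqrt(k * m))
wd = wn * np.sqrt(1.0 - zeta**2)
x = np.exp(-zeta * wn * t) * np.cos(wd * t)
v = np.gradient(x, t)              # velocity by numerical differentiation
a = np.gradient(v, t)              # acceleration

# Free vibration obeys m*a + c*v + k*x = 0; with m assumed known,
# solve for c and k in a least-squares sense (interior samples only,
# to avoid the one-sided difference errors at the record ends)
s = slice(5, -5)
A = np.column_stack([v[s], x[s]])
b = -m * a[s]
(c_hat, k_hat), *_ = np.linalg.lstsq(A, b, rcond=None)
print(c_hat, k_hat)   # ≈ (0.3, 50.0)
```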

1. Parameter estimation in tree graph metabolic networks

Directory of Open Access Journals (Sweden)

Laura Astola

2016-09-01

Full Text Available We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis–Menten kinetics. In reality, the catalytic rates, which are affected among other factors by kinetic constants and enzyme concentrations, change in time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to usually applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time-series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.
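For the simplest network edge, a linear conversion A → B with constant rate k, the rate can be estimated directly from concentration time series. This is a toy sketch of the constant-rate case, not the paper's time-varying method or its tomato pathway data:

```python
import numpy as np

# Simulate the linear kinetics A -> B with true rate k = 0.5
k_true = 0.5
t = np.linspace(0, 5, 100)
A = np.exp(-k_true * t)            # closed-form solution with A(0) = 1
B = 1.0 - A

# dB/dt = k * A, so estimate k as the least-squares slope through the origin
dBdt = np.gradient(B, t)
k_hat = float(dBdt @ A / (A @ A))
print(k_hat)   # ≈ 0.5
```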

2. Parameter estimation in tree graph metabolic networks.

Science.gov (United States)

Astola, Laura; Stigter, Hans; Gomez Roldan, Maria Victoria; van Eeuwijk, Fred; Hall, Robert D; Groenenboom, Marian; Molenaar, Jaap J

2016-01-01

We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis-Menten kinetics. In reality, the catalytic rates, which are affected among other factors by kinetic constants and enzyme concentrations, change in time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to usually applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time-series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.

3. Statistics of Parameter Estimates: A Concrete Example

KAUST Repository

Aguilar, Oscar; Allmaras, Moritz; Bangerth, Wolfgang; Tenorio, Luis

2015-01-01

© 2015 Society for Industrial and Applied Mathematics. Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise levels, models, or prior knowledge.

4. Parameter Estimation of Partial Differential Equation Models

KAUST Repository

Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Maity, Arnab; Carroll, Raymond J.

2013-01-01

PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus

5. Usefulness and limitations of dK random graph models to predict interactions and functional homogeneity in biological networks under a pseudo-likelihood parameter estimation approach

Directory of Open Access Journals (Sweden)

Luan Yihui

2009-09-01

Full Text Available Abstract Background Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Results Recent studies have shown that the degree distribution of the nodes is not an adequate statistic in many molecular networks. We sought to extend this statistic with 2nd and 3rd order degree correlations and developed a pseudo-likelihood approach to estimate the parameters. The approach was used to analyze the MIPS and BIOGRID yeast protein interaction networks, and two yeast coexpression networks. We showed that 2nd order degree correlation information gave better predictions of gene interactions in both protein interaction and gene coexpression networks. However, in the biologically important task of predicting functionally homogeneous modules, degree correlation information performs marginally better in the case of the MIPS and BIOGRID protein interaction networks, but worse in the case of gene coexpression networks. Conclusion Our use of dK models showed that incorporation of degree correlations could increase predictive power in some contexts, albeit sometimes marginally, but, in all contexts, the use of third-order degree correlations decreased accuracy. However, it is possible that other parameter estimation methods, such as maximum likelihood, will show the usefulness of incorporating 2nd and 3rd degree correlations in predicting functionally homogeneous modules.

6. Usefulness and limitations of dK random graph models to predict interactions and functional homogeneity in biological networks under a pseudo-likelihood parameter estimation approach.

Science.gov (United States)

Wang, Wenhui; Nunez-Iglesias, Juan; Luan, Yihui; Sun, Fengzhu

2009-09-03

Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Recent studies have shown that the degree distribution of the nodes is not an adequate statistic in many molecular networks. We sought to extend this statistic with 2nd and 3rd order degree correlations and developed a pseudo-likelihood approach to estimate the parameters. The approach was used to analyze the MIPS and BIOGRID yeast protein interaction networks, and two yeast coexpression networks. We showed that 2nd order degree correlation information gave better predictions of gene interactions in both protein interaction and gene coexpression networks. However, in the biologically important task of predicting functionally homogeneous modules, degree correlation information performs marginally better in the case of the MIPS and BIOGRID protein interaction networks, but worse in the case of gene coexpression networks. Our use of dK models showed that incorporation of degree correlations could increase predictive power in some contexts, albeit sometimes marginally, but, in all contexts, the use of third-order degree correlations decreased accuracy. However, it is possible that other parameter estimation methods, such as maximum likelihood, will show the usefulness of incorporating 2nd and 3rd degree correlations in predicting functionally homogeneous modules.

7. On parameter estimation in deformable models

DEFF Research Database (Denmark)

Fisker, Rune; Carstensen, Jens Michael

1998-01-01

Deformable templates have been intensively studied in image analysis through the last decade, but despite its significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...

8. Parameter estimation for lithium ion batteries

Science.gov (United States)

Santhanagopalan, Shriram

road conditions is important. An algorithm to predict the SOC in time intervals as small as 5 ms is in critical demand. In such cases, the conventional non-linear estimation procedure is not time-effective. There exist methodologies in the literature, such as those based on fuzzy logic; however, these techniques require considerable computational storage space. Consequently, it is not possible to implement such techniques on a micro-chip for integration as part of a real-time device. The Extended Kalman Filter (EKF) based approach presented in this work is a first step towards developing an efficient method to predict online the State of Charge of a lithium ion cell based on an electrochemical model. The final part of the dissertation focuses on incorporating uncertainty in parameter values into electrochemical models using polynomial chaos theory (PCT).
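The EKF reduces to the ordinary Kalman filter when the model is linear; a minimal scalar sketch of its predict-correct loop is shown below. The random-walk state, measurement model, and noise levels are illustrative, not the dissertation's electrochemical battery model:

```python
import numpy as np

rng = np.random.default_rng(6)

def kf_step(x, p, z, q=1e-5, r=0.01):
    """One scalar Kalman filter step: predict a slowly varying state
    (e.g. an SOC-like quantity), then correct with measurement z = x + noise."""
    p = p + q                      # predict (random-walk process noise)
    k = p / (p + r)                # Kalman gain
    x = x + k * (z - x)            # measurement update
    p = (1 - k) * p
    return x, p

true_soc = 0.8
x, p = 0.5, 1.0                    # deliberately poor initial guess
for _ in range(200):
    z = true_soc + 0.1 * rng.standard_normal()
    x, p = kf_step(x, p, z)
print(x)   # ≈ 0.8
```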

9. A new Bayesian recursive technique for parameter estimation

Science.gov (United States)

Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

2006-08-01

The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
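The bound-narrowing idea can be caricatured as: sample within the current bounds, keep the fittest samples, and shrink the bounds around them. This is a simplified sketch with a toy objective, not the LOBARE algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(7)

def objective(theta):
    """Illustrative fitness: distance to a hidden 'best' parameter set."""
    return np.sum((theta - np.array([0.3, 0.7])) ** 2)

# Iteratively shrink the sampling bounds around the fittest samples
lo, hi = np.zeros(2), np.ones(2)
for _ in range(15):
    samples = rng.uniform(lo, hi, size=(100, 2))
    fitness = np.array([objective(s) for s in samples])
    best = samples[np.argsort(fitness)[:10]]      # top 10% of samples
    lo, hi = best.min(axis=0), best.max(axis=0)   # new 'parent' bounds

print((lo + hi) / 2)   # ≈ [0.3, 0.7]
```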

10. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

Directory of Open Access Journals (Sweden)

2011-04-01

Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values of the parameters and different sample sizes.
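The maximum likelihood estimators for the two-parameter exponential distribution have a simple closed form: the location estimate is the sample minimum, and the scale estimate is the mean excess over it. A sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(8)

def exp2_mle(x):
    """Maximum likelihood estimates for the two-parameter exponential
    distribution (location/threshold and scale)."""
    x = np.asarray(x)
    loc = x.min()               # MLE of the location parameter
    scale = x.mean() - loc      # MLE of the scale parameter
    return loc, scale

# Synthetic lifetimes: location = 10, scale = 2
data = 10.0 + rng.exponential(scale=2.0, size=5000)
loc, scale = exp2_mle(data)
print(loc, scale)   # ≈ (10, 2)
```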

11. Cosmological parameter estimation using Particle Swarm Optimization

Science.gov (United States)

2014-03-01

Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
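A minimal PSO on a smooth two-parameter surrogate surface illustrates the personal-best/global-best update; the toy objective and swarm settings below are illustrative, not a CMB likelihood:

```python
import numpy as np

rng = np.random.default_rng(9)

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal Particle Swarm Optimization: each particle tracks its
    personal best while the swarm shares a global best."""
    x = rng.uniform(lo, hi, size=(n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, f(g)

# Smooth quadratic stand-in for a likelihood surface, optimum at (1, -2)
best, val = pso(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, dim=2)
print(best, val)   # best ≈ [1, -2]
```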

12. Cosmological parameter estimation using Particle Swarm Optimization

International Nuclear Information System (INIS)

2014-01-01

Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.

13. A Comparative Study of Distribution System Parameter Estimation Methods

Energy Technology Data Exchange (ETDEWEB)

Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

2016-07-17

In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems; therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

14. Parameter Estimation for a Computable General Equilibrium Model

DEFF Research Database (Denmark)

Arndt, Channing; Robinson, Sherman; Tarp, Finn

2002-01-01

We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

15. Parameter Estimation for a Computable General Equilibrium Model

DEFF Research Database (Denmark)

Arndt, Channing; Robinson, Sherman; Tarp, Finn

We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

16. Composite likelihood estimation of demographic parameters

Directory of Open Access Journals (Sweden)

Garrigan Daniel

2009-11-01

Full Text Available Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequence data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable

17. Parameter Estimation of Nonlinear Models in Forestry.

OpenAIRE

Fekedulegn, Desta; Mac Siúrtáin, Máirtín Pádraig; Colbert, Jim J.

1999-01-01

Partial derivatives of the negative exponential, monomolecular, Mitscherlich, Gompertz, logistic, Chapman-Richards, von Bertalanffy, Weibull and Richards nonlinear growth models are presented. The application of these partial derivatives in estimating the model parameters is illustrated. The parameters are estimated using the Marquardt iterative method of nonlinear regression, relating top height to age of Norway spruce (Picea abies L.) from the Bowmont Norway Spruce Thinnin...
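For a growth model that is linear in its amplitude, such as the monomolecular curve H(t) = a(1 - exp(-b t)), a simple profile least-squares scan already recovers the parameters. The data below are synthetic, and this grid scan is a stand-in for the Marquardt iterations used on the real Norway spruce records:

```python
import numpy as np

# Synthetic top-height/age data following a monomolecular growth curve
# H(t) = a * (1 - exp(-b t)) with a = 30, b = 0.08
t = np.array([5., 10., 15., 20., 25., 30., 35., 40.])
H = 30.0 * (1 - np.exp(-0.08 * t))

# Profile least squares: scan b, solve the amplitude a in closed form
best = None
for b in np.linspace(0.01, 0.5, 2000):
    g = 1 - np.exp(-b * t)
    a = (H @ g) / (g @ g)                   # least-squares amplitude
    sse = np.sum((H - a * g) ** 2)
    if best is None or sse < best[0]:
        best = (sse, a, b)
print(best[1], best[2])   # ≈ (30, 0.08)
```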

18. Reionization history and CMB parameter estimation

International Nuclear Information System (INIS)

2013-01-01

We study how uncertainty in the reionization history of the universe affects estimates of other cosmological parameters from the Cosmic Microwave Background. We analyze WMAP7 data and synthetic Planck-quality data generated using a realistic scenario for the reionization history of the universe obtained from high-resolution numerical simulation. We perform parameter estimation using a simple sudden reionization approximation, and using the Principal Component Analysis (PCA) technique proposed by Mortonson and Hu. We reach two main conclusions: (1) Adopting a simple sudden reionization model does not introduce measurable bias into values for other parameters, indicating that detailed modeling of reionization is not necessary for the purpose of parameter estimation from future CMB data sets such as Planck. (2) PCA analysis does not allow accurate reconstruction of the actual reionization history of the universe in a realistic case

19. Reionization history and CMB parameter estimation

Energy Technology Data Exchange (ETDEWEB)

2013-05-01

We study how uncertainty in the reionization history of the universe affects estimates of other cosmological parameters from the Cosmic Microwave Background. We analyze WMAP7 data and synthetic Planck-quality data generated using a realistic scenario for the reionization history of the universe obtained from high-resolution numerical simulation. We perform parameter estimation using a simple sudden reionization approximation, and using the Principal Component Analysis (PCA) technique proposed by Mortonson and Hu. We reach two main conclusions: (1) Adopting a simple sudden reionization model does not introduce measurable bias into values for other parameters, indicating that detailed modeling of reionization is not necessary for the purpose of parameter estimation from future CMB data sets such as Planck. (2) PCA analysis does not allow accurate reconstruction of the actual reionization history of the universe in a realistic case.

20. Statistical analysis of seismicity and hazard estimation for Italy (mixed approach). Statistical parameters of main shocks and aftershocks in the Italian region

International Nuclear Information System (INIS)

Molchan, G.M.; Kronrod, T.L.; Dmitrieva, O.E.

1995-03-01

The catalog of earthquakes of Italy (1900-1993) is analyzed in the present work. The following problems have been considered: 1) the choice of the operating magnitude, 2) an analysis of data completeness, and 3) grouping (in time and in space). The catalog has been separated into main shocks and aftershocks. Statistical estimates of the seismicity parameters (a, b) are performed for the seismogenic zones defined by GNDT. The non-standard elements of the analysis performed are: (a) statistical estimation and comparison of seismicity parameters under the condition of arbitrary data grouping in magnitude, time and space; (b) use of a non-conventional statistical method for aftershock identification, based on the idea of optimizing two kinds of errors in the aftershock identification process; (c) use of the aftershock zones to reveal seismically interrelated seismogenic zones. This procedure contributes to the stability of the estimation of the "b-value". Refs, 25 figs, tabs
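The b-value of the Gutenberg-Richter law has a well-known maximum likelihood estimator (Aki, 1965); a sketch on a synthetic catalog rather than the Italian data:

```python
import numpy as np

rng = np.random.default_rng(10)

def b_value(mags, m_c):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b-value from magnitudes at or above the completeness threshold m_c."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# Synthetic catalog: magnitudes above m_c = 2.0 with a true b-value of 1.0
# (excess magnitudes are exponential with rate beta = b * ln 10)
beta = 1.0 * np.log(10)
mags = 2.0 + rng.exponential(1.0 / beta, size=20000)
print(b_value(mags, 2.0))   # ≈ 1.0
```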

1. Statistics of Parameter Estimates: A Concrete Example

KAUST Repository

Aguilar, Oscar

2015-01-01

© 2015 Society for Industrial and Applied Mathematics. Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise levels, models, or prior knowledge. But what can we say about the validity of such estimates, and the influence of these assumptions? This paper is concerned with methods to address these questions, and for didactic purposes it is written in the context of a concrete nonlinear parameter estimation problem. We will use the results of a physical experiment conducted by Allmaras et al. at Texas A&M University [M. Allmaras et al., SIAM Rev., 55 (2013), pp. 149-167] to illustrate the importance of validation procedures for statistical parameter estimation. We describe statistical methods and data analysis tools to check the choices of likelihood and prior distributions, and provide examples of how to compare Bayesian results with those obtained by non-Bayesian methods based on different types of assumptions. We explain how different statistical methods can be used in complementary ways to improve the understanding of parameter estimates and their uncertainties.

2. Aircraft parameter estimation – A tool for development of ...

In addition, actuator performance and controller gains may be flight condition dependent. Moreover, this approach may result in open-loop parameter estimates with low accuracy. 6. Aerodynamic databases for high fidelity flight simulators. Estimation of a comprehensive aerodynamic model suitable for a flight simulator is an.

3. Application of spreadsheet to estimate infiltration parameters

OpenAIRE

2016-01-01

Infiltration is the process by which water flows into the ground through the soil surface. Although soil water contributes only a negligible fraction of the total water present on the earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for the estimation of effective rainfall, groundwater recharge, and the design of irrigation systems. Numerous infiltration models are in use for the estimation of infiltration rates. The conventional graphical approach ...

4. Parameter estimation in X-ray astronomy

International Nuclear Information System (INIS)

Lampton, M.; Margon, B.; Bowyer, S.

1976-01-01

The problems of model classification and parameter estimation are examined, with the objective of establishing the statistical reliability of inferences drawn from X-ray observations. For testing the validity of classes of models, the procedure based on minimizing the χ² statistic is recommended; it provides a rejection criterion at any desired significance level. Once a class of models has been accepted, a related procedure based on the increase of χ² gives a confidence region for the values of the model's adjustable parameters. The procedure allows the confidence level to be chosen exactly, even for highly nonlinear models. Numerical experiments confirm the validity of the prescribed technique. The χ²_min + 1 error estimation method is evaluated and found unsuitable when several parameter ranges are to be derived, because it substantially underestimates their joint errors. The ratio-of-variances method, while formally correct, gives parameter confidence regions which are more variable than necessary
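The χ²-increase procedure can be illustrated on a toy one-parameter linear model (the model, data, and grid step below are assumptions for illustration, not from the paper):

```python
def chi2(a, xs, ys, sigma):
    """Chi-square of the one-parameter model y = a*x with constant errors sigma."""
    return sum(((y - a * x) / sigma) ** 2 for x, y in zip(xs, ys))

def confidence_interval(xs, ys, sigma, delta=1.0, grid=None):
    """Best-fit a and the Delta-chi2 <= delta confidence interval (grid scan)."""
    # analytic chi-square minimum for a model linear in its single parameter
    a_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    c_min = chi2(a_hat, xs, ys, sigma)
    if grid is None:
        grid = [a_hat + (i - 2000) * 1e-3 for i in range(4001)]
    inside = [a for a in grid if chi2(a, xs, ys, sigma) <= c_min + delta]
    return a_hat, min(inside), max(inside)
```

With delta = 1 this reproduces the 68% interval for one interesting parameter; joint regions for several parameters require a larger delta, which is the point the abstract makes about the χ²_min + 1 method underestimating joint errors.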

5. Parameter Estimation for Thurstone Choice Models

Energy Technology Data Exchange (ETDEWEB)

Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

2017-04-24

We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality (i.e., when each comparison set of that cardinality occurs, in expectation, the same number of times), for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
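For the Bradley-Terry special case (pair comparisons) mentioned above, the maximum-likelihood strengths are classically computed with the Zermelo/minorization-maximization iteration; a minimal sketch, assuming a comparison graph in which every item wins and loses at least once:

```python
def bradley_terry(wins, iters=200):
    """MM iteration for Bradley-Terry maximum-likelihood strengths.

    wins[i][j] = number of times item i beat item j.
    Strengths are normalized to sum to the number of items (identifiability).
    """
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            total_wins = sum(wins[i])
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new.append(total_wins / denom if denom > 0 else p[i])
        s = sum(new)
        p = [x * n / s for x in new]
    return p
```

With an 8-2 win record between two items, the fitted strength ratio is 4, matching the closed-form pairwise MLE p0/(p0+p1) = 0.8.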

6. Bayesian estimation of Weibull distribution parameters

International Nuclear Information System (INIS)

Bacha, M.; Celeux, G.; Idee, E.; Lannoy, A.; Vasseur, D.

1994-11-01

In this paper, we present the SEM (Stochastic Expectation Maximization) and WLB-SIR (Weighted Likelihood Bootstrap - Sampling Importance Re-sampling) methods, which are used to estimate Weibull distribution parameters when data are heavily censored. The second method is based on Bayesian inference and allows available prior information on the parameters to be taken into account. An application of this method to real data from nuclear power plant operating-experience feedback analysis has been carried out. (authors). 8 refs., 2 figs., 2 tabs
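As a classical (non-Bayesian) point of comparison, the maximum-likelihood Weibull fit for right-censored data can be sketched with the scale parameter profiled out; the grid search over the shape below is an illustrative choice, not the SEM or WLB-SIR algorithm of the paper:

```python
import math

def weibull_mle_censored(times, observed, k_grid=None):
    """ML estimate of Weibull shape k and scale lam from right-censored data.

    times    : failure or censoring times (all > 0)
    observed : 1 if a failure was observed, 0 if the unit was censored
    The scale is profiled out: lam(k)^k = sum(t_i^k) / r, r = number of failures.
    """
    r = sum(observed)

    def profile_loglik(k):
        lam = (sum(t ** k for t in times) / r) ** (1.0 / k)
        return (r * math.log(k) - r * k * math.log(lam)
                + (k - 1) * sum(math.log(t) for t, o in zip(times, observed) if o)
                - sum((t / lam) ** k for t in times))

    if k_grid is None:
        k_grid = [0.2 + 0.01 * i for i in range(481)]  # shape from 0.2 to 5.0
    k_hat = max(k_grid, key=profile_loglik)
    lam_hat = (sum(t ** k_hat for t in times) / r) ** (1.0 / k_hat)
    return k_hat, lam_hat
```

Feeding it exact quantiles of a Weibull(k = 2, lam = 1) distribution recovers parameters close to the truth.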

7. Iterative importance sampling algorithms for parameter estimation

OpenAIRE

Morzfeld, Matthias; Day, Marcus S.; Grout, Ray W.; Pau, George Shu Heng; Finsterle, Stefan A.; Bell, John B.

2016-01-01

In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov Chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is ...
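A minimal self-normalized importance-sampling sketch for a posterior mean, using the prior as the proposal distribution (a simple but not always efficient choice; the conjugate Gaussian model is assumed purely so the answer can be checked analytically):

```python
import math
import random

def importance_posterior_mean(y, n=20000, seed=1):
    """Posterior mean of theta for y ~ N(theta, 1) with prior theta ~ N(0, 1),
    estimated by self-normalized importance sampling with the prior as proposal.
    Because the proposal is the prior, the importance weight is the likelihood."""
    rng = random.Random(seed)
    thetas = [rng.gauss(0.0, 1.0) for _ in range(n)]           # draws from the prior
    weights = [math.exp(-0.5 * (y - t) ** 2) for t in thetas]  # likelihood as weight
    return sum(w * t for w, t in zip(weights, thetas)) / sum(weights)
```

For this conjugate pair the exact posterior mean is y/2, so the quality of the estimate (and the effect of sample size) is easy to verify. The samples are drawn independently, which is the scaling advantage over MCMC that the abstract highlights.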

8. Parameter estimation for an expanding universe

Directory of Open Access Journals (Sweden)

Jieci Wang

2015-03-01

Full Text Available We study the parameter estimation for excitations of Dirac fields in the expanding Robertson–Walker universe. We employ quantum metrology techniques to demonstrate the possibility of high-precision estimation of the volume rate of the expanding universe. We show that the optimal precision of the estimation depends sensitively on the dimensionless mass m˜ and dimensionless momentum k˜ of the Dirac particles. The optimal precision for the ratio estimation peaks at some finite dimensionless mass m˜ and momentum k˜. We find that the precision of the estimation can be improved by choosing the probe state as an eigenvector of the Hamiltonian. This occurs because the largest quantum Fisher information is obtained by performing projective measurements implemented by the projectors onto the eigenvectors of specific probe states.

9. Nonparametric estimation of location and scale parameters

KAUST Repository

Potgieter, C.J.; Lombard, F.

2012-01-01

Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal

10. Postprocessing MPEG based on estimated quantization parameters

DEFF Research Database (Denmark)

Forchhammer, Søren

2009-01-01

the case where the coded stream is not accessible, or from an architectural point of view not desirable to use, and instead estimate some of the MPEG stream parameters based on the decoded sequence. The I-frames are detected and the quantization parameters are estimated from the coded stream and used...... in the postprocessing. We focus on deringing and present a scheme which aims at suppressing ringing artifacts, while maintaining the sharpness of the texture. The goal is to improve the visual quality, so perceptual blur and ringing metrics are used in addition to PSNR evaluation. The performance of the new pure......' postprocessing compares favorable to a reference postprocessing filter which has access to the quantization parameters not only for I-frames but also on P and B-frames....

11. Impacts of Different Types of Measurements on Estimating Unsaturated Flow Parameters

Science.gov (United States)

Shi, L.

2015-12-01

This study evaluates the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurements errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
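The stochastic (perturbed-observation) EnKF analysis step at the core of such schemes can be sketched for a scalar parameter and a scalar observation; the identity forward model in the usage example is an illustrative assumption, not the Richards-equation forward model of the study:

```python
import random

def enkf_update(ensemble, y_obs, forward, obs_var, rng):
    """One stochastic EnKF analysis step for a scalar parameter.

    ensemble : list of parameter samples
    y_obs    : the observation
    forward  : model mapping a parameter value to a predicted observation
    obs_var  : observation-error variance (each member gets a perturbed observation)
    """
    n = len(ensemble)
    hx = [forward(t) for t in ensemble]
    t_bar = sum(ensemble) / n
    h_bar = sum(hx) / n
    # sample cross-covariance and predicted-observation variance
    cov_th = sum((t - t_bar) * (h - h_bar) for t, h in zip(ensemble, hx)) / (n - 1)
    var_h = sum((h - h_bar) ** 2 for h in hx) / (n - 1)
    gain = cov_th / (var_h + obs_var)
    return [t + gain * (y_obs + rng.gauss(0.0, obs_var ** 0.5) - h)
            for t, h in zip(ensemble, hx)]
```

The update pulls the ensemble mean toward the observation by an amount set by the Kalman gain; smaller observation error (as noted in the abstract) means a larger gain and faster convergence.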

12. Parameter estimation in stochastic differential equations

CERN Document Server

Bishwal, Jaya P N

2008-01-01

Parameter estimation in stochastic differential equations and stochastic partial differential equations is the science, art and technology of modelling complex phenomena and making beautiful decisions. The subject has attracted researchers from several areas of mathematics and other related fields like economics and finance. This volume presents the estimation of the unknown parameters in the corresponding continuous models based on continuous and discrete observations and examines extensively maximum likelihood, minimum contrast and Bayesian methods. Useful because of the current availability of high frequency data is the study of refined asymptotic properties of several estimators when the observation time length is large and the observation time interval is small. Also space time white noise driven models, useful for spatial data, and more sophisticated non-Markovian and non-semimartingale models like fractional diffusions that model the long memory phenomena are examined in this volume.
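For discretely observed diffusions, the simplest worked case is the Ornstein-Uhlenbeck process dX = -θX dt + σ dW, whose exact transition density makes the sampled path an AR(1) process; a sketch of simulation and drift estimation (an illustrative example, not a method from the volume):

```python
import math
import random

def simulate_ou(theta, sigma, dt, n, seed=0):
    """Exact discrete-time simulation of dX = -theta*X dt + sigma dW at spacing dt."""
    rng = random.Random(seed)
    a = math.exp(-theta * dt)                          # AR(1) coefficient
    sd = sigma * math.sqrt((1 - a * a) / (2 * theta))  # stationary innovation std
    x, path = 0.0, [0.0]
    for _ in range(n):
        x = a * x + sd * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def estimate_theta(path, dt):
    """Least-squares AR(1) coefficient, mapped back to the drift parameter theta."""
    num = sum(x0 * x1 for x0, x1 in zip(path, path[1:]))
    den = sum(x0 * x0 for x0 in path[:-1])
    a_hat = num / den
    return -math.log(a_hat) / dt
```

This is the "discrete observations" regime the abstract mentions: the estimator improves as the observation window grows, while a small dt keeps the AR(1) mapping close to the continuous model.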

13. Estimates of Water-Column Nutrient Concentrations and Carbonate System Parameters in the Global Ocean: A Novel Approach Based on Neural Networks

Directory of Open Access Journals (Sweden)

Raphaëlle Sauzède

2017-05-01

Full Text Available A neural network-based method (CANYON: CArbonate system and Nutrients concentration from hYdrological properties and Oxygen using a Neural-network) was developed to estimate water-column (i.e., from surface to 8,000 m depth) biogeochemically relevant variables in the Global Ocean. These are the concentrations of three nutrients [nitrate (NO3−), phosphate (PO43−), and silicate (Si(OH)4)] and four carbonate system parameters [total alkalinity (AT), dissolved inorganic carbon (CT), pH (pHT), and partial pressure of CO2 (pCO2)], which are estimated from concurrent in situ measurements of temperature, salinity, hydrostatic pressure, and oxygen (O2) together with sampling latitude, longitude, and date. Seven neural networks were developed using the GLODAPv2 database, which is largely representative of the diversity of open-ocean conditions, hence making CANYON potentially applicable to most oceanic environments. For each variable, CANYON was trained using 80 % randomly chosen data from the whole database (after eight 10° × 10° zones were removed to provide an “independent data-set” for additional validation); the remaining 20 % of the data were used for the neural-network validation test. Overall, CANYON retrieved the variables with high accuracies (RMSE: 1.04 μmol kg−1 (NO3−), 0.074 μmol kg−1 (PO43−), 3.2 μmol kg−1 (Si(OH)4), 0.020 (pHT), 9 μmol kg−1 (AT), 11 μmol kg−1 (CT) and 7.6 % (pCO2) (30 μatm at 400 μatm)). This was confirmed for the eight independent zones not included in the training process. CANYON was also applied to the Hawaiian Time Series site to produce a 22-year-long simulated time series for the above seven variables. Comparison of modeled and measured data was also very satisfactory (RMSE on the order of magnitude of the RMSE from the validation test). CANYON is thus a promising method to derive distributions of key biogeochemical variables. It could be used for a variety of global and regional applications ranging from data quality control

14. A software for parameter estimation in dynamic models

Directory of Open Access Journals (Sweden)

M. Yuceer

2008-12-01

Full Text Available A common problem in dynamic systems is to determine the parameters in an equation used to represent experimental data. The goal is to determine the values of model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software packages lack generality, while others are not easy to use. A user-interactive parameter estimation software was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software (PARES) has been developed in the MATLAB environment. When tested on extensive example problems from the literature, the suggested approach proved to provide good agreement between predicted and observed data with relatively little computing time and few iterations.
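The integration-based optimization idea (solving the model ODE inside the objective function) can be sketched for a first-order decay model; PARES itself is a MATLAB tool, so this Python analogue is only an illustration:

```python
def rk4_decay(k, y0, t_end, steps=200):
    """Integrate dy/dt = -k*y with classical RK4 and return y(t_end)."""
    h = t_end / steps
    y = y0
    for _ in range(steps):
        f = lambda yy: -k * yy
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

def fit_rate(ts, ys, y0, lo=0.01, hi=2.0, iters=60):
    """Golden-section search for the rate constant minimizing the sum of squares.

    Every objective evaluation re-integrates the model, which is the
    'integration based optimization' pattern described in the abstract.
    """
    sse = lambda k: sum((rk4_decay(k, y0, t) - y) ** 2 for t, y in zip(ts, ys))
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if sse(c) < sse(d):
            b = d
        else:
            a = c
    return (a + b) / 2
```

On noise-free data generated with k = 0.5 the search recovers the rate constant to high accuracy.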

15. Short-Term Wind Speed Forecasting Using the Data Processing Approach and the Support Vector Machine Model Optimized by the Improved Cuckoo Search Parameter Estimation Algorithm

Directory of Open Access Journals (Sweden)

Chen Wang

2016-01-01

Full Text Available Power systems can be at risk when a power-grid collapse accident occurs. As a clean and renewable resource, wind energy plays an increasingly vital role in reducing air pollution, and wind power generation has become an important way to produce electrical power. Therefore, accurate wind power and wind speed forecasting are needed. In this research, a novel short-term wind speed forecasting portfolio has been proposed using the following three procedures: (I) data preprocessing: apart from the regular normalization preprocessing, the data are preprocessed through empirical mode decomposition (EMD), which reduces the effect of noise on the wind speed data; (II) artificially intelligent parameter optimization introduction: the unknown parameters in the support vector machine (SVM) model are optimized by the cuckoo search (CS) algorithm; (III) parameter optimization approach modification: an improved parameter optimization approach, called the SDCS model, based on the CS algorithm and the steepest descent (SD) method is proposed. The comparison results show that the simple and effective portfolio EMD-SDCS-SVM produces promising predictions and has better performance than the individual forecasting components, with very small root mean squared errors and mean absolute percentage errors.

16. Adaptive distributed parameter and input estimation in linear parabolic PDEs

KAUST Repository

Mechhoud, Sarra

2016-01-01

First, new sufficient conditions for the identifiability of simultaneous input and parameter estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on a plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on a tokamak plasma heat transport model using simulated data.

17. Cosmological parameter estimation using particle swarm optimization

Science.gov (United States)

2012-06-01

Constraining theoretical models, which are represented by a set of parameters, using observational data is an important exercise in cosmology. In the Bayesian framework this is done by finding the probability distribution of parameters which best fits the observational data using sampling-based methods like Markov chain Monte Carlo (MCMC). It has been argued that MCMC may not be the best option in certain problems in which the target function (likelihood) possesses local maxima or has very high dimensionality. Apart from this, there may be examples in which we are mainly interested in finding the point in the parameter space at which the probability distribution has the largest value. In this situation the problem of parameter estimation becomes an optimization problem. In the present work we show that particle swarm optimization (PSO), which is an artificial intelligence inspired population based search procedure, can also be used for cosmological parameter estimation. Using PSO we were able to recover the best-fit Λ cold dark matter (LCDM) model parameters from the WMAP seven year data without using any prior guess value or any other property of the probability distribution of parameters like standard deviation, as is common in MCMC. We also report the results of an exercise in which we consider a binned primordial power spectrum (to increase the dimensionality of the problem) and find that a power spectrum with features gives a lower chi-square than the standard power law. Since PSO does not sample the likelihood surface in a fair way, we follow a fitting procedure to find the spread of the likelihood function around the best-fit point.
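A minimal global-best PSO sketch in the spirit of the paper, here minimizing a two-parameter quadratic rather than a cosmological likelihood; all swarm settings are illustrative assumptions:

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200, seed=0,
                 w=0.72, c1=1.49, c2=1.49):
    """Basic global-best particle swarm optimization.

    f      : objective function taking a list of coordinates
    bounds : list of (low, high) pairs, one per dimension
    w      : inertia weight; c1, c2 : cognitive and social accelerations
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # pull each particle toward its own and the swarm's best point
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Note that, as the abstract points out, the returned best point carries no fair sampling of the likelihood surface, so uncertainties must be obtained by a separate fitting step.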

18. Optimal design criteria - prediction vs. parameter estimation

Science.gov (United States)

Waldl, Helmut

2014-05-01

G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, the computation of the kriging variance, and even more so of the empirical kriging variance, is very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that in practice we cannot really find the G-optimal design with the computer equipment available today. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region; in doing so, the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.

19. Variational estimates of point-kinetics parameters

International Nuclear Information System (INIS)

Favorite, J.A.; Stacey, W.M. Jr.

1995-01-01

Variational estimates of the effect of flux shifts on the integral reactivity parameter of the point-kinetics equations and on regional power fractions were calculated for a variety of localized perturbations in two light water reactor (LWR) model problems representing a small, tightly coupled core and a large, loosely coupled core. For the small core, the flux shifts resulting from even relatively large localized reactivity changes (∼600 pcm) were small, and the standard point-kinetics approximation estimates of reactivity were in error by only ∼10% or less, while the variational estimates were accurate to within ∼1%. For the larger core, significant (>50%) flux shifts occurred in response to local perturbations, leading to errors of the same magnitude in the standard point-kinetics approximation of the reactivity worth. For positive reactivity, the error in the variational estimate of reactivity was only a few percent in the larger core, and the resulting transient power prediction was 1 to 2 orders of magnitude more accurate than with the standard point-kinetics approximation. For a large, local negative reactivity insertion resulting in a large flux shift, the accuracy of the variational estimate broke down. The variational estimate of the effect of flux shifts on reactivity in point-kinetics calculations of transients in LWR cores was found to generally result in greatly improved accuracy, relative to the standard point-kinetics approximation, the exception being for large negative reactivity insertions with large flux shifts in large, loosely coupled cores

20. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

Science.gov (United States)

Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

2015-09-18

The PEST++ Version 1 object-oriented parameter estimation code is here extended to Version 3 to incorporate additional algorithms and tools to further improve support for large and complex environmental modeling problems. PEST++ Version 3 includes the Gauss-Marquardt-Levenberg (GML) algorithm for nonlinear parameter estimation, Tikhonov regularization, integrated linear-based uncertainty quantification, options of integrated TCP/IP based parallel run management or external independent run management by use of a Version 2 update of the GENIE Version 1 software code, and utilities for global sensitivity analyses. The Version 3 code design is consistent with PEST++ Version 1 and continues to be designed to lower the barriers of entry for users as well as developers while providing efficient and optimized algorithms capable of accommodating large, highly parameterized inverse problems. As such, this effort continues the original focus of (1) implementing the most popular and powerful features of the PEST software suite in a fashion that is easy for novice or experienced modelers to use and (2) developing a software framework that is easy to extend.
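The Gauss-Marquardt-Levenberg iteration at the core of PEST++ can be sketched for a two-parameter exponential model with a finite-difference Jacobian; this is an illustrative toy in Python, not the PEST++ (C++) implementation:

```python
import math

def gml_fit(ts, ys, p0, iters=50, lam=1e-3):
    """Gauss-Marquardt-Levenberg fit of y = a*exp(-k*t) to data (ts, ys).

    p0  : [a, k] initial guess
    lam : Marquardt damping parameter, adapted as steps succeed or fail
    """
    def residuals(p):
        return [y - p[0] * math.exp(-p[1] * t) for t, y in zip(ts, ys)]

    def sse(r):
        return sum(x * x for x in r)

    p = list(p0)
    r = residuals(p)
    for _ in range(iters):
        # forward-difference Jacobian of the residuals, J[i][j] = dr_i/dp_j
        eps = 1e-7
        J = []
        for i in range(len(ts)):
            row = []
            for j in range(2):
                pp = list(p)
                pp[j] += eps
                row.append((residuals(pp)[i] - r[i]) / eps)
            J.append(row)
        # damped normal equations (J^T J + lam*I) dp = -J^T r, solved 2x2 directly
        a11 = sum(row[0] * row[0] for row in J) + lam
        a22 = sum(row[1] * row[1] for row in J) + lam
        a12 = sum(row[0] * row[1] for row in J)
        b1 = -sum(row[0] * ri for row, ri in zip(J, r))
        b2 = -sum(row[1] * ri for row, ri in zip(J, r))
        det = a11 * a22 - a12 * a12
        dp = [(a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det]
        trial = [p[0] + dp[0], p[1] + dp[1]]
        r_trial = residuals(trial)
        if sse(r_trial) < sse(r):
            p, r, lam = trial, r_trial, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                             # reject step, increase damping
    return p
```

The damping parameter interpolates between Gauss-Newton (small lam) and gradient descent (large lam), which is what makes the method robust on the ill-posed inverse problems PEST++ targets.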

1. PARAMETER ESTIMATION IN BREAD BAKING MODEL

Directory of Open Access Journals (Sweden)

2012-05-01

Full Text Available Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure, i.e. first the heat and mass transfer related parameters, then the parameters related to product transformations, and finally the product quality parameters. There was fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted the behavior under dynamic convective operation and under combined convective and microwave operation well. It was expected that the agreement between the model and the baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels. Abstract (translated from Indonesian): PARAMETER ESTIMATION IN A MODEL OF THE BREAD BAKING PROCESS. Bread product quality is highly dependent on the baking process used. A model developed with qualitative and quantitative methods was calibrated by experiments at a temperature of 200°C and in combination with microwave at 100 W. The model parameters were estimated in a stepwise procedure, i.e. first the parameters of the heat and mass transfer model, then the parameters of the transformation model, and

2. On robust parameter estimation in brain-computer interfacing

Science.gov (United States)

Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert

2017-12-01

Objective. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.

3. A Sparse Bayesian Learning Algorithm With Dictionary Parameter Estimation

DEFF Research Database (Denmark)

Hansen, Thomas Lundgaard; Badiu, Mihai Alin; Fleury, Bernard Henri

2014-01-01

This paper concerns sparse decomposition of a noisy signal into atoms which are specified by unknown continuous-valued parameters. An example could be estimation of the model order, frequencies and amplitudes of a superposition of complex sinusoids. The common approach is to reduce the continuous...

4. Parameter Estimates in Differential Equation Models for Population Growth

Science.gov (United States)

Winkel, Brian J.

2011-01-01

We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…

5. Parameter extraction and estimation based on the PV panel outdoor ...

African Journals Online (AJOL)

The experimental data obtained are validated and compared with the estimated results obtained through simulation based on the manufacture's data sheet. The simulation is based on the Newton-Raphson iterative method in MATLAB environment. This approach aids the computation of the PV module's parameters at any ...

6. NONLINEAR PLANT PIECEWISE-CONTINUOUS MODEL MATRIX PARAMETERS ESTIMATION

Directory of Open Access Journals (Sweden)

Roman L. Leibov

2017-09-01

Full Text Available This paper presents a technique for estimating the matrix parameters of a nonlinear plant piecewise-continuous model using nonlinear model time responses and a random search method. One application area of piecewise-continuous models is identified. The results of applying the proposed approach to the formation of an aircraft turbofan engine piecewise-continuous model are presented

7. On Modal Parameter Estimates from Ambient Vibration Tests

DEFF Research Database (Denmark)

Agneni, A.; Brincker, Rune; Coppotelli, B.

2004-01-01

Modal parameter estimates from ambient vibration testing are turning into the preferred technique when one is interested in systems under actual loadings and operational conditions. Moreover, with this approach, expensive devices to excite the structure are not needed, since it can be adequately...

8. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

Directory of Open Access Journals (Sweden)

Daigle Bernie J

2012-05-01

Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs. MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2: an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods

9. Estimation of parameter sensitivities for stochastic reaction networks

KAUST Repository

Gupta, Ankit

2016-01-07

Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a continuous-time Markov chain whose states represent the molecular counts of various species. For such models, effects of parameter uncertainty are often quantified by estimating the infinitesimal sensitivities of some observables with respect to model parameters. The aim of this talk is to present a holistic approach towards this problem of estimating parameter sensitivities for stochastic reaction networks. Our approach is based on a generic formula which allows us to construct efficient estimators for parameter sensitivity using simulations of the underlying model. We will discuss how novel simulation techniques, such as tau-leaping approximations, multi-level methods etc. can be easily integrated with our approach and how one can deal with stiff reaction networks where reactions span multiple time-scales. We will demonstrate the efficiency and applicability of our approach using many examples from the biological literature.
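A common baseline for the sensitivities discussed here is a finite difference of two simulations driven by common random numbers, which the more sophisticated estimators in the talk improve upon. The sketch below applies it to a pure-birth network, where the expectation E[X(T)] = n0·e^(kT) gives an exact sensitivity to check against; the model and values are invented for illustration.

```python
import math
import random

def final_count(k, n0, t_end, seed):
    """Gillespie simulation of a pure-birth process; returns X(t_end)."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    while True:
        dt = rng.expovariate(k * n)
        if t + dt > t_end:
            return n
        t += dt
        n += 1

k, n0, T, h, reps = 0.5, 10, 2.0, 0.05, 2000
# central finite difference, same seed for both runs (common random numbers)
est = sum(
    (final_count(k + h, n0, T, s) - final_count(k - h, n0, T, s)) / (2 * h)
    for s in range(reps)
) / reps
exact = n0 * T * math.exp(k * T)   # d/dk of n0*exp(k*T), about 54.4
```

Sharing the seed couples the two trajectories, so most of the Monte Carlo noise cancels in the difference; unbiased pathwise or likelihood-ratio estimators avoid the O(h²) bias that remains here.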

10. Preliminary Estimation of Kappa Parameter in Croatia

Science.gov (United States)

Stanko, Davor; Markušić, Snježana; Ivančić, Ines; Mario, Gazdek; Gülerce, Zeynep

2017-12-01

Spectral parameter kappa κ is used to describe the decay of spectral amplitude ("crash syndrome") at high frequencies. The purpose of this research is to estimate spectral parameter kappa for the first time in Croatia based on small and moderate earthquakes. Recordings of local earthquakes with magnitudes higher than 3, epicentre distances less than 150 km, and focal depths less than 30 km from seismological stations in Croatia are used. The value of kappa was estimated from the acceleration amplitude spectrum of shear waves, from the slope of the high-frequency part where the spectrum starts to decay rapidly to a noise floor. Kappa models as a function of site and distance were derived from a standard linear regression of the kappa-distance dependence. Site kappa was determined from the extrapolation of the regression line to zero distance. The preliminary results of site kappa across Croatia are promising. In this research, these results are compared with local site condition parameters for each station, e.g. shear wave velocity in the upper 30 m from geophysical measurements, and with existing global shear wave velocity - site kappa values. The spatial distribution of individual kappas is compared with the azimuthal distribution of earthquake epicentres. These results are significant for a couple of reasons: to extend the knowledge of the attenuation of near-surface crust layers of the Dinarides and to provide additional information on the local earthquake parameters for updating seismic hazard maps of the studied area. Site kappa can be used in the re-creation and re-calibration of the attenuation of peak horizontal and/or vertical acceleration in the Dinarides area, since information on the local site conditions was not included in previous studies.
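The slope-based estimate described above reduces to a linear regression of the log amplitude spectrum on frequency, since the high-frequency decay model is A(f) = A0·exp(-π·κ·f). A minimal sketch on a synthetic, noise-free spectrum (band and values invented):

```python
import numpy as np

f = np.linspace(10.0, 35.0, 100)              # high-frequency band in Hz
kappa_true = 0.04                             # seconds
amp = 2.0 * np.exp(-np.pi * kappa_true * f)   # synthetic spectral amplitudes

# kappa is read off the slope of log A(f) versus f
slope, _ = np.polyfit(f, np.log(amp), 1)
kappa_hat = -slope / np.pi                    # recovers 0.04
```

On real records the regression band must start at the frequency where the decay begins and stop before the spectrum reaches the noise floor, which is where most of the practical judgment in the method lies.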

11. Parameter estimation techniques for LTP system identification

Science.gov (United States)

Nofrarias Serra, Miquel

LISA Pathfinder (LPF) is the precursor mission of LISA (Laser Interferometer Space Antenna) and the first step towards gravitational wave detection in space. The main instrument onboard the mission is the LTP (LISA Technology Package), whose scientific goal is to test LISA's drag-free control loop by reaching a differential acceleration noise level between two masses in geodesic motion of 3 × 10⁻¹⁴ m s⁻²/√Hz in the milliHertz band. The mission is not only challenging in terms of technology readiness but also in terms of data analysis. As with any gravitational wave detector, attaining the instrument performance goals will require an extensive noise hunting campaign to measure all contributions with high accuracy. But, unlike on-ground experiments, LTP characterisation will only be possible by setting parameters via telecommands and getting a selected amount of information through the available telemetry downlink. These two conditions, high accuracy and high reliability, are the main restrictions that the LTP data analysis must overcome. A dedicated object oriented Matlab Toolbox (LTPDA) has been set up by the LTP analysis team for this purpose. Among the different toolbox methods, an essential part for the mission are the parameter estimation tools that will be used for system identification during operations: Linear Least Squares, Non-linear Least Squares and Markov Chain Monte Carlo methods have been implemented as LTPDA methods. The data analysis team has been testing those methods with a series of mock data exercises with the following objectives: to cross-check parameter estimation methods and compare the achievable accuracy for each of them, and to develop the best strategies to describe the physics underlying a complex controlled experiment such as the LTP. In this contribution we describe how these methods were tested with simulated LTP-like data to recover the parameters of the model, and we report on the latest results of these mock data exercises.
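Of the three estimation tools listed, non-linear least squares is the easiest to sketch in the same mock-data style: simulate noisy data from a known model, then recover the parameters. The model below is an invented first-order step response, not an LTP model, and the sketch has no connection to the LTPDA toolbox.

```python
import numpy as np
from scipy.optimize import least_squares

def model(theta, t):
    gain, tau = theta
    return gain * (1.0 - np.exp(-t / tau))     # step response of a 1st-order lag

rng = np.random.default_rng(42)
t = np.linspace(0.0, 10.0, 200)
theta_true = np.array([2.0, 1.5])
y = model(theta_true, t) + 0.02 * rng.standard_normal(t.size)

# non-linear least squares on the residuals, starting from a rough guess
fit = least_squares(lambda th: model(th, t) - y, x0=[1.0, 1.0])
gain_hat, tau_hat = fit.x                      # close to 2.0 and 1.5
```

The mock-data exercise then consists of repeating such recoveries across methods and noise realisations and comparing the achieved accuracy, as the abstract describes.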

12. Statistical distributions applications and parameter estimates

CERN Document Server

Thomopoulos, Nick T

2017-01-01

This book gives a description of the group of statistical distributions that have ample application to studies in statistics and probability.  Understanding statistical distributions is fundamental for researchers in almost all disciplines.  The informed researcher will select the statistical distribution that best fits the data in the study at hand.  Some of the distributions are well known to the general researcher and are in use in a wide variety of ways.  Other useful distributions are less understood and are not in common use.  The book describes when and how to apply each of the distributions in research studies, with a goal to identify the distribution that best applies to the study.  The distributions are for continuous, discrete, and bivariate random variables.  In most studies, the parameter values are not known a priori, and sample data is needed to estimate parameter values.  In other scenarios, no sample data is available, and the researcher seeks some insight that allows the estimate of ...

13. Statistical estimation of nuclear reactor dynamic parameters

International Nuclear Information System (INIS)

Cummins, J.D.

1962-02-01

This report discusses the study of the noise in nuclear reactors and associated power plant. The report is divided into three distinct parts. In the first part, parameters which influence the dynamic behaviour of some reactors will be specified and their effect on dynamic performance described. Methods of estimating dynamic parameters using statistical signals will be described in detail, together with descriptions of the usefulness of the results, the accuracy and related topics. Some experiments which have been and which might be performed on nuclear reactors will be described. In the second part of the report a digital computer programme will be described. The computer programme derives the correlation functions and the spectra of signals. The programme will compute the frequency response, both gain and phase, for physical items of plant for which simultaneous recordings of input and output signal variations have been made. Estimations of the accuracy of the correlation functions and the spectra may be computed using the programme, and the amplitude distribution of signals may also be computed. The programme is written in autocode for the Ferranti Mercury computer. In the third part of the report a practical example of the use of the method and the digital programme is presented. In order to eliminate difficulties of interpretation a very simple plant model was chosen, i.e. a simple first order lag. Several interesting properties of statistical signals were measured and will be discussed. (author)
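The frequency-response computation the programme performs can be reproduced today in a few lines: with simultaneous input/output records, the estimate is the ratio of the cross-spectral density to the input power spectral density, H(f) = Pxy(f)/Pxx(f). The discrete first-order lag below mirrors the report's example plant; all values are illustrative.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.standard_normal(65536)                  # broadband input record
b, a = [0.1], [1.0, -0.9]                       # discrete first-order lag
y = signal.lfilter(b, a, x)                     # simulated plant output

f, pxx = signal.welch(x, nperseg=1024)          # input power spectral density
_, pxy = signal.csd(x, y, nperseg=1024)         # input/output cross-spectrum
h_est = pxy / pxx                               # complex frequency response
gain, phase = np.abs(h_est), np.angle(h_est)

# exact response of the simulated plant, for comparison
_, h_true = signal.freqz(b, a, worN=f, fs=1.0)
```

Averaging over many segments (Welch's method) plays the role of the accuracy estimation the report describes: more averages reduce the random error of both spectra and of the derived gain and phase.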

14. Adaptive distributed parameter and input estimation in linear parabolic PDEs

KAUST Repository

Mechhoud, Sarra

2016-01-01

In this paper, we discuss the on-line estimation of distributed source term, diffusion, and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions of the input and the parameter simultaneous estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on tokamak plasma heat transport model using simulated data.

15. Revisiting Boltzmann learning: parameter estimation in Markov random fields

DEFF Research Database (Denmark)

Hansen, Lars Kai; Andersen, Lars Nonboe; Kjems, Ulrik

1996-01-01

This article presents a generalization of the Boltzmann machine that allows us to use the learning rule for a much wider class of maximum likelihood and maximum a posteriori problems, including both supervised and unsupervised learning. Furthermore, the approach allows us to discuss regularization...... and generalization in the context of Boltzmann machines. We provide an illustrative example concerning parameter estimation in an inhomogeneous Markov field. The regularized adaptation produces a parameter set that closely resembles the “teacher” parameters, hence, will produce segmentations that closely reproduce...

16. On Algebraic Approach for MSD Parametric Estimation

OpenAIRE

Oueslati , Marouene; Thiery , Stéphane; Gibaru , Olivier; Béarée , Richard; Moraru , George

2011-01-01

This article addresses the identification problem of the natural frequency and the damping ratio of a second order continuous system where the input is a sinusoidal signal. An algebra-based approach for identifying the parameters of a Mass Spring Damper (MSD) system is proposed and compared to the Kalman-Bucy filter. The proposed estimator uses the algebraic parametric method in the frequency domain, yielding exact formulae when placed in the time domain to identify the unknown parameters. We focus ...
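The two quantities being identified, natural frequency and damping ratio, can also be recovered by a plain non-linear least-squares fit of a free-decay record, which makes a useful reference point. This sketch is not the article's algebraic estimator (and uses a free response rather than a sinusoidal input); the signal values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amp, wn, zeta, phi):
    """Free-decay response of an underdamped second-order (MSD) system."""
    wd = wn * np.sqrt(1.0 - zeta**2)           # damped natural frequency
    return amp * np.exp(-zeta * wn * t) * np.cos(wd * t + phi)

rng = np.random.default_rng(7)
t = np.linspace(0.0, 5.0, 1000)
y = decay(t, 1.0, 10.0, 0.05, 0.0) + 0.01 * rng.standard_normal(t.size)

# initial guesses (e.g. from a peak count) must be near the true frequency
popt, _ = curve_fit(decay, t, y, p0=[0.8, 9.8, 0.08, 0.1])
amp_hat, wn_hat, zeta_hat, phi_hat = popt      # wn near 10, zeta near 0.05
```

The attraction of the algebraic method is precisely that it avoids this sensitivity to initial guesses and iterative optimisation.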

17. Parameter Estimation of Spacecraft Fuel Slosh Model

Science.gov (United States)

Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam Charles

2004-01-01

Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble), which can cause devastating control issues. The rate at which nutation develops (defined by the Nutation Time Constant, NTC) can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Pure analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refinement and understanding of the effects of these parameters allow for a more accurate simulation of fuel slosh. The current research will focus on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of the Nutation Time Constant obtained through simulation.

18. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

Science.gov (United States)

Campbell, D A; Chkrebtii, O

2013-12-01

Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

19. Assumptions of the primordial spectrum and cosmological parameter estimation

International Nuclear Information System (INIS)

2011-01-01

The observables of the perturbed universe, cosmic microwave background (CMB) anisotropy and large-scale structure, depend on a set of cosmological parameters, as well as the assumed nature of primordial perturbations. In particular, the shape of the primordial power spectrum (PPS) is, at best, a well-motivated assumption. It is known that the assumed functional form of the PPS in cosmological parameter estimation can affect the best-fit parameters and their relative confidence limits. In this paper, we demonstrate that a specific assumed form actually drives the best-fit parameters into distinct basins of likelihood in the space of cosmological parameters, where the likelihood resists improvement via modifications to the PPS. The regions where considerably better likelihoods are obtained with a free-form PPS lie outside these basins. In the absence of a preferred model of inflation, this raises a concern that current cosmological parameter estimates are strongly prejudiced by the assumed form of the PPS. Our results strongly motivate approaches toward simultaneous estimation of the cosmological parameters and the shape of the primordial spectrum from upcoming cosmological data. It is equally important for theorists to keep an open mind towards early universe scenarios that produce features in the PPS. (paper)

20. A Generic Approach to Parameter Control

NARCIS (Netherlands)

Karafotias, G.; Smit, S.K.; Eiben, A.E.

2012-01-01

On-line control of EA parameters is an approach to parameter setting that offers the advantage of values changing during the run. In this paper, we investigate parameter control from a generic and parameter-independent perspective. We propose a generic control mechanism that is targeted to

1. Estimation of object motion parameters from noisy images.

Science.gov (United States)

Broida, T J; Chellappa, R

1986-01-01

An approach is presented for the estimation of object motion parameters based on a sequence of noisy images. The problem considered is that of a rigid body undergoing unknown rotational and translational motion. The measurement data consists of a sequence of noisy image coordinates of two or more object correspondence points. By modeling the object dynamics as a function of time, estimates of the model parameters (including motion parameters) can be extracted from the data using recursive and/or batch techniques. This permits a desired degree of smoothing to be achieved through the use of an arbitrarily large number of images. Some assumptions regarding object structure are presently made. Results are presented for a recursive estimation procedure: the case considered here is that of a sequence of one dimensional images of a two dimensional object. Thus, the object moves in one transverse dimension, and in depth, preserving the fundamental ambiguity of the central projection image model (loss of depth information). An iterated extended Kalman filter is used for the recursive solution. Noise levels of 5-10 percent of the object image size are used. Approximate Cramer-Rao lower bounds are derived for the model parameter estimates as a function of object trajectory and noise level. This approach may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images (10 to 20 or more) are available.
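A linear Kalman filter on a constant-velocity model shows the recursive predict/update structure that the paper extends to an iterated extended Kalman filter on a projection model. This simplified 1-D sketch (all values invented) smooths noisy position measurements while estimating the unobserved velocity, in the same spirit of accumulating information over a long image sequence.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, steps, v_true, sigma = 0.1, 200, 1.0, 0.5
truth = v_true * dt * np.arange(steps)
z = truth + sigma * rng.standard_normal(steps)       # noisy position measurements

F = np.array([[1.0, dt], [0.0, 1.0]])                # constant-velocity transition
H = np.array([[1.0, 0.0]])                           # we observe position only
Q = 1e-5 * np.eye(2)                                 # small process noise
R = np.array([[sigma**2]])                           # measurement noise

x = np.zeros(2)                                      # state: [position, velocity]
P = 10.0 * np.eye(2)                                 # diffuse initial uncertainty
for zk in z:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the new measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([zk]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

v_hat = x[1]                                         # converges near v_true = 1.0
```

The extended/iterated variants needed for central-projection imaging relinearize the nonlinear measurement function around the current estimate at each update, but the recursion is otherwise the same.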

2. Estimating parameters of chaotic systems synchronized by external driving signal

International Nuclear Information System (INIS)

Wu Xiaogang; Wang Zuxi

2007-01-01

Noise-induced synchronization (NIS) has evoked great research interest recently. Two uncoupled identical chaotic systems can achieve complete synchronization (CS) by feeding a common noise with appropriate intensity. Actually, NIS belongs to the category of external feedback control (EFC). The significance of applying EFC in secure communication lies in the fact that the trajectory of chaotic systems is disturbed so strongly by the external driving signal that phase space reconstruction attack fails. In this paper, however, we propose an approach that can accurately estimate the parameters of chaotic systems synchronized by an external driving signal through the chaotic transmitted signal, the driving signal and their derivatives. Numerical simulation indicates that this approach can estimate system parameters and external coupling strength under two driving modes in a very rapid manner, which implies that EFC is not superior to other methods in secure communication.

3. Gravity Field Parameter Estimation Using QR Factorization

Science.gov (United States)

Klokocnik, J.; Wagner, C. A.; McAdoo, D.; Kostelecky, J.; Bezdek, A.; Novak, P.; Gruber, C.; Marty, J.; Bruinsma, S. L.; Gratton, S.; Balmino, G.; Baboulin, M.

2007-12-01

This study compares the accuracy of the estimated geopotential coefficients when QR factorization is used instead of the classical method applied at our institute, namely the generation of normal equations that are solved by means of Cholesky decomposition. The objective is to evaluate the gain in numerical precision, which is obtained at considerable extra cost in terms of computer resources. Therefore, a significant increase in precision must be realized in order to justify the additional cost. Numerical simulations were done in order to examine the performance of both solution methods. Reference gravity gradients were simulated, using the EIGEN-GL04C gravity field model to degree and order 300, every 3 seconds along a near-circular, polar orbit at 250 km altitude. The simulation spanned a total of 60 days. A polar orbit was selected in this simulation in order to avoid the 'polar gap' problem, which causes inaccurate estimation of the low-order spherical harmonic coefficients. Regularization is required in that case (e.g., the GOCE mission), which is not the subject of the present study. The simulated gravity gradients, to which white noise was added, were then processed with the GINS software package, applying EIGEN-CG03 as the background gravity field model, followed either by the usual normal equation computation or using the QR approach for incremental linear least squares. The accuracy assessment of the gravity field recovery consists in computing the median error degree-variance spectra, accumulated geoid errors, geoid errors due to individual coefficients, and geoid errors calculated on a global grid. The performance, in terms of memory usage, required disk space, and CPU time, of the QR versus the normal equation approach is also evaluated.
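The precision gap between the two solution methods is easy to demonstrate at small scale: forming the normal equations squares the condition number (cond(AᵀA) = cond(A)²), while QR works on A directly. A sketch on an ill-conditioned polynomial least-squares problem (sizes and basis chosen only for illustration, nothing like the gravity-field dimensions above):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)
A = np.vander(x, 8, increasing=True)         # monomial basis: ill-conditioned
c_true = np.ones(8)
b = A @ c_true

# normal equations + Cholesky: condition number gets squared
L = np.linalg.cholesky(A.T @ A)
c_ne = np.linalg.solve(L.T, np.linalg.solve(L, A.T @ b))

# QR factorization: triangular solve against R, no squaring
Q, R = np.linalg.qr(A)
c_qr = np.linalg.solve(R, Q.T @ b)

err_ne = np.linalg.norm(c_ne - c_true)
err_qr = np.linalg.norm(c_qr - c_true)       # typically orders of magnitude smaller
```

The trade-off the study quantifies is exactly this: the QR route is more accurate, but on problems with tens of thousands of unknowns it costs considerably more memory and CPU time than accumulating and solving normal equations.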

4. Global parameter estimation for thermodynamic models of transcriptional regulation.

Science.gov (United States)

Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N

2013-07-15

Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
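The local-versus-global contrast of the study can be illustrated in miniature. SciPy ships a Nelder-Mead simplex but no CMA-ES, so the sketch below substitutes differential evolution as the global strategy, on a standard multimodal test function rather than on transcription models.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def rastrigin(x):
    """Standard multimodal benchmark: global minimum 0 at the origin."""
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

# local simplex search from a poor starting point gets trapped
local = minimize(rastrigin, x0=[3.2, 3.2], method="Nelder-Mead")

# global evolutionary search over the whole box typically reaches the optimum
glob = differential_evolution(rastrigin, bounds=[(-5.12, 5.12)] * 2, seed=1)
```

As in the paper, the global method's advantage shows on surfaces with many basins; on a smooth, nearly quadratic fit landscape (the analogue of heavily thresholded data) the cheap local search does just as well.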

5. Estimating cellular parameters through optimization procedures: elementary principles and applications

Directory of Open Access Journals (Sweden)

Akatsuki Kimura

2015-03-01

Full Text Available Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus obtain mechanistic insights into phenomena of interest.
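The Bayesian sampling idea in the last step can be shown in a few lines of random-walk Metropolis: for a single mean parameter with a flat prior and known unit noise, the sampled posterior should centre on the sample mean. All values here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)
data = 3.0 + rng.standard_normal(50)          # observations with true mean 3

def log_post(mu):
    # flat prior + Gaussian likelihood with unit variance (up to a constant)
    return -0.5 * np.sum((data - mu) ** 2)

mu, chain = 0.0, []
lp = log_post(mu)
for _ in range(20000):
    prop = mu + 0.3 * rng.standard_normal()   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        mu, lp = prop, lp_prop
    chain.append(mu)

posterior_mean = np.mean(chain[2000:])        # discard burn-in samples
```

The spread of the retained chain estimates the form of the likelihood around the optimum, which is the extra information sampling provides over a bare gradient-based point estimate.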

6. Estimating Function Approaches for Spatial Point Processes

Science.gov (United States)

Deng, Chong

Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives to balance the trade-off between computational complexity and estimating efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting

7. Parameter estimation in fractional diffusion models

CERN Document Server

Kubilius, Kęstutis; Ralchenko, Kostiantyn

2017-01-01

This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is “white,” i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependences, i.e., the correlations of random noise in these processes are non-zero, and slowly or rapidly decrease with time. In particular, models of financial markets demonstrate various kinds of memory and usually this memory is modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides s...

8. Dual ant colony operational modal analysis parameter estimation method

Science.gov (United States)

Sitarz, Piotr; Powałka, Bartosz

2018-01-01

Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain while others operate in the frequency domain. The former use correlation functions, the latter spectral density functions. However, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.

9. Pollen parameters estimates of genetic variability among newly ...

African Journals Online (AJOL)

Pollen parameters estimates of genetic variability among newly selected Nigerian roselle (Hibiscus sabdariffa L.) genotypes. ... Estimates of some pollen parameters were used to assess the genetic diversity among ...

10. Estimation of light transport parameters in biological media using ...

Estimation of light transport parameters in biological media using coherent backscattering ... backscattered light for estimating the light transport parameters of biological media has been investigated. ... Pramana – Journal of Physics.

11. Time-course window estimator for ordinary differential equations linear in the parameters

NARCIS (Netherlands)

Vujacic, Ivan; Dattner, Itai; Gonzalez, Javier; Wit, Ernst

In many applications obtaining ordinary differential equation descriptions of dynamic processes is scientifically important. In both, Bayesian and likelihood approaches for estimating parameters of ordinary differential equations, the speed and the convergence of the estimation procedure may
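For an ODE that is linear in its parameters, a smoothed state estimate converts parameter estimation into ordinary linear regression with no numerical ODE solving, which is the general idea behind such direct estimators. This sketch is not the authors' window estimator; the model and values are invented.

```python
import numpy as np

# linear-in-parameter ODE: x'(t) = -theta * x(t), true theta = 0.7
t = np.linspace(0.0, 5.0, 501)
x = np.exp(-0.7 * t)                 # smoothed state estimate (noise-free here)
dx = np.gradient(x, t)               # numerical derivative of the state

# least-squares solution of dx = -theta * x in the single parameter theta
theta_hat = -np.sum(dx * x) / np.sum(x * x)
```

With noisy data the derivative step amplifies noise, which is why practical versions first smooth the state (splines, kernels, or local windows) before forming the regression.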

12. Parameter and state estimation in nonlinear dynamical systems

Science.gov (United States)

Creveling, Daniel R.

This thesis is concerned with the problem of state and parameter estimation in nonlinear systems. The need to evaluate unknown parameters in models of nonlinear physical, biophysical and engineering systems occurs throughout the development of phenomenological or reduced models of dynamics. When verifying and validating these models, it is important to incorporate information from observations in an efficient manner. Using the idea of synchronization of nonlinear dynamical systems, this thesis develops a framework for presenting data to a candidate model of a physical process in a way that makes efficient use of the measured data while allowing for estimation of the unknown parameters in the model. The approach presented here builds on existing work that uses synchronization as a tool for parameter estimation. Some critical issues of stability in that work are addressed and a practical framework is developed for overcoming these difficulties. The central issue is the choice of coupling strength between the model and data. If the coupling is too strong, the model will reproduce the measured data regardless of the adequacy of the model or correctness of the parameters. If the coupling is too weak, nonlinearities in the dynamics could lead to complex dynamics rendering any cost function comparing the model to the data inadequate for the determination of model parameters. Two methods are introduced which seek to balance the need for coupling with the desire to allow the model to evolve in its natural manner without coupling. One method, 'balanced' synchronization, adds to the synchronization cost function a requirement that the conditional Lyapunov exponents of the model system, conditioned on being driven by the data, remain negative but small in magnitude. Another method allows the coupling between the data and the model to vary in time according to a specific form of differential equation. The coupling dynamics is damped to allow for a tendency toward zero coupling
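The synchronization idea can be sketched in miniature: couple a model Lorenz system to "data" through its x equation and score candidate parameter values by the residual synchronization error, which is smallest at the true value. The coupling strength, initial conditions and parameter grid below are invented, and the thesis's balanced and time-varying coupling schemes are not implemented.

```python
import numpy as np

def lorenz_step(state, sigma, dt, drive=None, k=0.0):
    """One Euler step of the Lorenz system, optionally coupled to a drive in x."""
    x, y, z = state
    dx = sigma * (y - x) + (k * (drive - x) if drive is not None else 0.0)
    dy = x * (28.0 - z) - y
    dz = x * y - (8.0 / 3.0) * z
    return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

dt, steps = 0.005, 4000
data = np.empty(steps)
s = np.array([1.0, 1.0, 1.0])
for i in range(steps):                       # generate "measured" x(t), sigma = 10
    s = lorenz_step(s, 10.0, dt)
    data[i] = s[0]

def sync_cost(sigma):
    m = np.array([-3.0, 2.0, 25.0])          # model starts far from the data
    err = 0.0
    for i in range(steps):
        m = lorenz_step(m, sigma, dt, drive=data[i], k=10.0)
        if i > steps // 2:                   # score only after the transient
            err += (m[0] - data[i]) ** 2
    return err

candidates = np.linspace(8.0, 12.0, 9)
best = min(candidates, key=sync_cost)        # minimised at the true sigma = 10
```

The coupling strength k plays exactly the role discussed in the abstract: too weak and the cost surface becomes irregular, too strong and the model tracks the data regardless of the parameter value, flattening the minimum.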

13. ESTIMATION OF DISTANCES TO STARS WITH STELLAR PARAMETERS FROM LAMOST

Energy Technology Data Exchange (ETDEWEB)

Carlin, Jeffrey L.; Newberg, Heidi Jo [Department of Physics, Applied Physics and Astronomy, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States); Liu, Chao; Deng, Licai; Li, Guangwei; Luo, A-Li; Wu, Yue; Yang, Ming; Zhang, Haotong [Key Lab of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China); Beers, Timothy C. [Department of Physics and JINA: Joint Institute for Nuclear Astrophysics, University of Notre Dame, 225 Nieuwland Science Hall, Notre Dame, IN 46556 (United States); Chen, Li; Hou, Jinliang; Smith, Martin C. [Shanghai Astronomical Observatory, 80 Nandan Road, Shanghai 200030 (China); Guhathakurta, Puragra [UCO/Lick Observatory, Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Hou, Yonghui [Nanjing Institute of Astronomical Optics and Technology, National Astronomical Observatories, Chinese Academy of Sciences, Nanjing 210042 (China); Lépine, Sébastien [Department of Physics and Astronomy, Georgia State University, 25 Park Place, Suite 605, Atlanta, GA 30303 (United States); Yanny, Brian [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Zheng, Zheng, E-mail: jeffreylcarlin@gmail.com [Department of Physics and Astronomy, University of Utah, UT 84112 (United States)

2015-07-15

We present a method to estimate distances to stars with spectroscopically derived stellar parameters. The technique is a Bayesian approach with likelihood estimated via comparison of measured parameters to a grid of stellar isochrones, and returns a posterior probability density function for each star’s absolute magnitude. This technique is tailored specifically to data from the Large Sky Area Multi-object Fiber Spectroscopic Telescope (LAMOST) survey. Because LAMOST obtains roughly 3000 stellar spectra simultaneously within each ∼5° diameter “plate” that is observed, we can use the stellar parameters of the observed stars to account for the stellar luminosity function and target selection effects. This removes biasing assumptions about the underlying populations, both due to predictions of the luminosity function from stellar evolution modeling, and from Galactic models of stellar populations along each line of sight. Using calibration data of stars with known distances and stellar parameters, we show that our method recovers distances for most stars within ∼20%, but with some systematic overestimation of distances to halo giants. We apply our code to the LAMOST database, and show that the current precision of LAMOST stellar parameters permits measurements of distances with ∼40% error bars. This precision should improve as the LAMOST data pipelines continue to be refined.
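The grid-based Bayesian idea can be illustrated with a hedged toy example (the "isochrone" points and measurement uncertainties below are invented, not LAMOST data): each grid point is weighted by a Gaussian likelihood of the measured stellar parameters, and the posterior-mean absolute magnitude is converted to a distance through the distance modulus.

```python
import math

# Toy "isochrone grid": (T_eff [K], log g, absolute magnitude M).
grid = [
    (5200.0, 4.6, 6.0),
    (5800.0, 4.4, 4.8),
    (6400.0, 4.2, 3.9),
    (4800.0, 2.5, 0.8),   # a giant
]

def posterior_mean_abs_mag(teff_obs, logg_obs, sig_teff=100.0, sig_logg=0.2):
    """Weight each grid point by a Gaussian likelihood of the measured
    stellar parameters; return the posterior-mean absolute magnitude."""
    wsum, msum = 0.0, 0.0
    for teff, logg, mabs in grid:
        w = math.exp(-0.5 * (((teff_obs - teff) / sig_teff) ** 2
                             + ((logg_obs - logg) / sig_logg) ** 2))
        wsum += w
        msum += w * mabs
    return msum / wsum

m_app = 12.0                                # apparent magnitude
M = posterior_mean_abs_mag(5800.0, 4.4)
d_pc = 10.0 ** ((m_app - M) / 5.0 + 1.0)    # distance modulus -> parsecs
```

The paper's method returns a full posterior over absolute magnitude and additionally reweights the grid by the luminosity function observed on each plate; this sketch keeps only the likelihood-weighting step.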

14. Estimating parameters for probabilistic linkage of privacy-preserved datasets.

Science.gov (United States)

Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H

2017-07-10

than the F-measure using calculated probabilities. Further, the threshold estimation yielded results for F-measure that were only slightly below the highest possible for those probabilities. The method appears highly accurate across a spectrum of datasets with varying degrees of error. As there are few alternatives for parameter estimation, the approach is a major step towards providing a complete operational approach for probabilistic linkage of privacy-preserved datasets.

15. A method for model identification and parameter estimation

International Nuclear Information System (INIS)

Bambach, M; Heinkenschloss, M; Herty, M

2013-01-01

We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)
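A much-simplified sketch of the two-stage idea (here stage one is reduced to comparing best-fit residuals across candidate models on fixed data, rather than designing the discriminating experiments the paper analyzes; the models and data are invented):

```python
# Two candidate one-parameter models; stage 1 picks the model with the
# smallest attainable residual, stage 2 keeps that model's parameter fit.
models = {
    "linear":    lambda a, x: a * x,
    "quadratic": lambda a, x: a * x * x,
}

def fit(model, xs, ys):
    """Least squares over the scalar parameter a via a coarse 1-D grid."""
    best_a, best_r = None, float("inf")
    for i in range(501):                    # a in [0, 5], step 0.01
        a = 0.01 * i
        r = sum((model(a, x) - y) ** 2 for x, y in zip(xs, ys))
        if r < best_r:
            best_a, best_r = a, r
    return best_a, best_r

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x * x for x in xs]              # data from the quadratic model, a = 2

fits = {name: fit(f, xs, ys) for name, f in models.items()}
best_model = min(fits, key=lambda n: fits[n][1])   # stage 1: identify the model
a_hat = fits[best_model][0]                        # stage 2: its parameter
```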

16. METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS

Directory of Open Access Journals (Sweden)

V. Panteleev Andrei

2017-01-01

The article considers the use of metaheuristic methods for constrained global optimization ("Big Bang - Big Crunch", "Fireworks Algorithm", "Grenade Explosion Method") in estimating the parameters of dynamic systems described by algebraic-differential equations. Parameter estimation is based on observations of the behavior of the mathematical model. Parameter values are obtained by minimizing a criterion describing the total squared deviation of the state vector coordinates from the observed values at different instants of time. A parallelepiped-type constraint is imposed on the parameter values. The metaheuristic methods of constrained global optimization used here do not guarantee an optimum, but allow a solution of rather good quality to be obtained in an acceptable amount of time. An algorithm for applying the metaheuristic methods is given. Alongside explicit methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two parameter estimation problems, differing in their mathematical models, are presented. In the first example a linear mathematical model describes changes in the parameters of a chemical reaction, and in the second a nonlinear mathematical model describes predator-prey dynamics, characterizing the changes in both populations. For each example, computational results from all three optimization methods are reported, together with recommendations on how to choose the methods' parameters. The numerical results demonstrate the efficiency of the proposed approach. The estimated parameter values differ only slightly from the best known solutions, which were obtained by other means. To refine the results one should apply hybrid schemes that combine classical optimization methods of zero, first and second orders and
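A minimal stand-in for the approach (plain random search inside the parallelepiped rather than the three named metaheuristics, and an invented predator-prey setup fitted to its own synthetic observations):

```python
import random

def lotka_volterra(alpha, beta, x0=10.0, y0=5.0, dt=0.01, steps=500):
    """Euler integration of dx/dt = alpha*x - beta*x*y, dy/dt = beta*x*y - y;
    returns prey samples every 50 steps."""
    x, y = x0, y0
    out = []
    for i in range(steps):
        x, y = (x + dt * (alpha * x - beta * x * y),
                y + dt * (beta * x * y - y))
        if i % 50 == 0:
            out.append(x)
    return out

def cost(p, data):
    return sum((s - d) ** 2 for s, d in zip(lotka_volterra(*p), data))

data = lotka_volterra(1.0, 0.2)       # synthetic observations; truth is (1.0, 0.2)

# Random search inside the parallelepiped [0.5, 1.5] x [0.05, 0.5] -- a crude
# stand-in for the Big Bang - Big Crunch / Fireworks / Grenade Explosion methods.
random.seed(0)
best_p = (1.0, 0.275)                 # box centre as the starting guess
best_cost = cost(best_p, data)
for _ in range(2000):
    p = (random.uniform(0.5, 1.5), random.uniform(0.05, 0.5))
    c = cost(p, data)
    if c < best_cost:
        best_p, best_cost = p, c
```

The named metaheuristics differ in how they generate new candidate points, but all share this evaluate-inside-the-box structure.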

17. Estimation of Snow Parameters from Dual-Wavelength Airborne Radar

Science.gov (United States)

Liao, Liang; Meneghini, Robert; Iguchi, Toshio; Detwiler, Andrew

1997-01-01

Estimation of snow characteristics from airborne radar measurements would complement in-situ measurements. While in-situ data provide more detailed information than radar, they are limited in their space-time sampling. In the absence of significant cloud water contents, dual-wavelength radar data can be used to estimate two parameters of a drop size distribution if the snow density is assumed. To estimate, rather than assume, a snow density is difficult, however, and represents a major limitation in the radar retrieval. There are a number of ways that this problem can be investigated: direct comparisons with in-situ measurements, examination of the large-scale characteristics of the retrievals and their comparison to cloud model outputs, use of LDR measurements, and comparisons to the theoretical results of Passarelli (1978) and others. In this paper we address the first approach and, in part, the second.

18. Multi-objective optimization in quantum parameter estimation

Science.gov (United States)

Gong, BeiLi; Cui, Wei

2018-04-01

We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.

19. Estimation of Poisson-Dirichlet Parameters with Monotone Missing Data

Directory of Open Access Journals (Sweden)

Xueqin Zhou

2017-01-01

This article considers the estimation of the unknown numerical parameters and of the density of the base measure in a Poisson-Dirichlet process prior with grouped monotone missing data. The numerical parameters are estimated by maximum likelihood and the density function by a kernel method. A set of simulations shows that the estimates perform well.

20. Applicability of genetic algorithms to parameter estimation of economic models

Directory of Open Access Journals (Sweden)

Marcel Ševela

2004-01-01

The paper concentrates on the capability of genetic algorithms for parameter estimation of nonlinear economic models. We test the ability of genetic algorithms to estimate the parameters of a demand function for durable goods, and simultaneously search for the parameters of the genetic algorithm itself that maximize the effectiveness of the computation. Genetic algorithms combine deterministic iterative computation with stochastic methods: each candidate solution is represented by one individual, and the lives of all generations of individuals run under a few genetic-algorithm parameters. Our simulations yielded an optimal mutation rate of 15% of all bits in the chromosomes and an optimal elitism rate of 20%. We could not determine an optimal population size, because it is positively correlated with the effectiveness of the genetic algorithm over the whole range investigated, although its impact is decreasing. The genetic algorithm used was most sensitive to the mutation rate, then to the population size; the sensitivity to the elitism rate is not as strong.
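A toy real-coded genetic algorithm using the abstract's 15% mutation and 20% elitism rates, fitting an invented one-parameter demand function (illustrative only; the paper uses a binary-encoded GA on a real demand model):

```python
import random

random.seed(1)

data = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]     # (price, quantity), from q = 4/p

def fitness(a):
    """Negative squared error of the one-parameter demand model q = a / p."""
    return -sum((a / p - q) ** 2 for p, q in data)

pop = [random.uniform(0.0, 10.0) for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:8]                              # 20% elitism
    children = []
    while len(children) < len(pop) - len(elite):
        p1, p2 = random.sample(elite, 2)         # parent selection
        child = 0.5 * (p1 + p2)                  # arithmetic crossover
        if random.random() < 0.15:               # 15% mutation rate
            child += random.gauss(0.0, 0.5)
        children.append(child)
    pop = elite + children
best = max(pop, key=fitness)
```

Elitism makes the best fitness monotone non-decreasing across generations, which is why the method reliably homes in on the demand parameter here.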

1. Statistical methods of parameter estimation for deterministically chaotic time series

Science.gov (United States)

Pisarenko, V. F.; Sornette, D.

2004-03-01

We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments) to a deterministically chaotic low-dimensional dynamical system (the logistic map) containing observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to, but simpler than and with smaller bias than, the “multiple shooting” method previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for estimating the parameter of the logistic map is also discussed. This method appears to be the only one whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
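A simplified sketch of the estimation problem (a plain least-squares sweep over the structural parameter r and the initial value x1, not the piece-wise ML method itself), deliberately using a short noisy segment because nearby chaotic trajectories diverge exponentially and long fits become unusable:

```python
import random

random.seed(2)

def logistic_series(r, x1, n):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    xs = [x1]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Short noisy observations of a chaotic segment (truth: r = 3.8, x1 = 0.3).
r_true, x1_true, n = 3.8, 0.3, 12
obs = [x + random.gauss(0.0, 0.01) for x in logistic_series(r_true, x1_true, n)]

def sse(r, x1):
    return sum((m - o) ** 2 for m, o in zip(logistic_series(r, x1, n), obs))

# Joint sweep over the structural parameter r and the initial value x1:
best = min(((r0 / 1000.0, x0 / 1000.0)
            for r0 in range(3500, 4001, 5)       # r in [3.5, 4.0]
            for x0 in range(200, 401, 5)),       # x1 in [0.2, 0.4]
           key=lambda p: sse(*p))
```

Because the map is chaotic, any grid point other than the truth diverges from the observations within a few iterations, which is why the sum of squared errors identifies both r and x1 sharply on such a short segment.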

2. Chloramine demand estimation using surrogate chemical and microbiological parameters.

Science.gov (United States)

Moradi, Sina; Liu, Sanly; Chow, Christopher W K; van Leeuwen, John; Cook, David; Drikas, Mary; Amal, Rose

2017-07-01

A model is developed to enable estimation of chloramine demand in full-scale drinking water supplies based on chemical and microbiological factors that affect the chloramine decay rate, via nonlinear regression analysis. The model is based on the organic character (specific ultraviolet absorbance, SUVA) of the water samples and a laboratory measure of the microbiological decay of chloramine (Fm). The applicability of the model for estimation of chloramine residual (and hence chloramine demand) was tested on several waters from different water treatment plants in Australia through statistical comparison of the experimental and predicted data. Results showed that the model was able to simulate and estimate chloramine demand at various times in real drinking water systems. To capture the loss of chloramine over the wide variation of water quality used in this study, the model incorporates both fast and slow chloramine decay pathways. The significance of the estimated fast and slow decay rate constants, the kinetic parameters of the model, is discussed for three water sources in Australia. It was found that, for the same water source, the kinetic parameters remain the same. This modelling approach has the potential to be used by water treatment operators as a decision support tool for managing chloramine disinfection. Copyright © 2017. Published by Elsevier B.V.
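The fast-plus-slow decay structure can be sketched as a two-exponential model whose rate constants are recovered by least squares (all numbers invented; the paper's actual model additionally ties the kinetics to SUVA and the microbiological factor Fm):

```python
import math

def model(k_fast, k_slow, t, c0=2.0, frac=0.3):
    """Two-pathway decay: a fast and a slow first-order loss of chloramine."""
    return c0 * (frac * math.exp(-k_fast * t)
                 + (1.0 - frac) * math.exp(-k_slow * t))

times = [0.0, 2.0, 5.0, 10.0, 24.0, 48.0, 72.0]        # hours
conc = [model(0.30, 0.01, t) for t in times]           # synthetic "measurements"

def residual(k_fast, k_slow):
    return sum((model(k_fast, k_slow, t) - c) ** 2 for t, c in zip(times, conc))

# Grid search over the fast and slow decay rate constants:
best = min(((kf / 100.0, ks / 1000.0)
            for kf in range(5, 61, 5)                  # k_fast in [0.05, 0.60] /h
            for ks in range(1, 31)),                   # k_slow in [0.001, 0.030] /h
           key=lambda p: residual(*p))
```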

3. Model parameters estimation and sensitivity by genetic algorithms

International Nuclear Information System (INIS)

Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

2003-01-01

In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithms (GAs) optimization procedure for the estimation of such parameters. The Genetic Algorithms' search of the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large amount of solution points which possibly carries relevant information on the underlying model characteristics. A possible utilization of this information amounts to create and update an archive with the set of best solutions found at each generation and then to analyze the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model of literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way to stabilize first the most important parameters of the model and later those which influence little the model outputs. In this sense, besides estimating efficiently the parameters values, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output. The

4. Parameter estimation and testing of hypotheses

International Nuclear Information System (INIS)

Fruhwirth, R.

1996-01-01

This lecture presents the basic mathematical ideas underlying the concept of a random variable and the construction and analysis of estimators and test statistics. The material presented is based mainly on four books given in the references: the general exposition of estimators and test statistics follows Kendall and Stuart, which is a comprehensive review of the field; the book by Eadie et al. contains selected topics of particular interest to experimental physicists and a host of illuminating examples from experimental high-energy physics; for the presentation of numerical procedures, the Press et al. and Thisted books have been used. The last section deals with estimation in dynamic systems. In most books the Kalman filter is presented in a Bayesian framework, often obscured by cumbrous notation. In this lecture, the link to classical least-squares estimators and regression models is stressed, with the aim of facilitating access to this less familiar topic. References are given for specific applications to track and vertex fitting and for extended expositions of these topics. In the appendix, the link between Bayesian decision rules and feed-forward neural networks is presented. (J.S.). 10 refs., 5 figs., 1 appendix

5. Parameter estimation in tree graph metabolic networks

NARCIS (Netherlands)

Astola, Laura; Stigter, Hans; Gomez Roldan, Maria Victoria; Eeuwijk, van Fred; Hall, Robert D.; Groenenboom, Marian; Molenaar, Jaap J.

2016-01-01

We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme

6. Neglect Of Parameter Estimation Uncertainty Can Significantly Overestimate Structural Reliability

Directory of Open Access Journals (Sweden)

Rózsás Árpád

2015-12-01

Parameter estimation uncertainty is often neglected in reliability studies: point estimates of distribution parameters are used for representative fractiles and in probabilistic models. A numerical example examines the effect of this uncertainty on structural reliability using Bayesian statistics. The study reveals that neglecting parameter estimation uncertainty may lead to an order-of-magnitude underestimation of the failure probability.
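The effect can be reproduced in a few lines: a failure probability computed from a point estimate of the resistance's standard deviation versus one averaged over a toy posterior for that parameter (all distributions and numbers invented). Averaging over parameter uncertainty fattens the tail and raises the failure probability by roughly an order of magnitude here.

```python
import math

def pf(mu, sigma, s_load):
    """Failure probability P(R < s_load) for resistance R ~ N(mu, sigma)."""
    z = (s_load - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, s_load = 10.0, 4.0
sigma_hat = 1.5                                  # point estimate from data

# Toy discrete posterior over sigma, standing in for estimation uncertainty:
sigmas, weights = [1.0, 1.5, 2.0], [0.25, 0.5, 0.25]

pf_point = pf(mu, sigma_hat, s_load)                    # neglects uncertainty
pf_bayes = sum(w * pf(mu, s, s_load) for s, w in zip(sigmas, weights))
```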

7. Transport parameter estimation from lymph measurements and the Patlak equation.

Science.gov (United States)

Watson, P D; Wolf, M B

1992-01-01

Two methods of estimating protein transport parameters from plasma-to-lymph transport data are presented. Both use IBM-compatible computers to obtain least-squares estimates of the solvent drag reflection coefficient and the permeability-surface area product using the Patlak equation. A matrix search approach is described, and its speed and convenience are compared with those of a commercially available gradient method. The results from both of these methods were different from those of a method reported by Reed, Townsley, and Taylor [Am. J. Physiol. 257 (Heart Circ. Physiol. 26): H1037-H1041, 1989]. It is shown that the Reed et al. method contains a systematic error. It is also shown that diffusion always plays an important role in transmembrane transport at the exit end of a membrane channel, under all conditions of lymph flow rate, and that the claim that diffusion becomes zero at high lymph flow rate depends on a particular mathematical definition of diffusion.

8. Averaging models: parameters estimation with the R-Average procedure

Directory of Open Access Journals (Sweden)

S. Noventa

2010-01-01

The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared with the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By using multiple information criteria in the model selection procedure, R-Average allows identification of the best subset of parameters that accounts for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

9. Synchronization and parameter estimations of an uncertain Rikitake system

International Nuclear Information System (INIS)

Aguilar-Ibanez, Carlos; Martinez-Guerra, Rafael; Aguilar-Lopez, Ricardo; Mata-Machuca, Juan L.

2010-01-01

In this Letter we address the synchronization and parameter estimation of the uncertain Rikitake system, under the assumption that the state is only partially known. To this end we use the master/slave scheme in conjunction with adaptive control. Our approach consists of proposing a slave system that must asymptotically follow the uncertain Rikitake system, referred to as the master system. The gains of the slave system are adjusted continually according to a convenient adaptive control law until the measurable output errors converge to zero. The convergence analysis is carried out using Barbalat's lemma. In this context, uncertainty means that although the system structure is known, only partial knowledge of the corresponding parameter values is available.

10. minimum variance estimation of yield parameters of rubber tree

African Journals Online (AJOL)

2013-03-01

Mar 1, 2013 ... It is our opinion that Kalman filter is a robust estimator of the ... Kalman filter, parameter estimation, rubber clones, Chow failure test, autocorrelation, STAMP, data ...... Mills, T.C. Modelling Current Temperature Trends.

11. Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

DEFF Research Database (Denmark)

Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt

2015-01-01

We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non...... to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super...... interpolation increases the estimation precision....

12. Estimation of a collision impact parameter

International Nuclear Information System (INIS)

Shmatov, S.V.; Zarubin, P.I.

2001-01-01

We demonstrate that the nuclear collision geometry (i.e. impact parameter) can be determined in an event-by-event analysis by measuring the transverse energy flow in the pseudorapidity region 3≤|η|≤5 with a minimal dependence on collision dynamics details at the LHC energy scale. Using the HIJING model we have illustrated our calculation by a simulation of events of nucleus-nucleus interactions at the c.m.s. energy from 1 up to 5.5 TeV per nucleon and various types of nuclei

13. Application of isotopic information for estimating parameters in Philip infiltration model

Directory of Open Access Journals (Sweden)

Tao Wang

2016-10-01

Minimizing parameter uncertainty is crucial in the application of hydrologic models. Isotopic information in the various hydrologic components of the water cycle can expand our knowledge of the dynamics of water flow in the system, provide additional information for parameter estimation, and improve parameter identifiability. This study combined the Philip infiltration model with an isotopic mixing model, using an isotopic mass balance approach, to estimate the parameters of the Philip infiltration model. Two approaches to parameter estimation were compared: (a) using isotopic information to determine the soil water transmission and then hydrologic information to estimate the soil sorptivity, and (b) using hydrologic information to determine both the soil water transmission and the soil sorptivity. Results of parameter estimation were verified through a rainfall infiltration experiment in a laboratory under rainfall with constant isotopic composition and uniform initial soil water content. The experimental results showed that approach (a), using isotopic and hydrologic information, estimated the soil water transmission in the Philip infiltration model in close agreement with measured values, and its parameter estimates were better than those of approach (b). It was also found that the analytical precision of hydrogen and oxygen stable isotope measurements has a significant effect on parameter estimation using isotopic information.

14. Statistical estimation of ultrasonic propagation path parameters for aberration correction.

Science.gov (United States)

Waag, Robert C; Astheimer, Jeffrey P

2005-05-01

Parameters in a linear filter model for ultrasonic propagation are found using statistical estimation. The model uses an inhomogeneous-medium Green's function that is decomposed into a homogeneous-transmission term and a path-dependent aberration term. Power and cross-power spectra of random-medium scattering are estimated over the frequency band of the transmit-receive system by using closely situated scattering volumes. The frequency-domain magnitude of the aberration is obtained from a normalization of the power spectrum. The corresponding phase is reconstructed from cross-power spectra of subaperture signals at adjacent receive positions by a recursion. The subapertures constrain the receive sensitivity pattern to eliminate measurement system phase contributions. The recursion uses a Laplacian-based algorithm to obtain phase from phase differences. Pulse-echo waveforms were acquired from a point reflector and a tissue-like scattering phantom through a tissue-mimicking aberration path from neighboring volumes having essentially the same aberration path. Propagation path aberration parameters calculated from the measurements of random scattering through the aberration phantom agree with corresponding parameters calculated for the same aberrator and array position by using echoes from the point reflector. The results indicate the approach describes, in addition to time shifts, waveform amplitude and shape changes produced by propagation through distributed aberration under realistic conditions.

15. Parameter Estimation for Improving Association Indicators in Binary Logistic Regression

Directory of Open Access Journals (Sweden)

Mahdi Bashiri

2012-02-01

The aim of this paper is the estimation of binary logistic regression parameters by maximizing the log-likelihood function, with improved association indicators. The parameter estimation steps are explained, measures of association are introduced, and their calculation is analyzed. Moreover, new indicators based on membership degree levels are proposed. Association measures reflect the number of success responses relative to failures in a certain number of independent Bernoulli experiments. During parameter estimation the values of existing indicators are not sensitive to the parameter values, whereas the proposed indicators are sensitive to the estimated parameters throughout the iterative procedure. The innovation of this study is therefore a new association indicator for binary logistic regression that is more sensitive to the estimated parameters while the log-likelihood is maximized in the iterative procedure.
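For reference, the underlying iterative procedure can be sketched as maximum-likelihood estimation of a two-parameter binary logistic regression by gradient ascent on the log-likelihood (a generic sketch with invented data, not the paper's proposed indicators):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Overlapping classes (not linearly separable), so a finite MLE exists:
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [0,   0,   1,   0,   1,   1,   1]

b0, b1, lr = 0.0, 0.0, 0.1
for _ in range(5000):                 # gradient ascent on the log-likelihood
    g0 = sum(y - sigmoid(b0 + b1 * x) for x, y in zip(xs, ys))
    g1 = sum((y - sigmoid(b0 + b1 * x)) * x for x, y in zip(xs, ys))
    b0 += lr * g0
    b1 += lr * g1

loglik = sum(y * math.log(sigmoid(b0 + b1 * x))
             + (1 - y) * math.log(1.0 - sigmoid(b0 + b1 * x))
             for x, y in zip(xs, ys))
```

Association indicators such as those discussed in the paper would be tracked alongside `loglik` at each iteration of this loop.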

16. NEWBOX: A computer program for parameter estimation in diffusion problems

International Nuclear Information System (INIS)

Nestor, C.W. Jr.; Godbee, H.W.; Joy, D.S.

1989-01-01

In the analysis of experiments to determine amounts of material transferred from one medium to another (e.g., the escape of chemically hazardous and radioactive materials from solids), there are at least 3 important considerations. These are (1) is the transport amenable to treatment by established mass transport theory; (2) do methods exist to find estimates of the parameters which will give a best fit, in some sense, to the experimental data; and (3) what computational procedures are available for evaluating the theoretical expressions. The authors have made the assumption that established mass transport theory is an adequate model for the situations under study. Since the solutions of the diffusion equation are usually nonlinear in some parameters (diffusion coefficient, reaction rate constants, etc.), use of a method of parameter adjustment involving first partial derivatives can be complicated and prone to errors in the computation of the derivatives. In addition, the parameters must satisfy certain constraints; for example, the diffusion coefficient must remain positive. For these reasons, a variant of the constrained simplex method of M. J. Box has been used to estimate parameters. It is similar, but not identical, to the downhill simplex method of Nelder and Mead. In general, the authors calculate the fraction of material transferred as a function of time from expressions obtained by inversion of the Laplace transform of the fraction transferred, rather than by taking derivatives of a calculated concentration profile. With the above approaches to the 3 considerations listed at the outset, they developed a computer program, NEWBOX, usable on a personal computer, to calculate the fractional release of material from 4 different geometrical shapes (semi-infinite medium, finite slab, finite circular cylinder, and sphere), accounting for several different boundary conditions.
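A one-parameter sketch of the constrained-estimation idea (using the short-time plane-sheet release approximation from Crank's diffusion theory and a log-parameterization to keep the diffusion coefficient positive, rather than NEWBOX's constrained simplex and Laplace-transform machinery; all values invented):

```python
import math

def frac_release(D, t, L=1.0):
    """Short-time fractional release from a plane sheet of thickness L:
    F ~ (2/L) * sqrt(D*t/pi), valid while F << 1."""
    return (2.0 / L) * math.sqrt(D * t / math.pi)

times = [1.0, 4.0, 9.0, 16.0]
meas = [frac_release(1e-3, t) for t in times]    # synthetic data, D = 1e-3

# Enforce D > 0 by searching over log10(D) instead of D itself:
best_D = min((10.0 ** (e / 10.0) for e in range(-50, -10)),
             key=lambda D: sum((frac_release(D, t) - m) ** 2
                               for t, m in zip(times, meas)))
```

Reparameterizing (here via log10 D) is one simple way to honor a positivity constraint; Box's complex method instead keeps every trial point of a multi-dimensional simplex inside the feasible region directly.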

17. Estimation of gloss from rough surface parameters

Science.gov (United States)

Simonsen, Ingve; Larsen, Åge G.; Andreassen, Erik; Ommundsen, Espen; Nord-Varhaug, Katrin

2005-12-01

Gloss is a quantity used in the optical industry to quantify and categorize materials according to how well they scatter light specularly. With the aid of phase perturbation theory, we derive an approximate expression for this quantity for a one-dimensional randomly rough surface. It is demonstrated that gloss depends in an exponential way on two dimensionless quantities that are associated with the surface randomness: the root-mean-square roughness times the perpendicular momentum transfer for the specular direction, and a correlation function dependent factor times a lateral momentum variable associated with the collection angle. Rigorous Monte Carlo simulations are used to access the quality of this approximation, and good agreement is observed over large regions of parameter space.

18. Control and Estimation of Distributed Parameter Systems

CERN Document Server

Kappel, F; Kunisch, K

1998-01-01

Consisting of 23 refereed contributions, this volume offers a broad and diverse view of current research in control and estimation of partial differential equations. Topics addressed include, but are not limited to - control and stability of hyperbolic systems related to elasticity, linear and nonlinear; - control and identification of nonlinear parabolic systems; - exact and approximate controllability, and observability; - Pontryagin's maximum principle and dynamic programming in PDE; and - numerics pertinent to optimal and suboptimal control problems. This volume is primarily geared toward control theorists seeking information on the latest developments in their area of expertise. It may also serve as a stimulating reader to any researcher who wants to gain an impression of activities at the forefront of a vigorously expanding area in applied mathematics.

19. Nonlinear adaptive control system design with asymptotically stable parameter estimation error

Science.gov (United States)

Mishkov, Rumen; Darmonski, Stanislav

2018-01-01

The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and the capability to directly control the transient response time of the estimates. The method proposed modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation. It relies on the parametric identifiability property introduced for the system. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied to nonlinear adaptive speed-tracking vector control of a three-phase induction motor.
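
For contrast, the textbook gradient estimator that such work improves upon, whose convergence does require persistent excitation, can be sketched in a few lines. The linear-in-parameters plant, gain, and excitation signal below are illustrative assumptions, not the paper's design:

```python
import math
import random

# Plant with the unknown parameter entering linearly: y_k = theta * phi_k
theta_true, gamma = 2.5, 0.1       # true parameter and adaptation gain
random.seed(5)
phi = [math.sin(0.3 * k) + 0.1 * random.random() for k in range(300)]
ys = [theta_true * p for p in phi]

theta_hat = 0.0
for p, y in zip(phi, ys):
    e = theta_hat * p - y          # prediction error
    theta_hat -= gamma * p * e     # gradient descent on e**2 / 2
```

The estimate converges here only because the regressor phi is persistently exciting; if phi decays to zero, the update stalls at whatever value theta_hat has reached, which is exactly the limitation the data accumulation concept targets.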

20. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

Science.gov (United States)

Raykov, Tenko

2005-01-01

A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between the average of resampled conventional noncentrality parameter estimates and their sample counterpart. The…

1. Maximum-likelihood estimation of the hyperbolic parameters from grouped observations

DEFF Research Database (Denmark)

Jensen, Jens Ledet

1988-01-01

a least-squares problem. The second procedure Hypesti first approaches the maximum-likelihood estimate by iterating in the profile-log likelihood function for the scale parameter. Close to the maximum of the likelihood function, the estimation is brought to an end by iteration, using all four parameters...

2. Estimating 3D Object Parameters from 2D Grey-Level Images

NARCIS (Netherlands)

Houkes, Z.

2000-01-01

This thesis describes a general framework for parameter estimation, which is suitable for computer vision applications. The approach described combines 3D modelling, animation and estimation tools to determine parameters of objects in a scene from 2D grey-level images. The animation tool predicts

3. Parameter Estimates in Differential Equation Models for Chemical Kinetics

Science.gov (United States)

Winkel, Brian

2011-01-01

We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…

4. Estimation of ground water hydraulic parameters

Energy Technology Data Exchange (ETDEWEB)

Hvilshoej, Soeren

1998-11-01

The main objective was to assess field methods to determine ground water hydraulic parameters and to develop and apply new analysis methods to selected field techniques. A field site in Vejen, Denmark, which previously has been intensively investigated on the basis of a large number of mini slug tests and tracer tests, was chosen for experimental application and evaluation. Particular interest was in analysing partially penetrating pumping tests and a recently proposed single-well dipole test. Three wells were constructed in which partially penetrating pumping tests and multi-level single-well dipole tests were performed. In addition, multi-level slug tests, flow meter tests, gamma-logs, and geologic characterisation of soil samples were carried out. In addition to the three Vejen analyses, data from previously published partially penetrating pumping tests were analysed assuming homogeneous anisotropic aquifer conditions. In the present study methods were developed to analyse partially penetrating pumping tests and multi-level single-well dipole tests based on an inverse numerical model. The obtained horizontal hydraulic conductivities from the partially penetrating pumping tests were in accordance with measurements obtained from multi-level slug tests and mini slug tests. Accordance was also achieved between the anisotropy ratios determined from partially penetrating pumping tests and multi-level single-well dipole tests. It was demonstrated that the partially penetrating pumping test analysed by an inverse numerical model is a very valuable technique that may provide hydraulic information on the storage terms and the vertical distribution of the horizontal and vertical hydraulic conductivity under both confined and unconfined aquifer conditions. (EG) 138 refs.

5. Bayesian Parameter Estimation for Heavy-Duty Vehicles

Energy Technology Data Exchange (ETDEWEB)

Miller, Eric; Konan, Arnaud; Duran, Adam

2017-03-28

Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history will give a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
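
The acceptance scheme described above is a Metropolis random walk. A minimal sketch follows, assuming a simplified road-load equation and synthetic noise-free drive data; the vehicle values, noise level, and proposal step sizes are hypothetical, not the authors':

```python
import math
import random

RHO, G = 1.2, 9.81  # assumed air density (kg/m^3) and gravity (m/s^2)

def road_load(v, a, m, cda, crr):
    # Simplified road-load equation: inertia + aerodynamic drag + rolling
    return m * a + 0.5 * RHO * cda * v ** 2 + m * G * crr

# Synthetic drive data with hypothetical "true" values m=2000, CdA=6, Crr=0.008
speeds = [float(v) for v in range(5, 30, 2)]
accels = [0.1 * (i % 5) for i in range(len(speeds))]
true = (2000.0, 6.0, 0.008)
loads = [road_load(v, a, *true) for v, a in zip(speeds, accels)]

def log_like(params):
    sigma = 50.0  # assumed measurement noise level (N)
    return -sum((road_load(v, a, *params) - y) ** 2
                for v, a, y in zip(speeds, accels, loads)) / (2.0 * sigma ** 2)

def metropolis(n=20000, seed=2):
    rng = random.Random(seed)
    cur = [1800.0, 5.0, 0.01]              # initial guess (m, CdA, Crr)
    cur_ll, chain = log_like(cur), []
    steps = [20.0, 0.05, 0.0005]           # random-walk proposal widths
    for _ in range(n):
        prop = [max(1e-6, c + rng.gauss(0.0, s)) for c, s in zip(cur, steps)]
        ll = log_like(prop)
        if math.log(rng.random()) < ll - cur_ll:   # probability-ratio acceptance
            cur, cur_ll = prop, ll
        chain.append(list(cur))
    return chain

chain = metropolis()
m_est = sum(c[0] for c in chain[10000:]) / 10000.0
cda_est = sum(c[1] for c in chain[10000:]) / 10000.0
```

The retained chain (after burn-in) approximates the joint posterior, so its spread, not just its mean, carries the quality-of-estimate information the abstract emphasizes.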

6. Automated Modal Parameter Estimation of Civil Engineering Structures

DEFF Research Database (Denmark)

Andersen, Palle; Brincker, Rune; Goursat, Maurice

In this paper the problem of automatic modal parameter extraction for ambient-excited civil engineering structures is considered. Two different approaches for obtaining the modal parameters automatically are presented: The Frequency Domain Decomposition (FDD) technique and a correlation...

7. Parameter and State Estimator for State Space Models

Directory of Open Access Journals (Sweden)

Ruifeng Ding

2014-01-01

Full Text Available This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation; eliminating the state variables yields an equation containing only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
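
The substitution-and-least-squares idea can be illustrated on a hypothetical first-order scalar case (not the paper's canonical form): eliminating the state leaves y_k = a y_{k-1} + b u_{k-1}, whose parameters follow from the normal equations:

```python
import random

# Simulate a first-order system x_{k+1} = a x_k + b u_k with output y_k = x_k
a_true, b_true = 0.8, 0.5
random.seed(3)
u = [random.uniform(-1.0, 1.0) for _ in range(200)]
x = [0.0]
for k in range(199):
    x.append(a_true * x[k] + b_true * u[k])
y = x  # noiseless output, so the state is observed directly

# Eliminating the state gives y_k = a y_{k-1} + b u_{k-1};
# solve the 2x2 normal equations of the least squares problem.
s_yy = sum(y[k - 1] ** 2 for k in range(1, 200))
s_uu = sum(u[k - 1] ** 2 for k in range(1, 200))
s_yu = sum(y[k - 1] * u[k - 1] for k in range(1, 200))
r_y = sum(y[k] * y[k - 1] for k in range(1, 200))
r_u = sum(y[k] * u[k - 1] for k in range(1, 200))
det = s_yy * s_uu - s_yu ** 2
a_hat = (r_y * s_uu - r_u * s_yu) / det
b_hat = (r_u * s_yy - r_y * s_yu) / det
```

With the parameters in hand, the states of the original model can be recomputed from the input-output data, mirroring the second half of the proposed estimator.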

8. Parameter estimation and prediction of nonlinear biological systems: some examples

NARCIS (Netherlands)

Doeswijk, T.G.; Keesman, K.J.

2006-01-01

Rearranging and reparameterizing a discrete-time nonlinear model with polynomial quotient structure in input, output and parameters (xk = f(Z, p)) leads to a model linear in its (new) parameters. As a result, the parameter estimation problem becomes a so-called errors-in-variables problem for which

9. Recursive Parameter Identification for Estimating and Displaying Maneuvering Vessel Path

National Research Council Canada - National Science Library

Pullard, Stephen

2003-01-01

...). The extended least squares (ELS) parameter identification approach allows the system to be installed on most platforms without prior knowledge of system dynamics provided vessel states are available...

10. A Novel Nonlinear Parameter Estimation Method of Soft Tissues

Directory of Open Access Journals (Sweden)

Qianqian Tong

2017-12-01

Full Text Available The elastic parameters of soft tissues are important for medical diagnosis and virtual surgery simulation. In this study, we propose a novel nonlinear parameter estimation method for soft tissues. Firstly, an in-house data acquisition platform was used to obtain external forces and their corresponding deformation values. To provide highly precise data for estimating nonlinear parameters, the measured forces were corrected using the constructed weighted combination forecasting model based on a support vector machine (WCFM_SVM. Secondly, a tetrahedral finite element parameter estimation model was established to describe the physical characteristics of soft tissues, using the substitution parameters of Young’s modulus and Poisson’s ratio to avoid solving complicated nonlinear problems. To improve the robustness of our model and avoid poor local minima, the initial parameters solved by a linear finite element model were introduced into the parameter estimation model. Finally, a self-adapting Levenberg–Marquardt (LM algorithm was presented, which is capable of adaptively adjusting iterative parameters to solve the established parameter estimation model. The maximum absolute error of our WCFM_SVM model was less than 0.03 Newton, resulting in more accurate forces in comparison with other correction models tested. The maximum absolute error between the calculated and measured nodal displacements was less than 1.5 mm, demonstrating that our nonlinear parameters are precise.
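
The damping adaptation at the core of a Levenberg-Marquardt iteration can be sketched as follows, here fitting an exponential force-displacement law often assumed for soft tissue. The model form, synthetic data, and damping schedule are illustrative assumptions, not the paper's finite element formulation:

```python
import math

def force(d, a, b):
    # Exponential force-displacement law often assumed for soft tissue
    # (hypothetical model, not the paper's FEM parameter estimation model)
    return a * (math.exp(b * d) - 1.0)

disp = [0.1 * i for i in range(1, 11)]
data = [force(d, 2.0, 1.5) for d in disp]   # synthetic data, a=2.0, b=1.5

def residuals(p):
    return [force(d, p[0], p[1]) - y for d, y in zip(disp, data)]

def jacobian(p):
    a, b = p
    return [[math.exp(b * d) - 1.0, a * d * math.exp(b * d)] for d in disp]

def levenberg_marquardt(p, iters=100):
    lam = 1e-3                               # damping factor, adapted each step
    for _ in range(iters):
        r, J = residuals(p), jacobian(p)
        # Damped normal equations (J^T J + lam*I) dp = -J^T r, solved as 2x2
        jtj = [[sum(row[m] * row[n] for row in J) for n in (0, 1)] for m in (0, 1)]
        jtr = [sum(row[m] * ri for row, ri in zip(J, r)) for m in (0, 1)]
        A = [[jtj[0][0] + lam, jtj[0][1]], [jtj[1][0], jtj[1][1] + lam]]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        dp = [(-jtr[0] * A[1][1] + jtr[1] * A[0][1]) / det,
              (jtr[0] * A[1][0] - jtr[1] * A[0][0]) / det]
        trial = [p[0] + dp[0], p[1] + dp[1]]
        if sum(x * x for x in residuals(trial)) < sum(x * x for x in r):
            p, lam = trial, lam * 0.3        # success: trust the Gauss-Newton model
        else:
            lam *= 10.0                      # failure: fall back toward gradient descent
    return p

a_hat, b_hat = levenberg_marquardt([1.0, 1.0])
```

Shrinking lam after a successful step and inflating it after a rejected one is the self-adapting behavior the abstract refers to: large lam gives cautious gradient-like steps far from the minimum, small lam gives fast Gauss-Newton steps near it.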

11. Approaches to estimating decommissioning costs

International Nuclear Information System (INIS)

Smith, R.I.

1990-07-01

The chronological development of methodology for estimating the cost of nuclear reactor power station decommissioning is traced from the mid-1970s through 1990. Three techniques for developing decommissioning cost estimates are described. The two viable techniques are compared by examining estimates developed for the same nuclear power station using both methods. The comparison shows that the differences between the estimates are due largely to differing assumptions regarding the size of the utility and operating contractor overhead staffs. It is concluded that the two methods provide bounding estimates on a range of manageable costs, and provide reasonable bases for the utility rate adjustments necessary to pay for future decommissioning costs. 6 refs

12. Automated modal parameter estimation using correlation analysis and bootstrap sampling

Science.gov (United States)

Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.

2018-02-01

The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences by the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to

13. Estimating negative binomial parameters from occurrence data with detection times.

Science.gov (United States)

Hwang, Wen-Han; Huggins, Richard; Stoklosa, Jakub

2016-11-01

The negative binomial distribution is a common model for the analysis of count data in biology and ecology. In many applications, we may not observe the complete frequency count in a quadrat but only that a species occurred in the quadrat. If only occurrence data are available then the two parameters of the negative binomial distribution, the aggregation index and the mean, are not identifiable. This can be overcome by data augmentation or through modeling the dependence between quadrat occupancies. Here, we propose to record the (first) detection time while collecting occurrence data in a quadrat. We show that, under what we call proportionate sampling, where the time to survey a region is proportional to the area of the region, both negative binomial parameters are estimable. When the mean parameter is larger than two, our proposed approach is more efficient than the data augmentation method developed by Solow and Smith (Am. Nat. 176, 96-98), and in general is cheaper to conduct. We also investigate the effect of misidentification when collecting negative binomially distributed data, and conclude that, in general, the effect can be simply adjusted for provided that the mean and variance of the misidentification probabilities are known. The results are demonstrated in a simulation study and illustrated in several real examples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

14. A quasi-sequential parameter estimation for nonlinear dynamic systems based on multiple data profiles

International Nuclear Information System (INIS)

Zhao, Chao; Vu, Quoc Dong; Li, Pu

2013-01-01

A three-stage computation framework for solving parameter estimation problems for dynamic systems with multiple data profiles is developed. The dynamic parameter estimation problem is transformed into a nonlinear programming (NLP) problem by using collocation on finite elements. The model parameters to be estimated are treated in the upper stage by solving an NLP problem. The middle stage consists of multiple NLP problems nested in the upper stage, representing the data reconciliation step for each data profile. We use the quasi-sequential dynamic optimization approach to solve these problems. In the lower stage, the state variables and their gradients are evaluated through integrating the model equations. Since the second-order derivatives are not required in the computation framework, the proposed method is efficient for solving nonlinear dynamic parameter estimation problems. The computational results obtained on a parameter estimation problem for two CSTR models demonstrate the effectiveness of the proposed approach

15. A quasi-sequential parameter estimation for nonlinear dynamic systems based on multiple data profiles

Energy Technology Data Exchange (ETDEWEB)

Zhao, Chao [FuZhou University, FuZhou (China); Vu, Quoc Dong; Li, Pu [Ilmenau University of Technology, Ilmenau (Germany)

2013-02-15

A three-stage computation framework for solving parameter estimation problems for dynamic systems with multiple data profiles is developed. The dynamic parameter estimation problem is transformed into a nonlinear programming (NLP) problem by using collocation on finite elements. The model parameters to be estimated are treated in the upper stage by solving an NLP problem. The middle stage consists of multiple NLP problems nested in the upper stage, representing the data reconciliation step for each data profile. We use the quasi-sequential dynamic optimization approach to solve these problems. In the lower stage, the state variables and their gradients are evaluated through integrating the model equations. Since the second-order derivatives are not required in the computation framework, the proposed method is efficient for solving nonlinear dynamic parameter estimation problems. The computational results obtained on a parameter estimation problem for two CSTR models demonstrate the effectiveness of the proposed approach.

16. Robust Parameter and Signal Estimation in Induction Motors

DEFF Research Database (Denmark)

Børsting, H.

This thesis deals with theories and methods for robust parameter and signal estimation in induction motors. The project originates in industrial interests concerning sensor-less control of electrical drives. During the work, some general problems concerning estimation of signals and parameters in nonlinear systems have been exposed. The main objectives of this project are: - analysis and application of theories and methods for robust estimation of parameters in a model structure, obtained from knowledge of the physics of the induction motor. - analysis and application of theories and methods for robust estimation of the rotor speed and driving torque of the induction motor based only on measurements of stator voltages and currents. Only continuous-time models have been used, which means that physically related signals and parameters are estimated directly and not indirectly by some discrete...

17. Modeling and Parameter Estimation of a Small Wind Generation System

Directory of Open Access Journals (Sweden)

Carlos A. Ramírez Gómez

2013-11-01

Full Text Available The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were registered in a weather station located in the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed in PSIM software. From that simulation, variables were registered to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, but the estimated model offers higher flexibility than the model programmed in PSIM software.

18. Learn-as-you-go acceleration of cosmological parameter estimates

International Nuclear Information System (INIS)

Aslanyan, Grigor; Easther, Richard; Price, Layne C.

2015-01-01

Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly

19. MANOVA, LDA, and FA criteria in clusters parameter estimation

Directory of Open Access Journals (Sweden)

Stan Lipovetsky

2015-12-01

Full Text Available Multivariate analysis of variance (MANOVA) and linear discriminant analysis (LDA) apply such well-known criteria as the Wilks' lambda, Lawley–Hotelling trace, and Pillai's trace test for checking quality of the solutions. The current paper suggests using these criteria for building objectives for finding cluster parameters, because optimizing such objectives corresponds to the best distinguishing between the clusters. Relation to Joreskog's classification for factor analysis (FA) techniques is also considered. The problem can be reduced to the multinomial parameterization, and a solution can be found via a nonlinear optimization procedure that yields estimates for the cluster centers and sizes. This approach to clustering works with data compressed into a covariance matrix, so it can be especially useful for big data.

20. A simulation of water pollution model parameter estimation

Science.gov (United States)

Kibler, J. F.

1976-01-01

A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
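
A reduced version of this batch estimation can be sketched with a 1-D instantaneous-release advection-diffusion solution standing in for the 2-D shear-diffusion model; the sensor geometry, noise level, and parameter values are all hypothetical:

```python
import math
import random

def conc(x, t, D, v, M=1.0):
    # 1-D instantaneous-release advection-diffusion solution, standing in
    # for the two-dimensional shear-diffusion model of the abstract
    return M / math.sqrt(4.0 * math.pi * D * t) * math.exp(-(x - v * t) ** 2 / (4.0 * D * t))

# Simulated remote-sensed data: model output plus Gaussian sensor noise
random.seed(4)
xs = [0.5 * i for i in range(-10, 21)]
t_obs, D_true, v_true = 2.0, 0.3, 1.0
data = [conc(x, t_obs, D_true, v_true) + random.gauss(0.0, 0.005) for x in xs]

def sse(D, v):
    return sum((conc(x, t_obs, D, v) - y) ** 2 for x, y in zip(xs, data))

# Batch least squares via a direct grid search over (D, v)
best = min(((sse(D / 100.0, v / 100.0), D / 100.0, v / 100.0)
            for D in range(10, 100, 2) for v in range(50, 151, 2)),
           key=lambda r: r[0])
_, D_hat, v_hat = best
```

Repeating the fit while varying the noise level, sensor spacing, or number of readings reproduces in miniature the accuracy study the abstract describes: the spread of the recovered (D, v) values reveals what resolution and array size the sensing system needs.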

1. How to fool cosmic microwave background parameter estimation

International Nuclear Information System (INIS)

Kinney, William H.

2001-01-01

With the release of the data from the Boomerang and MAXIMA-1 balloon flights, estimates of cosmological parameters based on the cosmic microwave background (CMB) have reached unprecedented precision. In this paper I show that it is possible for these estimates to be substantially biased by features in the primordial density power spectrum. I construct primordial power spectra which mimic to within cosmic variance errors the effect of changing parameters such as the baryon density and neutrino mass, meaning that even an ideal measurement would be unable to resolve the degeneracy. Complementary measurements are necessary to resolve this ambiguity in parameter estimation efforts based on CMB temperature fluctuations alone

2. State Estimation-based Transmission line parameter identification

Directory of Open Access Journals (Sweden)

Fredy Andrés Olarte Dussán

2010-01-01

Full Text Available This article presents two state-estimation-based algorithms for identifying transmission line parameters. The identification technique used simultaneous state-parameter estimation on an artificial power system composed of several copies of the same transmission line, using measurements at different points in time. The first algorithm used active and reactive power measurements at both ends of the line. The second method used synchronised phasor voltage and current measurements at both ends. The algorithms were tested in simulated conditions on the 30-node IEEE test system. All line parameters for this system were estimated with errors below 1%.

3. Approximate effect of parameter pseudonoise intensity on rate of convergence for EKF parameter estimators. [Extended Kalman Filter

Science.gov (United States)

Hill, Bryon K.; Walker, Bruce K.

1991-01-01

When using parameter estimation methods based on extended Kalman filter (EKF) theory, it is common practice to assume that the unknown parameter values behave like a random process, such as a random walk, in order to guarantee their identifiability by the filter. The present work is the result of an ongoing effort to quantitatively describe the effect that the assumption of a fictitious noise (called pseudonoise) driving the unknown parameter values has on the parameter estimate convergence rate in filter-based parameter estimators. The initial approach is to examine a first-order system described by one state variable with one parameter to be estimated. The intent is to derive analytical results for this simple system that might offer insight into the effect of the pseudonoise assumption for more complex systems. Such results would make it possible to predict the estimator error convergence behavior as a function of the assumed pseudonoise intensity, and this leads to the natural application of the results to the design of filter-based parameter estimators. The results obtained show that the analytical description of the convergence behavior is very difficult.
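
The pseudonoise assumption itself is easy to demonstrate: augment the state with the parameter, drive the parameter with fictitious noise of intensity q_theta, and run an EKF. The first-order system and all numerical values below are illustrative, not taken from the paper's analysis:

```python
import random

# Simulate x_{k+1} = theta*x_k + 0.5 + w_k, measured as y_k = x_k + v_k
theta_true = 0.9
random.seed(7)
x, ys = 1.0, []
for _ in range(400):
    x = theta_true * x + 0.5 + random.gauss(0.0, 0.1)
    ys.append(x + random.gauss(0.0, 0.1))

def ekf(ys, q_theta):
    # Augmented-state EKF: s = [x, theta], with theta modeled as a random
    # walk driven by pseudonoise of intensity q_theta
    s = [0.0, 0.5]                         # initial state and parameter guess
    P = [[1.0, 0.0], [0.0, 1.0]]
    Q, R = [[0.01, 0.0], [0.0, q_theta]], 0.01
    for y in ys:
        # predict: f(s) = [theta*x + 0.5, theta], with Jacobian F
        F = [[s[1], s[0]], [0.0, 1.0]]
        s = [s[1] * s[0] + 0.5, s[1]]
        FP = [[sum(F[i][k] * P[k][j] for k in (0, 1)) for j in (0, 1)] for i in (0, 1)]
        P = [[sum(FP[i][k] * F[j][k] for k in (0, 1)) + Q[i][j] for j in (0, 1)]
             for i in (0, 1)]
        # update with the scalar measurement y = x + v (H = [1, 0])
        S = P[0][0] + R
        K = [P[0][0] / S, P[1][0] / S]
        innov = y - s[0]
        s = [s[0] + K[0] * innov, s[1] + K[1] * innov]
        P = [[P[i][j] - K[i] * P[0][j] for j in (0, 1)] for i in (0, 1)]
    return s[1]

theta_hat = ekf(ys, q_theta=1e-6)
```

Raising q_theta keeps the filter gain on theta from collapsing, so the estimator can track a drifting parameter but converges more noisily; shrinking it does the opposite. That trade-off, as a function of the assumed pseudonoise intensity, is precisely what the abstract sets out to quantify.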

4. Kinetic parameter estimation from attenuated SPECT projection measurements

International Nuclear Information System (INIS)

Reutter, B.W.; Gullberg, G.T.

1998-01-01

Conventional analysis of dynamically acquired nuclear medicine data involves fitting kinetic models to time-activity curves generated from regions of interest defined on a temporal sequence of reconstructed images. However, images reconstructed from the inconsistent projections of a time-varying distribution of radiopharmaceutical acquired by a rotating SPECT system can contain artifacts that lead to biases in the estimated kinetic parameters. To overcome this problem the authors investigated the estimation of kinetic parameters directly from projection data by modeling the data acquisition process. To accomplish this it was necessary to parametrize the spatial and temporal distribution of the radiopharmaceutical within the SPECT field of view. In a simulated transverse slice, kinetic parameters were estimated for simple one compartment models for three myocardial regions of interest, as well as for the liver. Myocardial uptake and washout parameters estimated by conventional analysis of noiseless simulated data had biases ranging between 1--63%. Parameters estimated directly from the noiseless projection data were unbiased as expected, since the model used for fitting was faithful to the simulation. Predicted uncertainties (standard deviations) of the parameters obtained for 500,000 detected events ranged between 2--31% for the myocardial uptake parameters and 2--23% for the myocardial washout parameters

5. Models for estimating photosynthesis parameters from in situ production profiles

Science.gov (United States)

Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

2017-12-01

The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of
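
Parameter recovery from a photosynthesis-irradiance function can be sketched with one common saturating form, P(I) = P_max(1 - exp(-alpha I / P_max)); the function choice, light levels, and parameter values are illustrative, not the paper's full suite of models or the Aloha data:

```python
import math

def pi_curve(I, alpha, pmax):
    # One common saturating photosynthesis-irradiance function; the paper
    # compares a suite of such functions, and this choice is illustrative
    return pmax * (1.0 - math.exp(-alpha * I / pmax))

irradiance = [25.0 * i for i in range(1, 13)]        # illustrative light levels
alpha_true, pmax_true = 0.05, 6.0                    # initial slope, assimilation number
prod = [pi_curve(I, alpha_true, pmax_true) for I in irradiance]

def sse(alpha, pmax):
    return sum((pi_curve(I, alpha, pmax) - p) ** 2 for I, p in zip(irradiance, prod))

# Recover both photosynthesis parameters by direct search over a grid
best = min(((sse(a / 1000.0, p / 10.0), a / 1000.0, p / 10.0)
            for a in range(10, 101) for p in range(30, 101)),
           key=lambda r: r[0])
_, alpha_hat, pmax_hat = best
```

Swapping in a different saturating function while keeping the same data reproduces the parameter-exchange experiment of the abstract: each functional form yields its own best-fit initial slope and assimilation number, and mixing values across forms introduces bias.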

6. AUTOMATIC ESTIMATION OF SIZE PARAMETERS USING VERIFIED COMPUTERIZED STEREOANALYSIS

Directory of Open Access Journals (Sweden)

Peter R Mouton

2011-05-01

Full Text Available State-of-the-art computerized stereology systems combine high-resolution video microscopy and hardware-software integration with stereological methods to assist users in quantifying multidimensional parameters of importance to biomedical research, including volume, surface area, length, number, their variation and spatial distribution. The requirement for constant interaction between a trained, non-expert user and the targeted features of interest currently limits the throughput efficiency of these systems. To address this issue we developed a novel approach for automatic stereological analysis of 2-D images, Verified Computerized Stereoanalysis (VCS). The VCS approach minimizes the need for user interaction with high-contrast [high signal-to-noise ratio (S:N)] biological objects of interest. Performance testing of the VCS approach confirmed dramatic increases in the efficiency of total object volume (size) estimation, without a loss of accuracy or precision compared to conventional computerized stereology. The broad application of high-efficiency VCS to high-contrast biological objects on tissue sections could reduce labor costs, enhance hypothesis testing, and accelerate the progress of biomedical research focused on improvements in health and the management of disease.

7. Simultaneous Parameters Identifiability and Estimation of an E. coli Metabolic Network Model

Directory of Open Access Journals (Sweden)

Kese Pontes Freitas Alberton

2015-01-01

Full Text Available This work proposes a procedure for simultaneous parameter identifiability and estimation in metabolic networks, in order to overcome difficulties associated with lack of experimental data and the large number of parameters, a common scenario in the modeling of such systems. As a case study, the complex real problem of parameter identifiability in the Escherichia coli K-12 W3110 dynamic model was investigated; the model is composed of 18 ordinary differential equations and 35 kinetic rates, containing 125 parameters. With the procedure, the model fit was improved for most of the measured metabolites, with 58 parameters estimated, including 5 unknown initial conditions. The results indicate that the simultaneous parameter identifiability and estimation approach in metabolic networks is appealing, since a good model fit to most of the measured metabolites was possible even when important measurements of intracellular metabolites and good initial estimates of the parameters are not available.

8. REML estimates of genetic parameters of sexual dimorphism for ...

Full and half sibs were distinguished, in contrast to usual isofemale studies in which animals ... studies. Thus, the aim of this study was to estimate genetic parameters of sexual dimorphism in isofemale lines using ..... Muscovy ducks. Genet.

9. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

DEFF Research Database (Denmark)

Chon, K H; Hoyer, D; Armoundas, A A

1999-01-01

In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic...... part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction...... error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate...

10. Joint state and parameter estimation for a class of cascade systems: Application to a hemodynamic model

KAUST Repository

2014-06-01

In this paper, we address a special case of state and parameter estimation, where the system can be put in a cascade form allowing the state components and the set of unknown parameters to be estimated separately. Inspired by the nonlinear Balloon hemodynamic model for the functional Magnetic Resonance Imaging problem, we propose a hierarchical approach. The system is divided into two subsystems in cascade. The state and input are first estimated from a noisy measured signal using an adaptive observer. The obtained input is then used to estimate the parameters of a linear system using the modulating functions method. Some numerical results are presented to illustrate the efficiency of the proposed method.

11. Kinetic parameter estimation from SPECT cone-beam projection measurements

International Nuclear Information System (INIS)

Huesman, Ronald H.; Reutter, Bryan W.; Zeng, G. Larry; Gullberg, Grant T.

1998-01-01

Kinetic parameters are commonly estimated from dynamically acquired nuclear medicine data by first reconstructing a dynamic sequence of images and subsequently fitting the parameters to time-activity curves generated from regions of interest overlaid upon the image sequence. Biased estimates can result from images reconstructed using inconsistent projections of a time-varying distribution of radiopharmaceutical acquired by a rotating SPECT system. If the SPECT data are acquired using cone-beam collimators wherein the gantry rotates so that the focal point of the collimators always remains in a plane, additional biases can arise from images reconstructed using insufficient, as well as truncated, projection samples. To overcome these problems we have investigated the estimation of kinetic parameters directly from SPECT cone-beam projection data by modelling the data acquisition process. To accomplish this it was necessary to parametrize the spatial and temporal distribution of the radiopharmaceutical within the SPECT field of view. In a simulated chest image volume, kinetic parameters were estimated for simple one-compartment models for four myocardial regions of interest. Myocardial uptake and washout parameters estimated by conventional analysis of noiseless simulated cone-beam data had biases ranging between 3-26% and 0-28%, respectively. Parameters estimated directly from the noiseless projection data were unbiased as expected, since the model used for fitting was faithful to the simulation. Statistical uncertainties of parameter estimates for 10 000 000 events ranged between 0.2-9% for the uptake parameters and between 0.3-6% for the washout parameters. (author)

12. Kalman filter data assimilation: targeting observations and parameter estimation.

Science.gov (United States)

Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

2014-06-01

This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.

13. Kalman filter data assimilation: Targeting observations and parameter estimation

International Nuclear Information System (INIS)

Bellsky, Thomas; Kostelich, Eric J.; Mahalov, Alex

2014-01-01

This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.

14. Kalman filter estimation of RLC parameters for UMP transmission line

Directory of Open Access Journals (Sweden)

Mohd Amin Siti Nur Aishah

2018-01-01

Full Text Available This paper presents the development of a Kalman filter for estimating the resistance (R), inductance (L), and capacitance (C) values of the Universiti Malaysia Pahang (UMP) short transmission line. To overcome weaknesses of the existing system, such as power losses in the transmission line, the Kalman filter can be a better solution for estimating the parameters. The aim of this paper is to estimate the RLC values using a Kalman filter, which can ultimately increase the system efficiency at UMP. In this research, a MATLAB Simulink model is developed to analyse the UMP short transmission line under different noise conditions, so as to represent certain unknown parameters which are difficult to predict. The data is then used for comparison between calculated and estimated values. The results illustrate that the Kalman filter estimates the RLC parameters accurately with small error. A comparison of accuracy between the Kalman filter and the least squares method is also presented to evaluate their performance.
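
A minimal sketch of the idea, under the simplifying assumption that Ohm's law is the measurement model: a scalar Kalman filter treats the unknown resistance as a (nearly) constant state observed through noisy voltage and current readings. The single-parameter setup and all numbers are hypothetical, not the UMP line data.

```python
import numpy as np

# Unknown resistance R as a constant state, observed through Ohm's law:
# v_k = i_k * R + measurement noise.
rng = np.random.default_rng(1)
R_true = 12.5                                  # ohms (hypothetical)
i = rng.uniform(0.5, 2.0, 200)                 # measured currents (A)
v = R_true * i + rng.normal(0, 0.5, i.size)    # noisy voltage readings (V)

x, P = 0.0, 1e3          # initial estimate of R and its variance
Q, Rn = 1e-8, 0.5**2     # process noise (R ~ constant) and measurement noise
for ik, vk in zip(i, v):
    P += Q                            # predict: state is (nearly) constant
    H = ik                            # time-varying measurement matrix
    K = P * H / (H * P * H + Rn)      # Kalman gain
    x += K * (vk - H * x)             # correct with the innovation
    P *= (1.0 - K * H)                # update error variance
R_hat = x
```

Because the measurement matrix is the current itself, the recursion converges to the same estimate a weighted least-squares fit of v on i would give, which is the comparison the paper draws.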

15. State and parameter estimation in biotechnical batch reactors

NARCIS (Netherlands)

Keesman, K.J.

2000-01-01

In this paper the problem of state and parameter estimation in biotechnical batch reactors is considered. Models describing the biotechnical process behaviour are usually nonlinear with time-varying parameters. Hence, the resulting large dimensions of the augmented state vector, roughly > 7, in

16. Smoothing of, and parameter estimation from, noisy biophysical recordings.

Directory of Open Access Journals (Sweden)

Quentin J M Huys

2009-05-01

Full Text Available Biophysically detailed models of single cells are difficult to fit to real data. Recent advances in imaging techniques allow simultaneous access to various intracellular variables, and these data can be used to significantly facilitate the modelling task. These data, however, are noisy, and current approaches to building biophysically detailed models are not designed to deal with this. We extend previous techniques to take the noisy nature of the measurements into account. Sequential Monte Carlo ("particle filtering") methods, in combination with a detailed biophysical description of a cell, are used for principled, model-based smoothing of noisy recording data. We also provide an alternative formulation of smoothing where the neural nonlinearities are estimated in a non-parametric manner. Biophysically important parameters of detailed models (such as channel densities, intercompartmental conductances, input resistances, and observation noise) are inferred automatically from noisy data via expectation-maximization. Overall, we find that model-based smoothing is a powerful, robust technique for smoothing of noisy biophysical data and for inference of biophysical parameters in the face of recording noise.

17. Bayesian parameter estimation for stochastic models of biological cell migration

Science.gov (United States)

Dieterich, Peter; Preuss, Roland

2013-08-01

Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors or the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically stochastic models are applied where parameters are extracted by fitting models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure directly relying on the positions of the cell's trajectory and the covariance matrix of the positions. It is shown that the covariance is identical with the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data showing a reliable parameter estimation from single cell paths.
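
The increment-based idea can be sketched for the simplest of the models mentioned, Brownian motion with drift, where the drift and diffusion coefficient follow directly from the position increments of a single simulated path rather than from the mean square displacement; all values below are synthetic.

```python
import numpy as np

# Brownian motion with drift: x_{k+1} = x_k + v*dt + sqrt(2*D*dt)*noise
rng = np.random.default_rng(2)
v_true, D_true, dt, n = 0.3, 0.5, 0.1, 5000
steps = v_true * dt + np.sqrt(2.0 * D_true * dt) * rng.normal(size=n)
x = np.concatenate([[0.0], np.cumsum(steps)])   # simulated cell path

# Estimate drift and diffusion directly from the position increments
dx = np.diff(x)
v_hat = dx.mean() / dt                  # drift estimate
D_hat = dx.var(ddof=1) / (2.0 * dt)     # diffusion-coefficient estimate
```

For fractional Brownian motion the increments are correlated, which is where the covariance matrix of positions (rather than independent increments) becomes essential, as the abstract notes.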

18. On the Nature of SEM Estimates of ARMA Parameters.

Science.gov (United States)

Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

2002-01-01

Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…

19. Estimation of genetic parameters for body weights of Kurdish sheep ...

African Journals Online (AJOL)

Genetic parameters and (co)variance components were estimated by restricted maximum likelihood (REML) procedure, using animal models of kind 1, 2, 3, 4, 5 and 6, for body weight in birth, three, six, nine and 12 months of age in a Kurdish sheep flock. Direct and maternal breeding values were estimated using the best ...

20. A Note On the Estimation of the Poisson Parameter

Directory of Open Access Journals (Sweden)

S. S. Chitgopekar

1985-01-01

distribution when there are errors in observing the zeros and ones, and obtains both the maximum likelihood and moment estimates of the Poisson mean and the error probabilities. It is interesting to note that both methods fail to give unique estimates of these parameters unless the error probabilities are functionally related. However, it is equally interesting to observe that the estimate of the Poisson mean does not depend on the functional relationship between the error probabilities.

1. Parameter Estimation of Damped Compound Pendulum Using Bat Algorithm

Directory of Open Access Journals (Sweden)

2016-01-01

Full Text Available In this study, parameter identification of the damped compound pendulum system is proposed using one of the most promising nature-inspired algorithms, the Bat Algorithm (BA). The procedure used to achieve parameter identification of the experimental system consists of input-output data collection, ARX model order selection and parameter estimation using the BA method. A PRBS signal is used as the input signal to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the autoregressive with exogenous input (ARX) model. The performance of the model is validated using the mean squared error (MSE) between the actual and predicted output responses of the models. Finally, a comparative study is conducted between BA and a conventional estimation method (least squares). Based on the results obtained, the MSE produced by the Bat Algorithm (BA) outperformed that of the least squares (LS) method.

2. Iterative methods for distributed parameter estimation in parabolic PDE

Energy Technology Data Exchange (ETDEWEB)

Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)

1994-12-31

The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the forward problem is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.

3. Method for Estimating the Parameters of LFM Radar Signal

Directory of Open Access Journals (Sweden)

Tan Chuan-Zhang

2017-01-01

Full Text Available In order to obtain reliable parameter estimates, it is very important to protect the integrity of the linear frequency modulation (LFM) signal. Therefore, in practical LFM radar signal processing, the length of the data frame is often greater than the pulse width (PW) of the signal. In this condition, estimating the parameters by fractional Fourier transform (FrFT) will cause the signal-to-noise ratio (SNR) to decrease. Aiming at this problem, we multiply the data frame by a Gaussian window to improve the SNR. In addition, for a further improvement in parameter estimation precision, a novel algorithm is derived via Lagrange interpolation polynomials, and we enhance the algorithm with a logarithmic transformation. Simulation results demonstrate that the derived algorithm significantly reduces the estimation errors of the chirp rate and initial frequency.

4. Simple method for quick estimation of aquifer hydrogeological parameters

Science.gov (United States)

Ma, C.; Li, Y. Y.

2017-08-01

Development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. Addressing the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters could be obtained by solving for the coefficients of the regression equation. The application of the proposed method was illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yielded quick and accurate estimates of the aquifer parameters. The proposed method could reliably identify the aquifer parameters from long-distance observed drawdowns and from early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
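
As a hedged illustration of Theis-based parameter estimation (using a direct nonlinear fit rather than the paper's regression of a fitted function), transmissivity T and storativity S can be recovered from drawdown data with the Theis well function, which is the exponential integral. The pumping rate, observation distance, and data below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import exp1

Q, r = 0.02, 50.0   # pumping rate (m^3/s) and observation distance (m), assumed

def theis(t, T, S):
    """Theis drawdown: s(t) = Q/(4*pi*T) * W(u), u = r^2*S/(4*T*t),
    with the well function W given by the exponential integral exp1."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Synthetic drawdown observations (true T = 5e-3 m^2/s, S = 2e-4)
rng = np.random.default_rng(3)
t = np.logspace(1, 5, 30)                               # times (s)
s_obs = theis(t, 5e-3, 2e-4) * (1.0 + rng.normal(0, 0.02, t.size))

# Bounds keep the solver in the physically meaningful positive region
(T_hat, S_hat), _ = curve_fit(theis, t, s_obs, p0=[1e-3, 1e-4],
                              bounds=([1e-5, 1e-6], [1.0, 1e-1]))
```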

5. Parameter Estimation in Stochastic Grey-Box Models

DEFF Research Database (Denmark)

Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

2004-01-01

An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended...... Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool...... and proves to have better performance both in terms of quality of estimates for nonlinear systems with significant diffusion and in terms of reproducibility. In particular, the new tool provides more accurate and more consistent estimates of the parameters of the diffusion term....

6. A new approach to estimate Angstrom coefficients

International Nuclear Information System (INIS)

Abdel Wahab, M.

1991-09-01

A simple quadratic equation to estimate global solar radiation with coefficients depending on some physical atmospheric parameters is presented. The importance of the second order and sensitivity to some climatic variations is discussed. (author). 8 refs, 4 figs, 2 tabs

7. Traveltime approximations and parameter estimation for orthorhombic media

KAUST Repository

Masmoudi, Nabil

2016-05-30

Building anisotropy models is necessary for seismic modeling and imaging. However, anisotropy estimation is challenging due to the trade-off between inhomogeneity and anisotropy. Luckily, we can estimate the anisotropy parameters if we relate them analytically to traveltimes. Using perturbation theory, we have developed traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2, and Δχ in inhomogeneous background media. The parameter Δχ is related to Tsvankin-Thomsen notation and ensures easier computation of traveltimes in the background model. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. We have used the Shanks transform to enhance the accuracy of the formulas. A homogeneous medium simplification of the traveltime expansion provided a nonhyperbolic moveout description of the traveltime that was more accurate than other derived approximations. Moreover, the formulation provides a computationally efficient tool to solve the eikonal equation of an orthorhombic medium, without any constraints on the background model complexity. Although the expansion is based on the factorized representation of the perturbation parameters, smooth variations of these parameters (represented as effective values) provide reasonable results. Thus, this formulation provides a mechanism to estimate the three effective parameters η1, η2, and Δχ. We have derived Dix-type formulas for orthorhombic media to convert the effective parameters to their interval values.

8. Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

Science.gov (United States)

Hoshino, Takahiro; Shigemasu, Kazuo

2008-01-01

The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…

9. Deep convolutional neural networks for estimating porous material parameters with ultrasound tomography

Science.gov (United States)

Lähivaara, Timo; Kärkkäinen, Leo; Huttunen, Janne M. J.; Hesthaven, Jan S.

2018-02-01

We study the feasibility of data-based machine learning applied to ultrasound tomography to estimate water-saturated porous material parameters. In this work, the data to train the neural networks are simulated by solving wave propagation in coupled poroviscoelastic-viscoelastic-acoustic media. As the forward model, we consider a high-order discontinuous Galerkin method, while deep convolutional neural networks are used to solve the parameter estimation problem. In the numerical experiment, we estimate the material porosity and tortuosity, while the remaining parameters, which are of less interest, are successfully marginalized in the neural-network-based inversion. Computational examples confirm the feasibility and accuracy of this approach.

10. Small sample GEE estimation of regression parameters for longitudinal data.

Science.gov (United States)

Paul, Sudhir; Zhang, Xuemao

2014-09-28

Longitudinal (clustered) response data arise in many bio-statistical applications which, in general, cannot be assumed to be independent. Generalized estimating equations (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias-adjusted GEE estimators of the regression parameters in longitudinal data are obtained when the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both the bias-corrected methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of correlation structure, and impact of cluster size on bias correction. Both these methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias-adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤ 50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, the method GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used.

11. Pattern statistics on Markov chains and sensitivity to parameter estimation

Directory of Open Access Journals (Sweden)

Nuel Grégory

2006-10-01

Full Text Available Abstract Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition, and its parameters must usually be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant words common to a set of sequences, ...). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta method to give an explicit expression for σ, the standard deviation of a pattern statistic. This result is validated using simulations, and a simple pattern study is also considered. Conclusion: We establish that the use of high-order Markov models could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation.

12. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

Science.gov (United States)

Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

2017-06-01

A regression model represents the relationship between independent variables and a dependent variable. In logistic regression the dependent variable is categorical and the model is used to calculate odds; when the dependent variable has ordered levels, the model is ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine values for a population based on a sample. The purpose of this research is parameter estimation of the GWOLR model using R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units are 144 villages in Semarang City. The results give a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.

13. Quasi-Newton methods for parameter estimation in functional differential equations

Science.gov (United States)

Brewer, Dennis W.

1988-01-01

A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.

14. A robust methodology for kinetic model parameter estimation for biocatalytic reactions

DEFF Research Database (Denmark)

Al-Haque, Naweed; Andrade Santacoloma, Paloma de Gracia; Lima Afonso Neto, Watson

2012-01-01

lead to globally optimized parameter values. In this article, a robust methodology to estimate parameters for biocatalytic reaction kinetic expressions is proposed. The methodology determines the parameters in a systematic manner by exploiting the best features of several of the current approaches...... parameters, which are strongly correlated with each other. State-of-the-art methodologies such as nonlinear regression (using progress curves) or graphical analysis (using initial rate data, for example, the Lineweaver-Burke plot, Hanes plot or Dixon plot) often incorporate errors in the estimates and rarely...

15. A Practical Approach for Parameter Identification with Limited Information

DEFF Research Database (Denmark)

Zeni, Lorenzo; Yang, Guangya; Tarnowski, Germán Claudio

2014-01-01

A practical parameter estimation procedure for a real excitation system is reported in this paper. The core algorithm is based on genetic algorithm (GA) which estimates the parameters of a real AC brushless excitation system with limited information about the system. Practical considerations are ...... parameters. The whole methodology is described and the estimation strategy is presented in this paper....

16. Parameter Estimation of Damped Compound Pendulum Differential Evolution Algorithm

Directory of Open Access Journals (Sweden)

2016-01-01

Full Text Available This paper presents the parameter identification of a damped compound pendulum using the differential evolution algorithm. The procedure used to achieve parameter identification of the experimental system consisted of input-output data collection, ARX model order selection and parameter estimation using the conventional least squares (LS) method and the differential evolution (DE) algorithm. A PRBS signal is used as the input signal to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the ARX model. The residual error between the actual and predicted output responses of the models is validated using the mean squared error (MSE). Analysis showed that the MSE value for LS is 0.0026 and the MSE value for DE is 3.6601×10-5. Based on the results obtained, it was found that DE has a lower MSE than the LS method.
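
A sketch of the estimation step under the setup described, using SciPy's differential evolution to minimize the one-step-ahead prediction error of a first-order ARX model; the system coefficients, input, and noise level are made up for illustration, not the pendulum data.

```python
import numpy as np
from scipy.optimize import differential_evolution

# First-order ARX model: y[k] = a*y[k-1] + b*u[k-1] + e[k]
rng = np.random.default_rng(4)
a_true, b_true, n = 0.8, 0.5, 500
u = np.sign(rng.normal(size=n))              # PRBS-like binary input
y = np.zeros(n)
for k in range(1, n):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + rng.normal(0, 0.01)

def mse(theta):
    """One-step-ahead prediction error of candidate ARX parameters."""
    a, b = theta
    y_pred = a * y[:-1] + b * u[:-1]
    return np.mean((y[1:] - y_pred) ** 2)

result = differential_evolution(mse, bounds=[(-1, 1), (-1, 1)], seed=0)
a_hat, b_hat = result.x
```

For this quadratic cost, DE and LS converge to essentially the same minimum; the paper's comparison concerns how closely each optimizer reaches it on real data.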

17. CTER—Rapid estimation of CTF parameters with error assessment

Energy Technology Data Exchange (ETDEWEB)

Penczek, Pawel A., E-mail: Pawel.A.Penczek@uth.tmc.edu [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Fang, Jia [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Li, Xueming; Cheng, Yifan [The Keck Advanced Microscopy Laboratory, Department of Biochemistry and Biophysics, University of California, San Francisco, CA 94158 (United States); Loerke, Justus; Spahn, Christian M.T. [Institut für Medizinische Physik und Biophysik, Charité – Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin (Germany)

2014-05-01

In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user it is necessary to provide an assessment of the errors of fitted parameter values. In this work we introduce CTER, a CTF parameter estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. - Highlights: • We describe methodology for estimation of CTF parameters with error assessment. • Error estimates provide means for automated elimination of inferior micrographs. • High computational efficiency allows real-time monitoring of EM data quality. • Accurate CTF estimation yields structure of the 80S human ribosome at 3.85 Å.

18. Parameter estimation in stochastic rainfall-runoff models

DEFF Research Database (Denmark)

Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

2006-01-01

A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all...... the parameter values are optimal for simulation or prediction. The data originates from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature and one output data series...

19. Estimation of octanol/water partition coefficients using LSER parameters

Science.gov (United States)

Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.

1998-01-01

The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals, which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, but only moderate accuracy should be expected.

20. Application of genetic algorithms for parameter estimation in liquid chromatography

International Nuclear Information System (INIS)

Hernandez Torres, Reynier; Irizar Mesa, Mirtha; Tavares Camara, Leoncio Diogenes

2012-01-01

In chromatography, complex inverse problems related to parameter estimation and process optimization arise. Metaheuristic methods are general-purpose approximate algorithms that seek, and often find, good solutions at a reasonable computational cost. They are iterative procedures that perform a robust search of a solution space. Genetic algorithms are optimization techniques based on the principles of genetics and natural selection. They have demonstrated very good performance as global optimizers in many types of applications, including inverse problems. In this work, the effectiveness of genetic algorithms for estimating parameters in liquid chromatography is investigated.
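
As a sketch of the approach, the toy genetic algorithm below recovers the two parameters of a hypothetical decay curve from synthetic data. The model, operators (truncation selection, arithmetic crossover, Gaussian mutation) and settings are illustrative stand-ins, not the authors' chromatographic model.

```python
import math
import random

def model(t, a, b):
    # Hypothetical two-parameter decay curve standing in for a
    # chromatographic response model
    return a * math.exp(-b * t)

times = [0.5 * i for i in range(20)]
data = [model(t, 5.0, 0.3) for t in times]   # synthetic "measurements"

def sse(p):
    # sum of squared errors between model and data for parameters p
    return sum((model(t, p[0], p[1]) - y) ** 2 for t, y in zip(times, data))

def genetic_algorithm(pop_size=60, generations=120, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.0, 10.0), rng.uniform(0.0, 1.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sse)
        survivors = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            w = rng.random()                        # arithmetic crossover
            child = [w * u + (1.0 - w) * v for u, v in zip(p1, p2)]
            if rng.random() < 0.2:                  # Gaussian mutation
                j = rng.randrange(2)
                child[j] += rng.gauss(0.0, 0.1)
            children.append(tuple(child))
        pop = survivors + children
    return min(pop, key=sse)

a_hat, b_hat = genetic_algorithm()
```
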

1. Estimation of G-renewal process parameters as an ill-posed inverse problem

International Nuclear Information System (INIS)

Krivtsov, V.; Yevkin, O.

2013-01-01

Statistical estimation of G-renewal process parameters is an important estimation problem, which has been considered by many authors. We view this problem from the standpoint of a mathematically ill-posed, inverse problem (the solution is not unique and/or is sensitive to statistical error) and propose a regularization approach specifically suited to the G-renewal process. Regardless of the estimation method, the respective objective function usually involves parameters of the underlying life-time distribution and simultaneously the restoration parameter. In this paper, we propose to regularize the problem by decoupling the estimation of the aforementioned parameters. Using a simulation study, we show that the resulting estimation/extrapolation accuracy of the proposed method is considerably higher than that of the existing methods

2. Bayesian estimation of parameters in a regional hydrological model

Directory of Open Access Journals (Sweden)

K. Engeland

2002-01-01

Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
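
The MCMC machinery behind such estimates can be illustrated at toy scale. The sketch below samples the posterior of a single parameter of a hypothetical linear precipitation-to-flow relation under a simple Gaussian likelihood (no AR(1) term) with a flat prior; all numbers are made up and this is not the Ecomag model.

```python
import math
import random
import statistics

rng = random.Random(42)

# Toy stand-in for a rainfall-runoff model: flow = theta * precipitation
precip = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
true_theta, sigma = 0.7, 0.1
flow = [true_theta * p + rng.gauss(0.0, sigma) for p in precip]

def log_lik(theta):
    # Gaussian likelihood of the simulation errors (the "simple" model)
    return sum(-(q - theta * p) ** 2 / (2.0 * sigma ** 2)
               for p, q in zip(precip, flow))

def metropolis(n_iter=20000, step=0.05):
    theta, ll = 1.0, log_lik(1.0)       # deliberately poor starting value
    samples = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)   # random-walk proposal
        ll_prop = log_lik(prop)
        if ll_prop >= ll or rng.random() < math.exp(ll_prop - ll):
            theta, ll = prop, ll_prop
        samples.append(theta)
    return samples[n_iter // 2:]        # discard burn-in

posterior = metropolis()
theta_hat = statistics.fmean(posterior)
```
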

3. Targeted estimation of nuisance parameters to obtain valid statistical inference.

Science.gov (United States)

van der Laan, Mark J

2014-01-01

In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special

4. SCoPE: an efficient method of Cosmological Parameter Estimation

International Nuclear Information System (INIS)

2014-01-01

The Markov Chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsic serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching that helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and the convergence of the chains is faster. Using SCoPE, we carry out some cosmological parameter estimations with different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters from two illustrative commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis, on the one hand, help us to understand the workability of SCoPE better; on the other hand, they provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data

5. Systematic Approach for Decommissioning Planning and Estimating

International Nuclear Information System (INIS)

Dam, A. S.

2002-01-01

Nuclear facility decommissioning, satisfactorily completed at the lowest cost, relies on a systematic approach to planning, estimating, and documenting the work. High-quality information is needed to properly perform the planning and estimating. A systematic approach to collecting and maintaining the needed information is recommended, using a knowledgebase system for information management. A systematic approach is also recommended to develop the decommissioning plan, cost estimate and schedule. A probabilistic project cost and schedule risk analysis is included as part of the planning process. The entire effort is performed by an experienced team of decommissioning planners, cost estimators, schedulers, and facility-knowledgeable owner representatives. The plant data, work plans, cost and schedule are entered into a knowledgebase. This systematic approach has been used successfully for decommissioning planning and cost estimating for a commercial nuclear power plant. Elements of this approach have been used for numerous cost estimates and estimate reviews. The plan and estimate in the knowledgebase should be a living document, updated periodically, to support decommissioning fund provisioning, with the plan ready for use when the need arises.

6. Estimation of Compaction Parameters Based on Soil Classification

Science.gov (United States)

Lubis, A. S.; Muis, Z. A.; Hastuty, I. P.; Siregar, I. M.

2018-02-01

Factors that must be considered in soil compaction works are the type of soil material, field control, maintenance and availability of funds. Those problems raised the idea of how to estimate the density of the soil with a proper implementation system, fast and economical. This study aims to estimate the compaction parameters, i.e. the maximum dry unit weight (γdmax) and optimum water content (wopt), based on soil classification. Thirty samples were each tested for index properties and compaction. All data from the laboratory test results were used to estimate the compaction parameter values by linear regression and the Goswami Model. From the results, the soil types were A-4, A-6, and A-7 according to AASHTO and SC, SC-SM, and CL based on USCS. By linear regression, the estimation equations were (γdmax*) = 1.862 - 0.005*FINES - 0.003*LL for the maximum dry unit weight and (wopt*) = -0.607 + 0.362*FINES + 0.161*LL for the optimum water content. By the Goswami Model (with equation Y = m*logG + k), m = -0.376 and k = 2.482 for estimation of the maximum dry unit weight (γdmax*), and m = 21.265 and k = -32.421 for estimation of the optimum water content (wopt*). For both of these equations a 95% confidence interval was obtained.
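
The fitted equations quoted in the abstract can be applied directly; a small helper illustrates them (the input values in the example are hypothetical).

```python
import math

def compaction_estimates(fines, ll, g):
    """Predictions from the fitted equations quoted in the abstract.
    fines: percent fines, ll: liquid limit (%), g: the grading parameter
    G of the Goswami model Y = m*log10(G) + k."""
    gdmax_lr = 1.862 - 0.005 * fines - 0.003 * ll    # linear regression
    wopt_lr = -0.607 + 0.362 * fines + 0.161 * ll
    gdmax_gm = -0.376 * math.log10(g) + 2.482        # Goswami model
    wopt_gm = 21.265 * math.log10(g) - 32.421
    return gdmax_lr, wopt_lr, gdmax_gm, wopt_gm

# Example: a hypothetical soil with 50% fines, LL = 30% and G = 100
example = compaction_estimates(50.0, 30.0, 100.0)
```
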

7. An Adaptive Estimation of Forecast Error Covariance Parameters for Kalman Filtering Data Assimilation

Institute of Scientific and Technical Information of China (English)

Xiaogu ZHENG

2009-01-01

An adaptive estimation of forecast error covariance matrices is proposed for Kalman filtering data assimilation. A forecast error covariance matrix is initially estimated using an ensemble of perturbation forecasts. This initially estimated matrix is then adjusted with scale parameters that are adaptively estimated by minimizing -2log-likelihood of observed-minus-forecast residuals. The proposed approach could be applied to Kalman filtering data assimilation with imperfect models when the model error statistics are not known. A simple nonlinear model (Burgers' equation model) is used to demonstrate the efficacy of the proposed approach.
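
The core of the approach, reduced to a scalar toy problem: choose the scale factor on the forecast error variance that minimises the -2log-likelihood of the observed-minus-forecast residuals (innovations). The numbers below are hypothetical, and the crude grid search stands in for a proper minimiser.

```python
import math

def neg2_log_lik(lam, innovations, P, R):
    # -2log-likelihood of Gaussian innovations with variance lam*P + R
    S = lam * P + R
    return sum(math.log(S) + v * v / S for v in innovations)

def estimate_scale(innovations, P, R):
    # crude grid search over candidate scale factors
    grid = [0.1 * i for i in range(1, 101)]
    return min(grid, key=lambda lam: neg2_log_lik(lam, innovations, P, R))

# Hypothetical innovations whose variance (about 4.3) exceeds P + R = 2.0,
# so the forecast error variance should be inflated by roughly a factor of 3
innovations = [2.1, -1.8, 2.5, -2.2, 1.9, -2.0, 2.3, -1.7]
lam = estimate_scale(innovations, P=1.0, R=1.0)
```
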

8. Low Complexity Parameter Estimation For Off-the-Grid Targets

KAUST Repository

Jardak, Seifallah

2015-10-05

In multiple-input multiple-output radar, to estimate the reflection coefficient, spatial location, and Doppler shift of a target, a derived cost function is usually evaluated and optimized over a grid of points. The performance of such algorithms is directly affected by the size of the grid: increasing the number of points will enhance the resolution of the algorithm but exponentially increase its complexity. In this work, to estimate the parameters of a target, a reduced-complexity super-resolution algorithm is proposed. For off-the-grid targets, it uses a low-order two-dimensional fast Fourier transform to determine a suboptimal solution and then an iterative algorithm to jointly estimate the spatial location and Doppler shift. Simulation results show that the mean square estimation errors of the proposed estimators achieve the Cramér-Rao lower bound. © 2015 IEEE.

9. Revised models and genetic parameter estimates for production and ...

African Journals Online (AJOL)

Genetic parameters for production and reproduction traits in the Elsenburg Dormer sheep stud were estimated using records of 11743 lambs born between 1943 and 2002. An animal model with direct and maternal additive, maternal permanent and temporary environmental effects was fitted for traits considered traits of the ...

10. Estimation of Physical Parameters in Linear and Nonlinear Dynamic Systems

DEFF Research Database (Denmark)

Knudsen, Morten

variance and confidence ellipsoid is demonstrated. The relation is based on a new theorem on maxima of an ellipsoid. The procedure for input signal design and physical parameter estimation is tested on a number of examples, linear as well as nonlinear and simulated as well as real processes, and it appears...

11. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

Science.gov (United States)

Berhausen, Sebastian; Paszek, Stefan

2016-01-01

In recent years, system failures have occurred in many power systems all over the world. They have resulted in a lack of power supply to a large number of recipients. To minimize the risk of occurrence of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, a current base of parameters of the models of generating units, containing the models of synchronous generators, is necessary. In the paper, a method is presented for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing the objective function defined as a mean square error for deviations between the measurement waveforms and the waveforms calculated based on the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

12. MPEG2 video parameter and no reference PSNR estimation

DEFF Research Database (Denmark)

Li, Huiying; Forchhammer, Søren

2009-01-01

MPEG coded video may be processed for quality assessment or postprocessed to reduce coding artifacts or transcoded. Utilizing information about the MPEG stream may be useful for these tasks. This paper deals with estimating MPEG parameter information from the decoded video stream without access t...

13. Estimates Of Genetic Parameters Of Body Weights Of Different ...

African Journals Online (AJOL)

Forty-four (44) farrowings were used to estimate the genetic parameters (heritability and repeatability) of body weight of pigs. Results obtained from the study showed that the heritability (h2) of birth and weaning weights were moderate (0.33±0.16 ...

14. Estimation of stature from facial parameters in adult Abakaliki people ...

African Journals Online (AJOL)

This study is carried out in order to estimate the height of adult Igbo people of Abakaliki ethnic group in South-Eastern Nigeria from their facial Morphology. The parameters studied include Facial Length, Bizygomatic Diameter, Bigonial Diameter, Nasal Length, and Nasal Breadth. A total of 1000 subjects comprising 669 ...

15. Measuring, calculating and estimating PEP's parasitic mode loss parameters

International Nuclear Information System (INIS)

Weaver, J.N.

1981-01-01

This note discusses various ways the parasitic mode losses from a bunched beam to a vacuum chamber can be measured, calculated or estimated. A listing of the parameter, k, for the various PEP ring components is included. A number of formulas for calculating multiple and single pass losses are discussed and evaluated for several cases. 25 refs., 1 fig., 1 tab

16. Visco-piezo-elastic parameter estimation in laminated plate structures

DEFF Research Database (Denmark)

Araujo, A. L.; Mota Soares, C. M.; Herskovits, J.

2009-01-01

A parameter estimation technique is presented in this article, for identification of elastic, piezoelectric and viscoelastic properties of active laminated composite plates with surface-bonded piezoelectric patches. The inverse method presented uses experimental data in the form of a set of measu...

17. Estimates of genetic parameters and genetic gains for growth traits ...

African Journals Online (AJOL)

Estimates of genetic parameters and genetic gains for growth traits of two Eucalyptus ... In South Africa, Eucalyptus urophylla is an important species due to its ... as hybrid parents to cross with E. grandis was 59.8% over the population mean.

18. Estimation of riverbank soil erodibility parameters using genetic ...

Tapas Karmaker

2017-11-07

Nov 7, 2017 ... process. Therefore, this is a study to verify the applicability of inverse parameter ... successful modelling of the riverbank erosion, precise estimation of ... For this simulation, about 40 iterations are found to attain the convergence. ... rithm for function optimization: a Matlab implementation. NCSU-IE TR ...

19. Estimation of shear strength parameters of lateritic soils using

African Journals Online (AJOL)

... a tool to estimate the. Nigerian Journal of Technology (NIJOTECH). Vol. ... modeling tools for the prediction of shear strength parameters for lateritic ... 2.2 Geotechnical Analysis of the Soils ... The back propagation learning algorithm is the most popular and ... [10] Alsaleh, M. I., Numerical modeling for strain localization in ...

20. Estimation of genetic parameters for carcass traits in Japanese quail ...

African Journals Online (AJOL)

The aim of this study was to estimate genetic parameters of some carcass characteristics in the Japanese quail. For this aim, carcass weight (Cw), breast weight (Bw), leg weight (Lw), abdominal fat weight (AFw), carcass yield (CP), breast percentage (BP), leg percentage (LP) and abdominal fat percentage (AFP) were ...

1. Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea

KAUST Repository

Sawlan, Zaid A

2012-12-01

Tsunami concerns have increased in the world after the 2004 Indian Ocean tsunami and the 2011 Tohoku tsunami. Consequently, tsunami models have been developed rapidly in the last few years. One of the advanced tsunami models is the GeoClaw tsunami model introduced by LeVeque (2011). This model is adaptive and consistent. Because of different sources of uncertainties in the model, observations are needed to improve model prediction through a data assimilation framework. Model inputs are earthquake parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines a tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction, while the smoother estimates the earthquake parameters. This method reduces the error produced by uncertain inputs. In addition, a state-parameter EnKF is implemented to estimate the earthquake parameters. Although the number of observations is small, the estimated parameters generate a better tsunami prediction than the model alone. Methods and results of prediction experiments in the Red Sea are presented and the prospect of developing an operational tsunami prediction system in the Red Sea is discussed.

2. Accuracy and sensitivity analysis on seismic anisotropy parameter estimation

Science.gov (United States)

Yan, Fuyong; Han, De-Hua

2018-04-01

There is significant uncertainty in measuring Thomsen's parameter δ in the laboratory, even though the dimensions and orientations of the rock samples are known. It is expected that more challenges will be encountered in estimating the seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model using the database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivities to the source-receiver offset, vertical interval velocity error and time picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread length. However, this method is extremely sensitive to the time picking error caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for the deeper layers and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible that an isotropic formation can be misinterpreted as a strongly anisotropic formation. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.

3. Estimation of Parameters in Mean-Reverting Stochastic Systems

Directory of Open Access Journals (Sweden)

Tianhai Tian

2014-01-01

Full Text Available The stochastic differential equation (SDE) is a very important mathematical tool to describe complex systems in which noise plays an important role. SDE models have been widely used to study the dynamic properties of various nonlinear systems in biology, engineering, finance, and economics, as well as the physical sciences. Since an SDE can generate unlimited numbers of trajectories, it is difficult to estimate model parameters based on experimental observations, which may represent only one trajectory of the stochastic model. Although substantial research efforts have been made to develop effective methods, it is still a challenge to infer unknown parameters in SDE models from observations that may have large variations. Using an interest rate model as a test problem, in this work we use Bayesian inference and the Markov Chain Monte Carlo method to estimate unknown parameters in SDE models.
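
As a minimal illustration of this kind of inference, the sketch below simulates a Vasicek-type mean-reverting interest rate with the Euler scheme and recovers its drift parameters with a random-walk Metropolis sampler. This is a simplified stand-in for the paper's model and sampler; all parameter values and settings are assumptions.

```python
import math
import random

rng = random.Random(7)

# Euler-simulated mean-reverting (Vasicek-type) rate path:
# r[t+1] = r[t] + kappa*(theta - r[t])*dt + sigma*sqrt(dt)*Z
kappa_true, theta_true, sigma, dt = 2.0, 0.05, 0.02, 0.01
r = [0.03]
for _ in range(1000):
    r.append(r[-1] + kappa_true * (theta_true - r[-1]) * dt
             + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))

def log_lik(kappa, theta):
    # Gaussian transition density of the Euler discretisation
    var = sigma ** 2 * dt
    return sum(-(b - (a + kappa * (theta - a) * dt)) ** 2 / (2.0 * var)
               for a, b in zip(r, r[1:]))

# Random-walk Metropolis over (kappa, theta) with flat priors
k, th = 1.0, 0.1                         # deliberately poor starting point
ll = log_lik(k, th)
ks, ths = [], []
for _ in range(5000):
    kp = k + rng.gauss(0.0, 0.1)
    thp = th + rng.gauss(0.0, 0.002)
    llp = log_lik(kp, thp)
    if llp >= ll or rng.random() < math.exp(llp - ll):
        k, th, ll = kp, thp, llp
    ks.append(k)
    ths.append(th)

kappa_hat = sum(ks[2500:]) / 2500.0      # posterior means after burn-in
theta_hat = sum(ths[2500:]) / 2500.0
```

The long-run mean theta is recovered sharply; the mean-reversion speed kappa carries much larger posterior uncertainty on a short record, which is itself a useful lesson of the exercise.
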

4. Estimating Arrhenius parameters using temperature programmed molecular dynamics

International Nuclear Information System (INIS)

Imandi, Venkataramana; Chatterjee, Abhijit

2016-01-01

Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.

5. Estimating Arrhenius parameters using temperature programmed molecular dynamics

Energy Technology Data Exchange (ETDEWEB)

Imandi, Venkataramana; Chatterjee, Abhijit, E-mail: abhijit@che.iitb.ac.in [Department of Chemical Engineering, Indian Institute of Technology Bombay, Mumbai 400076 (India)

2016-07-21

Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
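
The pipeline described above (maximum likelihood rates from waiting times at several temperatures, then an Arrhenius fit) can be sketched at toy scale. The process, barrier and prefactor below are hypothetical, and waiting times are drawn from the exponential distribution that maximum likelihood analysis assumes.

```python
import math
import random

rng = random.Random(3)
kB = 8.617e-5                          # Boltzmann constant, eV/K

# Hypothetical activated process: rate k(T) = A * exp(-Ea / (kB*T))
A_true, Ea_true = 1.0e13, 0.5          # prefactor (1/s), barrier (eV)

inv_T, log_k = [], []
for T in (600.0, 800.0, 1000.0):       # sampled temperatures
    rate = A_true * math.exp(-Ea_true / (kB * T))
    waits = [rng.expovariate(rate) for _ in range(800)]
    k_mle = len(waits) / sum(waits)    # MLE of an exponential rate
    inv_T.append(1.0 / T)
    log_k.append(math.log(k_mle))

# Least-squares fit of ln k = ln A - (Ea/kB)*(1/T)
n = len(inv_T)
mx, my = sum(inv_T) / n, sum(log_k) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(inv_T, log_k))
         / sum((x - mx) ** 2 for x in inv_T))
Ea_hat = -slope * kB
A_hat = math.exp(my - slope * mx)
# Ea_hat and A_hat can now extrapolate rates to low temperatures
# where direct simulation of rare transitions would be impractical.
```
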

6. Using Genetic Algorithm to Estimate Hydraulic Parameters of Unconfined Aquifers

Directory of Open Access Journals (Sweden)

2009-03-01

Full Text Available Nowadays, optimization techniques such as Genetic Algorithms (GA) have attracted wide attention among scientists for solving complicated engineering problems. In this article, pumping test data are used to assess the efficiency of GA in estimating unconfined aquifer parameters and a sensitivity analysis is carried out to propose an optimal arrangement of GA. For this purpose, hydraulic parameters of three sets of pumping test data are calculated by GA and they are compared with the results of graphical methods. The results indicate that the GA technique is an efficient, reliable, and powerful method for estimating the hydraulic parameters of unconfined aquifers and, further, that in cases of deficiency in pumping test data, it has a better performance than graphical methods.

7. Parameter estimation of Lorenz chaotic system using a hybrid swarm intelligence algorithm

International Nuclear Information System (INIS)

Lazzús, Juan A.; Rivera, Marco; López-Caraballo, Carlos H.

2016-01-01

A novel hybrid swarm intelligence algorithm for chaotic system parameter estimation is presented. For this purpose, parameter estimation for the Lorenz system is formulated as a multidimensional problem, and a hybrid approach based on particle swarm optimization with ant colony optimization (PSO–ACO) is implemented to solve this problem. Firstly, the performance of the proposed PSO–ACO algorithm is tested on a set of three representative benchmark functions, and the impact of the parameter settings on PSO–ACO efficiency is studied. Secondly, the parameter estimation is converted into an optimization problem on a three-dimensional Lorenz system. Numerical simulations on the Lorenz model and comparisons with results obtained by other algorithms showed that PSO–ACO is a very powerful tool for parameter estimation with high accuracy and low deviations. - Highlights: • PSO–ACO combines particle swarm optimization with ant colony optimization. • This study is the first research on PSO–ACO to estimate parameters of chaotic systems. • The PSO–ACO algorithm can identify the parameters of the three-dimensional Lorenz system with low deviations. • PSO–ACO is a very powerful tool for parameter estimation of other chaotic systems.

8. Parameter estimation of Lorenz chaotic system using a hybrid swarm intelligence algorithm

Energy Technology Data Exchange (ETDEWEB)

Lazzús, Juan A., E-mail: jlazzus@dfuls.cl; Rivera, Marco; López-Caraballo, Carlos H.

2016-03-11

A novel hybrid swarm intelligence algorithm for chaotic system parameter estimation is presented. For this purpose, parameter estimation for the Lorenz system is formulated as a multidimensional problem, and a hybrid approach based on particle swarm optimization with ant colony optimization (PSO–ACO) is implemented to solve this problem. Firstly, the performance of the proposed PSO–ACO algorithm is tested on a set of three representative benchmark functions, and the impact of the parameter settings on PSO–ACO efficiency is studied. Secondly, the parameter estimation is converted into an optimization problem on a three-dimensional Lorenz system. Numerical simulations on the Lorenz model and comparisons with results obtained by other algorithms showed that PSO–ACO is a very powerful tool for parameter estimation with high accuracy and low deviations. - Highlights: • PSO–ACO combines particle swarm optimization with ant colony optimization. • This study is the first research on PSO–ACO to estimate parameters of chaotic systems. • The PSO–ACO algorithm can identify the parameters of the three-dimensional Lorenz system with low deviations. • PSO–ACO is a very powerful tool for parameter estimation of other chaotic systems.
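
A plain global-best PSO (without the ACO hybridisation of the paper) already illustrates the estimation problem: recover (σ, ρ, β) of the Lorenz system by minimising a one-step prediction error on a synthetic trajectory. Cost function, swarm settings and search ranges are illustrative assumptions.

```python
import random

rng = random.Random(5)

def lorenz(s, sigma, rho, beta):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

# Synthetic trajectory generated with the true parameters (10, 28, 8/3)
dt, true_p = 0.01, (10.0, 28.0, 8.0 / 3.0)
traj = [(1.0, 1.0, 1.0)]
for _ in range(150):
    d = lorenz(traj[-1], *true_p)
    traj.append(tuple(s + dt * ds for s, ds in zip(traj[-1], d)))

def cost(p):
    # one-step forward-Euler prediction error of candidate parameters p
    c = 0.0
    for a, b in zip(traj, traj[1:]):
        d = lorenz(a, *p)
        c += sum((ai + dt * di - bi) ** 2 for ai, di, bi in zip(a, d, b))
    return c

# Plain global-best PSO over (sigma, rho, beta)
lo, hi = (0.0, 0.0, 0.0), (20.0, 50.0, 10.0)
n, iters, w, c1, c2 = 30, 150, 0.72, 1.5, 1.5
pos = [[rng.uniform(a, b) for a, b in zip(lo, hi)] for _ in range(n)]
vel = [[0.0, 0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]
pcost = [cost(p) for p in pos]
g = min(range(n), key=lambda i: pcost[i])
gbest = pbest[g][:]
for _ in range(iters):
    for i in range(n):
        for j in range(3):
            vel[i][j] = (w * vel[i][j]
                         + c1 * rng.random() * (pbest[i][j] - pos[i][j])
                         + c2 * rng.random() * (gbest[j] - pos[i][j]))
            pos[i][j] += vel[i][j]
        c = cost(pos[i])
        if c < pcost[i]:                 # update personal best
            pbest[i], pcost[i] = pos[i][:], c
    g = min(range(n), key=lambda i: pcost[i])
    gbest = pbest[g][:]                  # update global best

sigma_hat, rho_hat, beta_hat = gbest
```
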

9. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

Directory of Open Access Journals (Sweden)

Jonathan R Karr

2015-05-01

Full Text Available Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.

10. A framework for scalable parameter estimation of gene circuit models using structural information

KAUST Repository

Kuwahara, Hiroyuki

2013-06-21

Motivation: Systematic and scalable parameter estimation is key to constructing complex gene regulatory models and to ultimately facilitating an integrative systems biology approach to quantitatively understanding the molecular mechanisms underpinning gene regulation. Results: Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of tailored approaches that exploit domain-specific information may be key to reverse engineering of complex biological systems.

11. A framework for scalable parameter estimation of gene circuit models using structural information

KAUST Repository

Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin

2013-01-01

Motivation: Systematic and scalable parameter estimation is key to constructing complex gene regulatory models and to ultimately facilitating an integrative systems biology approach to quantitatively understanding the molecular mechanisms underpinning gene regulation. Results: Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of tailored approaches that exploit domain-specific information may be key to reverse engineering of complex biological systems.

12. A framework for scalable parameter estimation of gene circuit models using structural information.

Science.gov (United States)

Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin

2013-07-01

Systematic and scalable parameter estimation is key to constructing complex gene regulatory models and to ultimately facilitating an integrative systems biology approach to quantitatively understanding the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of tailored approaches that exploit domain-specific information may be key to reverse engineering of complex biological systems. Availability: http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.

13. Joint Multi-Fiber NODDI Parameter Estimation and Tractography using the Unscented Information Filter

Directory of Open Access Journals (Sweden)

Yogesh eRathi

2016-04-01

Full Text Available Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion and density imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts, along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts.

14. Estimating model parameters in nonautonomous chaotic systems using synchronization

International Nuclear Information System (INIS)

Yang, Xiaoli; Xu, Wei; Sun, Zhongkui

2007-01-01

In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters, in addition to a linear feedback coupling for synchronizing systems; some general conditions, obtained by means of the periodic version of the LaSalle invariance principle for differential equations, are then analytically derived to ensure precise evaluation of the unknown parameters and identical synchronization between the experimental system concerned and its corresponding receiver. Examples are presented employing a parametrically excited 4D new oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of noise strength in simulation.
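
The adaptive-tracking idea in the record above can be sketched on a scalar toy system (not the 4D or Ueda oscillators used in the Letter; the system, gains, and adaptation law below are illustrative choices). An observer is coupled to the measured state via linear feedback, and the parameter estimate is driven by the synchronization error e = x - x̂ through the Lyapunov-motivated law â' = -γ e x, under which V = e²/2 + (a - â)²/(2γ) is non-increasing.

```python
import math

def adaptive_track(a_true=2.0, L=5.0, gamma=20.0, dt=0.001, T=40.0):
    # Master system:  x'  = -a*x  + sin(t)           (a is unknown)
    # Observer:       xh' = -ah*xh + sin(t) + L*e    (linear feedback coupling)
    # Adaptation law: ah' = -gamma * e * x,  e = x - xh
    x, xh, ah, t = 1.0, 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        e = x - xh
        dx = -a_true * x + math.sin(t)
        dxh = -ah * xh + math.sin(t) + L * e
        dah = -gamma * e * x
        x, xh, ah = x + dt * dx, xh + dt * dxh, ah + dt * dah
        t += dt
    return ah

a_hat = adaptive_track()
```

The sinusoidal forcing keeps the state persistently exciting, which is what lets the parameter error (and not just the synchronization error) converge to zero.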

15. Influence of measurement errors and estimated parameters on combustion diagnosis

International Nuclear Information System (INIS)

Payri, F.; Molina, S.; Martin, J.; Armas, O.

2006-01-01

Thermodynamic diagnosis models are valuable tools for the study of Diesel combustion. Inputs required by such models comprise measured mean and instantaneous variables, together with suitable values for adjustable parameters used in different submodels. In the case of measured variables, one may estimate the uncertainty associated with measurement errors; however, the influence of errors in model parameter estimation may not be so easily established on an experimental basis. In this paper, a simulated pressure cycle has been used along with known input parameters, so that any uncertainty in the inputs is avoided. Then, the influence of errors in measured variables and in geometric and heat transmission parameters on the results of a diagnosis combustion model for direct injection diesel engines has been studied. This procedure made it possible to establish the relative importance of these parameters and to set limits on the maximal errors of the model, accounting for both the maximal expected errors in the input parameters and the sensitivity of the model to those errors.

16. Stable Parameter Estimation for Autoregressive Equations with Random Coefficients

Directory of Open Access Journals (Sweden)

V. B. Goryainov

2014-01-01

Full Text Available In recent years there has been a growing interest in non-linear time series models. They are more flexible than traditional linear models and allow a more adequate description of real data. Among these models, the autoregressive model with random coefficients plays an important role. It is widely used in various fields of science and technology, for example, in physics, biology, economics and finance. The model parameters are the mean values of the autoregressive coefficients. Their evaluation is the main task of model identification. The basic method of estimation is still the least squares method, which gives good results for Gaussian time series, but it is quite sensitive to even small disturbances in the assumption of Gaussian observations. In this paper we propose estimates which generalize the least squares estimate in the sense that the quadratic objective function is replaced by an arbitrary convex and even function. A reasonable choice of objective function allows one to keep the benefits of the least squares estimate and eliminate its shortcomings. In particular, the estimates can be made almost as efficient as the least squares estimate in the Gaussian case, while losing almost no accuracy under small deviations of the probability distribution of the observations from the Gaussian distribution. The main result is the proof of consistency and asymptotic normality of the proposed estimates in the particular case of the one-parameter model describing a stationary process with finite variance. Another important result is the derivation of the asymptotic relative efficiency of the proposed estimates in relation to the least squares estimate. This allows the two estimates to be compared, depending on the probability distribution of the innovation process and of the autoregressive coefficients. The results can be used to identify an autoregressive process, especially one of a non-Gaussian nature, and/or autoregressive processes observed with gross
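
As a toy illustration of replacing the quadratic objective with another convex, even function (this is a sketch with hypothetical simulation settings and a crude grid search, not the estimator analyzed in the record above), one can estimate the mean autoregressive coefficient of a random-coefficient AR(1) model with a Huber-type loss:

```python
import random

def simulate_rca1(a_mean, n=2000, b_sd=0.2, e_sd=1.0, seed=7):
    # Random-coefficient AR(1): x_t = (a + b_t) * x_{t-1} + e_t,
    # where b_t and e_t are independent zero-mean Gaussian sequences.
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x = (a_mean + rng.gauss(0.0, b_sd)) * x + rng.gauss(0.0, e_sd)
        xs.append(x)
    return xs

def huber(u, k=1.345):
    # Convex, even objective: quadratic near zero, linear in the tails.
    return 0.5 * u * u if abs(u) <= k else k * (abs(u) - 0.5 * k)

def m_estimate(xs, loss=huber, grid=None):
    # Grid-search M-estimator of the mean autoregressive coefficient:
    # minimize sum_t loss(x_t - a * x_{t-1}) over candidate values of a.
    if grid is None:
        grid = [i / 100.0 for i in range(-95, 96)]
    def objective(a):
        return sum(loss(xs[t] - a * xs[t - 1]) for t in range(1, len(xs)))
    return min(grid, key=objective)

xs = simulate_rca1(0.5)
a_hat = m_estimate(xs)
```

With the squared loss this reduces to ordinary least squares; the Huber loss keeps near-quadratic efficiency for Gaussian data while down-weighting heavy-tailed residuals.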

17. Parameter Estimation of a Closed Loop Coupled Tank Time Varying System using Recursive Methods

International Nuclear Information System (INIS)

Basir, Siti Nora; Yussof, Hanafiah; Shamsuddin, Syamimi; Selamat, Hazlina; Zahari, Nur Ismarrubie

2013-01-01

This project investigates the direct identification of a closed loop plant using a discrete-time approach. The use of Recursive Least Squares (RLS), Recursive Instrumental Variable (RIV) and Recursive Instrumental Variable with Centre-Of-Triangle (RIV + COT) in the parameter estimation of a closed loop time varying system has been considered. The algorithms were applied to a coupled tank system employing a covariance resetting technique, where the times at which parameter changes occur are unknown. The performances of all the parameter estimation methods, RLS, RIV and RIV + COT, were compared. The estimation of the system whose output was corrupted with white and coloured noises was investigated. The covariance resetting technique executed successfully when the parameters changed. RIV + COT gives better estimates than RLS and RIV in terms of convergence and maximum overshoot.
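
A minimal sketch of the covariance-resetting idea follows, for a simple two-parameter linear regression rather than the actual coupled-tank model; the error-threshold reset rule, forgetting factor, and the simulated parameter jump are all hypothetical choices for illustration.

```python
import random

def rls_with_resetting(us, ys, lam=0.99, reset_threshold=0.5, p0=100.0):
    # Recursive least squares for y = theta1*u + theta2, with covariance
    # resetting when the prediction error exceeds a threshold -- a simple
    # stand-in for reacting to a parameter change at an unknown time.
    theta = [0.0, 0.0]
    P = [[p0, 0.0], [0.0, p0]]
    for u, y in zip(us, ys):
        phi = [u, 1.0]
        err = y - (theta[0] * phi[0] + theta[1] * phi[1])
        if abs(err) > reset_threshold:
            P = [[p0, 0.0], [0.0, p0]]     # reset: discount old data quickly
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]          # gain vector
        theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
        # Standard RLS covariance update with forgetting: P = (P - K phi' P)/lam
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(2)]
             for i in range(2)]
    return theta

rng = random.Random(3)
us = [rng.uniform(-1.0, 1.0) for _ in range(400)]
# True slope jumps from 2.0 to -1.0 at t = 200; the intercept stays at 0.5.
ys = [(2.0 if t < 200 else -1.0) * us[t] + 0.5 + rng.gauss(0.0, 0.05)
      for t in range(400)]
theta = rls_with_resetting(us, ys)
```

Resetting P to a large value makes the estimator momentarily behave as if it had seen almost no data, so it re-converges quickly after the jump instead of averaging over both regimes.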

18. Pedotransfer functions estimating soil hydraulic properties using different soil parameters

DEFF Research Database (Denmark)

Børgesen, Christen Duus; Iversen, Bo Vangsø; Jacobsen, Ole Hørbye

2008-01-01

Estimates of soil hydraulic properties using pedotransfer functions (PTF) are useful in many studies such as hydrochemical modelling and soil mapping. The objective of this study was to calibrate and test parametric PTFs that predict soil water retention and unsaturated hydraulic conductivity...... parameters. The PTFs are based on neural networks and the Bootstrap method, use different sets of predictors, and predict the van Genuchten/Mualem parameters. A Danish soil data set (152 horizons) dominated by sandy and sandy loamy soils was used in the development of PTFs to predict the Mualem hydraulic...... conductivity parameters. A larger data set (1618 horizons) with a broader textural range was used in the development of PTFs to predict the van Genuchten parameters. The PTFs using either three or seven textural classes combined with soil organic matter and bulk density gave the most reliable predictions...

19. Consistent Parameter and Transfer Function Estimation using Context Free Grammars

Science.gov (United States)

Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten

2017-04-01

This contribution presents a method for the inference of transfer functions for rainfall-runoff models. Here, transfer functions are defined as parametrized (functional) relationships between a set of spatial predictors (e.g. elevation, slope or soil texture) and model parameters. They are ultimately used for the estimation of consistent, spatially distributed model parameters from a limited number of lumped global parameters. Additionally, they provide a straightforward method for parameter extrapolation from one set of basins to another and can even be used to derive parameterizations for multi-scale models [see: Samaniego et al., 2010]. Yet current practice often implicitly assumes that the transfer functions themselves are known. In fact, these hypothesized transfer functions can rarely be measured and often remain unknown. Therefore, this contribution presents a general method for the concurrent estimation of the structure of transfer functions and their respective (global) parameters. Note that, as a consequence, an estimation of the distributed parameters of the rainfall-runoff model is also undertaken. The method combines two steps to achieve this. The first generates different possible transfer functions. The second then estimates the respective global transfer function parameters. The structural estimation of the transfer functions is based on the context free grammar concept. Chomsky first introduced context free grammars in linguistics [Chomsky, 1956]. Since then, they have been widely applied in computer science but, to the knowledge of the authors, they have so far not been used in hydrology. Therefore, the contribution gives an introduction to context free grammars and shows how they can be constructed and used for the structural inference of transfer functions. This is enabled by new methods from evolutionary computation, such as grammatical evolution [O'Neill, 2001], which make it possible to exploit the constructed grammar as a

20. Circuit realization, chaos synchronization and estimation of parameters of a hyperchaotic system with unknown parameters

Directory of Open Access Journals (Sweden)

A. Elsonbaty

2014-10-01

Full Text Available In this article, an adaptive chaos synchronization technique is implemented by an electronic circuit and applied to the hyperchaotic system proposed by Chen et al. We consider the more realistic and practical case where all the parameters of the master system are unknown. We propose and implement an electronic circuit that performs the estimation of the unknown parameters and the updating of the parameters of the slave system automatically, and hence achieves the synchronization. To the best of our knowledge, this is the first attempt to implement a circuit that estimates the values of the unknown parameters of a chaotic system and achieves synchronization. The proposed circuit has a variety of suitable real applications related to chaos encryption and cryptography. The outputs of the implemented circuits and numerical simulation results are presented to illustrate the performance of the synchronized system and the proposed circuit.

1. Parameter estimation in nonlinear models for pesticide degradation

International Nuclear Information System (INIS)

Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.

1991-01-01

A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems became feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data basis for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures that derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, the wide use of linear regression has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear, and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of capturing the dynamics of a process. Therefore, realistic models involve the evolution in time (and space). This leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
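
The point about fitting the dynamic model directly, rather than log-linearizing it, can be sketched as follows. The residue data and the simple first-order decay model C(t) = C0·exp(-k·t) are hypothetical, not taken from the paper; the rate constant is found by minimizing the nonlinear sum of squared errors with a golden-section search.

```python
import math

# Synthetic pesticide residue data (hypothetical), roughly following
# first-order decay with C0 = 100 and k = 0.3 per day.
times = [0, 1, 2, 4, 7, 14]           # days after application
conc = [100.0, 74.1, 54.9, 30.1, 12.2, 1.5]

def sse(k, c0=100.0):
    # Nonlinear least-squares objective in the original concentration scale.
    return sum((c - c0 * math.exp(-k * t)) ** 2 for t, c in zip(times, conc))

def golden_section(f, lo, hi, tol=1e-6):
    # Golden-section search for the minimizer of a unimodal 1-D function.
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2.0

k_hat = golden_section(sse, 0.01, 2.0)
half_life = math.log(2.0) / k_hat     # DT50 implied by the fitted rate
```

Unlike a regression of log C on t, this direct fit weights the observations on the original scale, so early (large-concentration) residuals are not artificially down-weighted.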

2. Estimation of common cause failure parameters with periodic tests

Energy Technology Data Exchange (ETDEWEB)

Barros, Anne [Institut Charles Delaunay - Universite de technologie de Troyes - FRE CNRS 2848, 12, rue Marie Curie - BP 2060 -10010 Troyes cedex (France)], E-mail: anne.barros@utt.fr; Grall, Antoine [Institut Charles Delaunay - Universite de technologie de Troyes - FRE CNRS 2848, 12, rue Marie Curie - BP 2060 -10010 Troyes cedex (France); Vasseur, Dominique [Electricite de France, EDF R and D - Industrial Risk Management Department 1, av. du General de Gaulle- 92141 Clamart (France)

2009-04-15

In the specific case of safety systems, CCF parameter estimators for standby components depend on the periodic test schemes. Classically, the testing schemes are either staggered (alternation of tests on redundant components) or non-staggered (all components are tested at the same time). In reality, periodic test schemes performed on safety components are more complex and combine staggered tests, when the plant is in operation, with non-staggered tests during maintenance and refueling outage periods of the installation. Moreover, the CCF parameter estimators described in the US literature are derived in a way consistent with US Technical Specifications constraints that do not apply to the French Nuclear Power Plants for staggered tests on standby components. Given these issues, the evaluation of CCF parameters from the operating feedback data available within EDF implies the development of methodologies that integrate the specificities of the testing schemes. This paper formally proposes a solution for the estimation of CCF parameters given two distinct difficulties, related respectively to a mixed testing scheme and to consistency with EDF's specific practices inducing systematic non-simultaneity of the observed failures in a staggered testing scheme.

3. Nonlinear Parameter Estimation in Microbiological Degradation Systems and Statistic Test for Common Estimation

DEFF Research Database (Denmark)

Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik

1995-01-01

Three identical microbiological experiments were carried out and analysed in order to examine the variability of the parameter estimates. The microbiological system consisted of a substrate (toluene) and a biomass (pure culture) mixed together in an aquifer medium. The degradation of the substrate...... and the growth of the biomass are described by the Monod model, consisting of two nonlinear coupled first-order differential equations. The objective of this study was to estimate the kinetic parameters in the Monod model and to test whether the parameters from the three identical experiments have the same values....... Estimation of the parameters was obtained using an iterative maximum likelihood method, and the test used was an approximate likelihood ratio test. The test showed that the three sets of parameters were identical only at a 4% alpha level....

4. PWR system simulation and parameter estimation with neural networks

International Nuclear Information System (INIS)

Akkurt, Hatice; Colak, Uener

2002-01-01

A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving the nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting the individual components to each other. The validity of the models for the individual components and the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in the training and prediction stages with the neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimates are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for the reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. Noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected

5. PWR system simulation and parameter estimation with neural networks

Energy Technology Data Exchange (ETDEWEB)

Akkurt, Hatice; Colak, Uener E-mail: uc@nuke.hacettepe.edu.tr

2002-11-01

A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving the nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting the individual components to each other. The validity of the models for the individual components and the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in the training and prediction stages with the neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimates are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for the reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. Noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected.

6. Tracking of nuclear reactor parameters via recursive non linear estimation

International Nuclear Information System (INIS)

Pages Fita, J.; Alengrin, G.; Aguilar Martin, J.; Zwingelstein, M.

1975-01-01

The usefulness of nonlinear estimation in the supervision of nuclear reactors is illustrated, both for reactivity determination and for on-line modelling aimed at detecting possible unwanted changes in operating behaviour. Reactivity estimation is dealt with using an a priori dynamical model under the hypothesis of one group of delayed neutrons (measurements were made with an ionisation chamber). The determination of the reactivity from such measurements appears as a nonlinear estimation procedure derived from a particular form of nonlinear filter. With the observed inputs being the power demand and the inside temperature, and the output being the reactivity balance, a recursive algorithm is derived for the estimation of the parameters that define the actual behaviour of the reactor. An example of the treatment of real data is given [fr]

7. The performance of simulated annealing in parameter estimation for vapor-liquid equilibrium modeling

Directory of Open Access Journals (Sweden)

A. Bonilla-Petriciolet

2007-03-01

Full Text Available In this paper we report the application and evaluation of the simulated annealing (SA) optimization method in parameter estimation for vapor-liquid equilibrium (VLE) modeling. We tested this optimization method using the classical least squares and error-in-variable approaches. The reliability and efficiency of the data-fitting procedure are also considered using different values for the algorithm parameters of the SA method. Our results indicate that this method, when properly implemented, is a robust procedure for nonlinear parameter estimation in thermodynamic models. However, in difficult problems it can still converge to local optima of the objective function.
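
A minimal sketch of SA-based least-squares parameter estimation follows, on a toy exponential model rather than a VLE model; the cooling schedule, step size, and synthetic data are hypothetical choices, and real applications (as the record notes) are sensitive to these algorithm parameters.

```python
import math
import random

# Synthetic data (hypothetical) generated from y = a * exp(b * x)
# with a = 2.0 and b = 0.5.
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]

def objective(p):
    # Classical least-squares objective in the parameters (a, b).
    a, b = p
    return sum((y - a * math.exp(b * x)) ** 2 for x, y in zip(xs, ys))

def anneal(f, x0, t0=10.0, cooling=0.995, steps=4000, step_size=0.2, seed=5):
    rng = random.Random(seed)
    x, fx, t = list(x0), f(x0), t0
    best, fbest = list(x0), f(x0)
    for _ in range(steps):
        cand = [xi + rng.gauss(0.0, step_size) for xi in x]
        fc = f(cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling                 # geometric cooling schedule
    return best, fbest

p, fp = anneal(objective, [1.0, 1.0])
```

The occasional acceptance of uphill moves at high temperature is what gives SA a chance to escape local optima that a pure descent method would get stuck in.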

8. Parameter Estimation as a Problem in Statistical Thermodynamics.

Science.gov (United States)

Earle, Keith A; Schneider, David J

2011-03-14

In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.

9. Genetic Parameter Estimates for Metabolizing Two Common Pharmaceuticals in Swine

Directory of Open Access Journals (Sweden)

Jeremy T. Howard

2018-02-01

Full Text Available The regulation of drugs used to treat livestock has received increased attention, and it is currently unknown how much of the phenotypic variation in drug metabolism is due to the genetics of an animal. Therefore, the objective of the study was to determine the amount of phenotypic variation in fenbendazole and flunixin meglumine drug metabolism due to genetics. The population consisted of crossbred female and castrated male nursery pigs (n = 198) that were sired by boars represented by four breeds. The animals were spread across nine batches. Drugs were administered intravenously and blood collected a minimum of 10 times over a 48 h period. Genetic parameters for the parent drug and metabolite concentration within each drug were estimated based on pharmacokinetic (PK) parameters or concentrations across time utilizing a random regression model. The PK parameters were estimated using a non-compartmental analysis. The PK model included fixed effects of sex and breed of sire along with random sire and batch effects. The random regression model utilized Legendre polynomials and included a fixed population concentration curve, sex, and breed of sire effects along with a random sire deviation from the population curve and batch effect. The sire effect included the intercept for all models except for the fenbendazole metabolite (i.e., intercept and slope). The mean heritability across PK parameters for the fenbendazole and flunixin meglumine parent drug (metabolite) was 0.15 (0.18) and 0.31 (0.40), respectively. For the parent drug (metabolite), the mean heritability across time was 0.27 (0.60) and 0.14 (0.44) for fenbendazole and flunixin meglumine, respectively. The errors surrounding the heritability estimates for the random regression model were smaller compared to estimates obtained from PK parameters. Across both the PK and plasma drug concentration models, a moderate heritability was estimated. The model that utilized the plasma drug

10. Genetic Parameter Estimates for Metabolizing Two Common Pharmaceuticals in Swine

Science.gov (United States)

Howard, Jeremy T.; Ashwell, Melissa S.; Baynes, Ronald E.; Brooks, James D.; Yeatts, James L.; Maltecca, Christian

2018-01-01

The regulation of drugs used to treat livestock has received increased attention, and it is currently unknown how much of the phenotypic variation in drug metabolism is due to the genetics of an animal. Therefore, the objective of the study was to determine the amount of phenotypic variation in fenbendazole and flunixin meglumine drug metabolism due to genetics. The population consisted of crossbred female and castrated male nursery pigs (n = 198) that were sired by boars represented by four breeds. The animals were spread across nine batches. Drugs were administered intravenously and blood collected a minimum of 10 times over a 48 h period. Genetic parameters for the parent drug and metabolite concentration within each drug were estimated based on pharmacokinetic (PK) parameters or concentrations across time utilizing a random regression model. The PK parameters were estimated using a non-compartmental analysis. The PK model included fixed effects of sex and breed of sire along with random sire and batch effects. The random regression model utilized Legendre polynomials and included a fixed population concentration curve, sex, and breed of sire effects along with a random sire deviation from the population curve and batch effect. The sire effect included the intercept for all models except for the fenbendazole metabolite (i.e., intercept and slope). The mean heritability across PK parameters for the fenbendazole and flunixin meglumine parent drug (metabolite) was 0.15 (0.18) and 0.31 (0.40), respectively. For the parent drug (metabolite), the mean heritability across time was 0.27 (0.60) and 0.14 (0.44) for fenbendazole and flunixin meglumine, respectively. The errors surrounding the heritability estimates for the random regression model were smaller compared to estimates obtained from PK parameters. Across both the PK and plasma drug concentration models, a moderate heritability was estimated. The model that utilized the plasma drug

11. Application of Bayesian approach to estimate average level spacing

International Nuclear Information System (INIS)

Huang Zhongfu; Zhao Zhixiang

1991-01-01

A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. A calculation for s-wave resonances has been done and a comparison with other work was carried out

12. An Approach to Quality Estimation in Model-Based Development

DEFF Research Database (Denmark)

Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

2004-01-01

We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

13. Bayesian Parameter Estimation via Filtering and Functional Approximations

KAUST Repository

Matthies, Hermann G.

2016-11-25

The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This filter acts on random variables, and while its Monte Carlo variant --- the Ensemble Kalman Filter (EnKF) --- is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.
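The EnKF variant mentioned above can be illustrated with a minimal sketch (not the authors' code): a perturbed-observation ensemble update of a scalar parameter under an assumed linear observation model `y = H*theta + noise`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar setup: observation y = H*theta + noise, H = 1.
N = 500                            # ensemble size
theta = rng.normal(0.0, 2.0, N)    # prior ensemble for the parameter
H = 1.0
R = 0.5                            # observation-noise variance
y_obs = 1.3                        # the measured value

# Kalman gain computed from ensemble statistics
P = np.var(theta, ddof=1)          # prior ensemble variance
K = P * H / (H * P * H + R)        # scalar Kalman gain

# Perturbed-observation EnKF analysis step
y_pert = y_obs + rng.normal(0.0, np.sqrt(R), N)
theta_post = theta + K * (y_pert - H * theta)

print(round(float(np.mean(theta_post)), 2))
```

The posterior ensemble mean approaches the exact Gaussian posterior mean as the ensemble grows, and the ensemble spread shrinks relative to the prior.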

14. Bayesian Parameter Estimation via Filtering and Functional Approximations

KAUST Repository

Matthies, Hermann G.; Litvinenko, Alexander; Rosic, Bojana V.; Zander, Elmar

2016-01-01

The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This filter acts on random variables, and while its Monte Carlo variant --- the Ensemble Kalman Filter (EnKF) --- is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.

15. Estimation of Medium Voltage Cable Parameters for PD Detection

DEFF Research Database (Denmark)

Villefrance, Rasmus; Holbøll, Joachim T.; Henriksen, Mogens

1998-01-01

Medium voltage cable characteristics have been determined with respect to the parameters having influence on the evaluation of results from PD-measurements on paper/oil and XLPE-cables. In particular, parameters essential for discharge quantification and location were measured. In order to relate...... and phase constants. A method to estimate this propagation constant, based on high frequency measurements, will be presented and will be applied to different cable types under different conditions. The influence of temperature and test voltage was investigated. The relevance of the results for cable...

16. Estimation of economic parameters of U.S. hydropower resources

Energy Technology Data Exchange (ETDEWEB)

Hall, Douglas G. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Hunt, Richard T. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Reeves, Kelly S. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Carroll, Greg R. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL)

2003-06-01

Tools for estimating the cost of developing, operating, and maintaining hydropower resources, in the form of regression curves, were developed based on historical plant data. Development costs that were addressed included licensing, construction, and five types of environmental mitigation. It was found that the data for each type of cost correlated well with plant capacity. A tool for estimating the annual and monthly electric generation of hydropower resources was also developed. Additional tools were developed to estimate the cost of upgrading a turbine or a generator. The development and operation and maintenance cost estimating tools, and the generation estimating tool, were applied to 2,155 U.S. hydropower sites representing a total potential capacity of 43,036 MW. The sites included totally undeveloped sites, dams without a hydroelectric plant, and hydroelectric plants that could be expanded to achieve greater capacity. Site characteristics and estimated costs and generation for each site were assembled in a database in Excel format that is also included within the EERE Library under the title, “Estimation of Economic Parameters of U.S. Hydropower Resources - INL Hydropower Resource Economics Database.”
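The cost-versus-capacity regression curves described above can be sketched as a power-law fit in log-log space; all numbers below are made up for illustration, not INL data.

```python
import numpy as np

# Hypothetical data: plant capacity (MW) vs. development cost (M$).
# Assumed functional form: cost = a * capacity**b, fitted in log-log space.
capacity = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 250.0])
cost = np.array([12.0, 20.0, 41.0, 70.0, 118.0, 240.0])

b, log_a = np.polyfit(np.log(capacity), np.log(cost), 1)
a = np.exp(log_a)

def estimate_cost(mw):
    """Estimated development cost (M$) for a plant of `mw` megawatts."""
    return a * mw ** b

print(f"cost(75 MW) ~ {estimate_cost(75.0):.1f} M$")
```

An exponent below 1 reflects the economy of scale typically reported for hydropower development costs.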

17. A Consistent Methodology Based Parameter Estimation for a Lactic Acid Bacteria Fermentation Model

DEFF Research Database (Denmark)

Spann, Robert; Roca, Christophe; Kold, David

2017-01-01

Lactic acid bacteria are used in many industrial applications, e.g. as starter cultures in the dairy industry or as probiotics, and research on their cell production is much needed. A first-principles kinetic model was developed to describe and understand the biological, physical, and chemical...... mechanisms in a lactic acid bacteria fermentation. We present here a consistent methodology-based approach to parameter estimation for a lactic acid fermentation. At the outset, only an initial knowledge-based guess of the parameters was available, and an initial estimation of the complete set...... of parameters was performed in order to get a good model fit to the data. However, not all parameters are identifiable with the given data set and model structure. Sensitivity, identifiability, and uncertainty analyses were completed and a relevant identifiable subset of parameters was determined for a new...

18. A Bayesian consistent dual ensemble Kalman filter for state-parameter estimation in subsurface hydrology

KAUST Repository

Ait-El-Fquih, Boujemaa; El Gharamti, Mohamad; Hoteit, Ibrahim

2016-01-01

Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem from which we derive a new dual-type EnKF, the dual EnKF(OSA). Compared with the standard dual EnKF, it imposes a new update step to the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKF(OSA), and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25% more accurate state and parameter estimates than the joint and dual approaches.

19. A Bayesian consistent dual ensemble Kalman filter for state-parameter estimation in subsurface hydrology

KAUST Repository

Ait-El-Fquih, Boujemaa

2016-08-12

Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem from which we derive a new dual-type EnKF, the dual EnKF(OSA). Compared with the standard dual EnKF, it imposes a new update step to the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKF(OSA), and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25% more accurate state and parameter estimates than the joint and dual approaches.

20. Probabilistic estimation of the constitutive parameters of polymers

Directory of Open Access Journals (Sweden)

Siviour C.R.

2012-08-01

The Mulliken-Boyce constitutive model predicts the dynamic response of crystalline polymers as a function of strain rate and temperature. This paper describes the Mulliken-Boyce model-based estimation of the constitutive parameters in a Bayesian probabilistic framework. Experimental data from dynamic mechanical analysis and dynamic compression of PVC samples over a wide range of strain rates are analyzed. Both experimental uncertainty and natural variations in the material properties are simultaneously considered as independent and joint distributions; the posterior probability distributions are shown and compared with prior estimates of the material constitutive parameters. Additionally, particular statistical distributions are shown to be effective at capturing the rate and temperature dependence of internal phase transitions in DMA data.
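The full Mulliken-Boyce calibration is beyond a snippet, but the Bayesian posterior construction it relies on can be sketched for a single constitutive parameter with a toy linear stress-strain model and an assumed Gaussian noise level (all values illustrative).

```python
import numpy as np

# Toy stand-in for Bayesian calibration of one constitutive parameter k.
# Assumed forward model: stress = k * strain, with Gaussian noise sigma.
rng = np.random.default_rng(1)
k_true, sigma = 2.5, 0.2
strain = np.linspace(0.1, 1.0, 10)
stress = k_true * strain + rng.normal(0.0, sigma, strain.size)

# Uniform prior on a grid; posterior is proportional to the likelihood.
k_grid = np.linspace(1.0, 4.0, 601)
loglik = np.array([
    -0.5 * np.sum((stress - k * strain) ** 2) / sigma**2 for k in k_grid
])
post = np.exp(loglik - loglik.max())
post /= post.sum()                     # normalize on the grid

k_mean = float(np.sum(k_grid * post))  # posterior mean
print(round(k_mean, 2))
```

The grid approach is only practical in one or two dimensions; higher-dimensional constitutive models require MCMC-style samplers.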

1. Propagation channel characterization, parameter estimation, and modeling for wireless communications

CERN Document Server

Yin, Xuefeng

2016-01-01

Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...

2. PARAMETER ESTIMATION OF THE HYBRID CENSORED LOMAX DISTRIBUTION

Directory of Open Access Journals (Sweden)

Samir Kamel Ashour

2010-12-01

Survival analysis is used in various fields for analyzing data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time-to-event analysis. One of the difficulties which arise in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be exactly measured but partial information is available. Different circumstances can produce different types of censoring. The two most common censoring schemes used in life testing experiments are the Type-I and Type-II censoring schemes. The hybrid censoring scheme is a mixture of the Type-I and Type-II schemes. In this paper we consider the estimation of the parameters of the Lomax distribution based on hybrid censored data. The parameters are estimated by the maximum likelihood and Bayesian methods. The Fisher information matrix has been obtained and can be used for constructing asymptotic confidence intervals.
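The maximum-likelihood side of this can be sketched for the simpler Type-I censored case (not the paper's full hybrid scheme): exact lifetimes contribute the Lomax density, censored ones contribute the survival function.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lomax

# Simulated Type-I right-censored Lomax sample (illustrative, not the
# paper's hybrid scheme): values above t_c are censored at t_c.
rng = np.random.default_rng(2)
alpha_true, lam_true = 3.0, 2.0
x = lomax.rvs(c=alpha_true, scale=lam_true, size=400, random_state=rng)
t_c = 2.0
obs = np.minimum(x, t_c)
censored = x > t_c

def neg_loglik(params):
    a, lam = params
    if a <= 0 or lam <= 0:
        return np.inf
    # density term for exact lifetimes, survival term for censored ones
    ll_obs = np.log(a / lam) - (a + 1) * np.log1p(obs[~censored] / lam)
    ll_cen = -a * np.log1p(t_c / lam)        # log S(t_c)
    return -(ll_obs.sum() + censored.sum() * ll_cen)

res = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")
a_hat, lam_hat = res.x
print(round(a_hat, 2), round(lam_hat, 2))
```

The shape and scale estimates are strongly correlated for the Lomax family, so individual estimates can wander even when the fit is good; the Fisher information matrix mentioned in the abstract quantifies exactly this.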

3. A Bayesian framework for parameter estimation in dynamical models.

Directory of Open Access Journals (Sweden)

Flávio Codeço Coelho

Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
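The calibration task addressed here can be sketched in its simplest deterministic form: fitting the transmission and recovery rates of an SIR model to synthetic prevalence data by least squares (the paper itself uses a full Bayesian treatment; the parameter names `beta`, `gamma` and all values below are illustrative).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def sir_rhs(t, y, beta, gamma):
    s, i = y
    return [-beta * s * i, beta * s * i - gamma * i]

t_eval = np.linspace(0.0, 30.0, 31)

def simulate(beta, gamma):
    sol = solve_ivp(sir_rhs, (0.0, 30.0), [0.99, 0.01],
                    t_eval=t_eval, args=(beta, gamma), rtol=1e-8)
    return sol.y[1]                 # infectious fraction over time

# Synthetic "observed" prevalence with small Gaussian noise
rng = np.random.default_rng(3)
i_data = simulate(0.5, 0.2) + rng.normal(0.0, 0.002, t_eval.size)

fit = least_squares(lambda p: simulate(*p) - i_data, x0=[0.3, 0.1],
                    bounds=([0.0, 0.0], [2.0, 1.0]))
beta_hat, gamma_hat = fit.x
print(round(beta_hat, 2), round(gamma_hat, 2))
```

A Bayesian framework would replace the point estimate with a posterior over `(beta, gamma)`, which is what allows the uncertainty propagation the abstract emphasizes.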

4. CosmoSIS: A System for MC Parameter Estimation

Energy Technology Data Exchange (ETDEWEB)

Zuntz, Joe [Manchester U.; Paterno, Marc [Fermilab; Jennings, Elise [Chicago U., EFI; Rudd, Douglas [U. Chicago; Manzotti, Alessandro [Chicago U., Astron. Astrophys. Ctr.; Dodelson, Scott [Chicago U., Astron. Astrophys. Ctr.; Bridle, Sarah [Manchester U.; Sehrish, Saba [Fermilab; Kowalkowski, James [Fermilab

2015-01-01

Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including camb, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis.

5. On Using Exponential Parameter Estimators with an Adaptive Controller

Science.gov (United States)

Patre, Parag; Joshi, Suresh M.

2011-01-01

Typical adaptive controllers are restricted to using a specific update law to generate parameter estimates. This paper investigates the possibility of using any exponential parameter estimator with an adaptive controller such that the system tracks a desired trajectory. The goal is to provide flexibility in choosing any update law suitable for a given application. The development relies on a previously developed concept of controller/update law modularity in the adaptive control literature, and the use of a converse Lyapunov-like theorem. Stability analysis is presented to derive gain conditions under which this is possible, and inferences are made about the tracking error performance. The development is based on a class of Euler-Lagrange systems that are used to model various engineering systems including space robots and manipulators.

6. Basic Earth's Parameters as estimated from VLBI observations

Directory of Open Access Journals (Sweden)

Ping Zhu

2017-11-01

The global Very Long Baseline Interferometry (VLBI) observing campaign for measuring the Earth rotation parameters was launched around the 1970s. Since then the precision of the measurements has been continuously improving by taking into account various instrumental and environmental effects. The MHB2000 nutation model was introduced in 2002; it is constructed from a revised nutation series derived from 20 years of VLBI observations (1980–1999). In this work, we first estimated the amplitudes of all nutation terms from the IERS-EOP-C04 VLBI global solutions w.r.t. IAU1980, then we further inferred the BEPs (Basic Earth's Parameters) by fitting the major nutation terms. Meanwhile, the BEPs were obtained from the same nutation time series using Bayesian Inversion (BI). The corrections to the precession rate and the estimated BEPs are in agreement, independent of which method has been applied.
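Because nutation frequencies are known, estimating a term's amplitude reduces to a linear least-squares fit of its sine and cosine components. A minimal sketch with a synthetic time series (period and amplitudes chosen arbitrarily, not actual nutation values):

```python
import numpy as np

# Estimate the amplitude of a periodic term of known period from a
# time series: sin/cos amplitudes enter the model linearly.
rng = np.random.default_rng(4)
t = np.arange(0.0, 7300.0, 10.0)        # days, ~20 years of samples
period = 433.0                          # assumed period in days
w = 2.0 * np.pi / period
signal = 3.0 * np.sin(w * t) + 1.5 * np.cos(w * t)
data = signal + rng.normal(0.0, 0.5, t.size)

# Design matrix: in-phase, out-of-phase, and constant offset columns
A = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, data, rcond=None)
amp = float(np.hypot(coef[0], coef[1]))  # total amplitude
print(round(amp, 2))
```

Real VLBI analyses fit many such terms simultaneously in one global adjustment, but each term enters the design matrix in exactly this way.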

7. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.

Science.gov (United States)

Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf

2010-05-25

Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. In this work we present a set-based framework that allows one to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach allows one to conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation, and shows that the proposed method allows one to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.
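The set-based idea of "guaranteed outer estimates" can be illustrated crudely by brute force: keep every parameter value whose model output stays inside the measurement intervals, and report a bounding interval for the consistent set. The decay model, intervals, and grid below are all assumptions for illustration; the paper's actual method uses exact infeasibility certificates, not gridding.

```python
import numpy as np

def model(k, t):
    return np.exp(-k * t)             # first-order decay with rate k

t_meas = np.array([1.0, 2.0, 3.0])
y_lo = np.array([0.55, 0.30, 0.15])   # interval measurements [lo, hi]
y_hi = np.array([0.70, 0.45, 0.30])

# Brute-force feasibility check over a parameter grid
k_grid = np.linspace(0.0, 2.0, 2001)
feasible = np.array([
    np.all((model(k, t_meas) >= y_lo) & (model(k, t_meas) <= y_hi))
    for k in k_grid
])

k_consistent = k_grid[feasible]       # outer bounds of the consistent set
print(round(float(k_consistent.min()), 3), round(float(k_consistent.max()), 3))
```

If `feasible` were empty, the model hypothesis would be invalidated by the data, which is the discrimination mechanism the abstract describes.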

8. PARAMETER ESTIMATION AND MODEL SELECTION FOR INDOOR ENVIRONMENTS BASED ON SPARSE OBSERVATIONS

Directory of Open Access Journals (Sweden)

Y. Dehbi

2017-09-01

This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
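The AIC/BIC model selection step used above can be sketched on a toy regression problem (Gaussian errors assumed; the data and candidate models are illustrative, not the paper's floorplan models).

```python
import numpy as np

# AIC/BIC model selection: linear vs. quadratic fit of synthetic data.
rng = np.random.default_rng(5)
x = np.linspace(0.0, 5.0, 40)
y = 1.0 + 0.2 * x + 0.5 * x**2 + rng.normal(0.0, 0.3, x.size)

def info_criteria(degree):
    n, k = x.size, degree + 1          # k fitted coefficients
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    rss = float(np.sum(resid ** 2))
    # Gaussian log-likelihood at the MLE of the noise variance rss/n
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik   # AIC, BIC

aic_lin, bic_lin = info_criteria(1)
aic_quad, bic_quad = info_criteria(2)
print("quadratic preferred:", aic_quad < aic_lin and bic_quad < bic_lin)
```

Both criteria trade goodness of fit against the number of parameters; BIC penalizes extra parameters more heavily as the sample grows.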

9. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

Science.gov (United States)

Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

2017-09-01

This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

10. Estimation of parameters of interior permanent magnet synchronous motors

International Nuclear Information System (INIS)

Hwang, C.C.; Chang, S.M.; Pan, C.T.; Chang, T.Y.

2002-01-01

This paper presents a magnetic circuit model for the estimation of machine parameters of an interior permanent magnet synchronous machine. It extends the earlier work of Hwang and Cho, which focused mainly on the magnetic aspects of motor design. The proposed model is used to calculate the EMF and the d- and q-axis reactances. These calculations are compared with those from finite element analysis and measurement, with good agreement

11. Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model

DEFF Research Database (Denmark)

Åberg, Andreas; Widd, Anders; Abildskov, Jens

2016-01-01

be used directly for accurate full-scale transient simulations. The model was validated against full-scale data with an engine following the European Transient Cycle. The validation showed that the predictive capability for nitrogen oxides (NOx) was satisfactory. After re-estimation of the adsorption...... and desorption parameters with full-scale transient data, the fit for both NOx and NH3-slip was satisfactory....

12. Fundamental limits of radio interferometers: calibration and source parameter estimation

OpenAIRE

Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J.

2012-01-01

We use information theory to derive fundamental limits on the capacity to calibrate next-generation radio interferometers, and measure parameters of point sources for instrument calibration, point source subtraction, and data deconvolution. We demonstrate the implications of these fundamental limits, with particular reference to estimation of the 21cm Epoch of Reionization power spectrum with next-generation low-frequency instruments (e.g., the Murchison Widefield Array -- MWA, Precision Arra...

13. Robust estimation of track parameters in wire chambers

International Nuclear Information System (INIS)

Bogdanova, N.B.; Bourilkov, D.T.

1988-01-01

The aim of this paper is to compare numerically the possibilities of the least squares fit (LSF) and robust methods, using modelled and real track data, to determine the linear regression parameters of charged particles in wire chambers. It is shown that the Tukey robust estimate is superior to more standard methods (versions of the LSF). The efficiency of the method is illustrated by tables and figures for some important physical characteristics
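The advantage of the Tukey estimate over the plain LSF can be sketched with a straight-line fit via iteratively reweighted least squares (IRLS) on data with a few large outliers, mimicking spurious wire-chamber hits (the data and tuning constant `c = 4.685` are standard but illustrative).

```python
import numpy as np

# Track-like data: a straight line plus noise, with a few big outliers.
rng = np.random.default_rng(6)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.1, x.size)
y[::10] += 5.0                     # every 10th point is an outlier

def tukey_fit(x, y, c=4.685, n_iter=20):
    """Line fit with Tukey's biweight via IRLS."""
    A = np.column_stack([np.ones_like(x), x])
    w = np.ones_like(x)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        r = y - A @ coef
        s = np.median(np.abs(r)) / 0.6745        # robust scale from MAD
        u = r / (c * s)
        w = np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)
    return coef

b0_ls, b1_ls = np.polyfit(x, y, 1)[::-1]   # ordinary LSF for comparison
b0_rb, b1_rb = tukey_fit(x, y)
print(f"LSF: {b0_ls:.2f} + {b1_ls:.2f}x, Tukey: {b0_rb:.2f} + {b1_rb:.2f}x")
```

Points whose scaled residual exceeds the cutoff receive zero weight, so the outliers stop influencing the fit after the first few iterations, while the LSF intercept stays visibly biased.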

14. Factorized Estimation of Partially Shared Parameters in Diffusion Networks

Czech Academy of Sciences Publication Activity Database

2017-01-01

Roč. 65, č. 19 (2017), s. 5153-5163 ISSN 1053-587X R&D Projects: GA ČR(CZ) GP14-06678P; GA ČR GA16-09848S Institutional support: RVO:67985556 Keywords : Diffusion network * Diffusion estimation * Heterogeneous parameters * Multitask networks Subject RIV: BD - Theory of Information OBOR OECD: Applied mathematics Impact factor: 4.300, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/dedecius-0477044.pdf

15. Estimation of parameters of interior permanent magnet synchronous motors

CERN Document Server

Hwang, C C; Pan, C T; Chang, T Y

2002-01-01

This paper presents a magnetic circuit model for the estimation of machine parameters of an interior permanent magnet synchronous machine. It extends the earlier work of Hwang and Cho, which focused mainly on the magnetic aspects of motor design. The proposed model is used to calculate the EMF and the d- and q-axis reactances. These calculations are compared with those from finite element analysis and measurement, with good agreement.

16. CTER-rapid estimation of CTF parameters with error assessment.

Science.gov (United States)

Penczek, Pawel A; Fang, Jia; Li, Xueming; Cheng, Yifan; Loerke, Justus; Spahn, Christian M T

2014-05-01

In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user it is necessary to provide an assessment of the errors of fitted parameters values. In this work we introduce CTER, a CTF parameters estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. Copyright © 2014 Elsevier B.V. All rights reserved.
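The bootstrap error assessment CTER uses can be sketched on a toy fit: resample the data with replacement, refit the parameter each time, and report the spread of the refitted values as its standard deviation (the slope model below is illustrative, not a CTF fit).

```python
import numpy as np

# Bootstrap standard deviation of a fitted parameter (here: a slope).
rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, 200)
data = 1.8 * x + rng.normal(0.0, 0.4, x.size)   # true slope 1.8

def fit_slope(xs, ys):
    return float(np.sum(xs * ys) / np.sum(xs * xs))

boots = []
for _ in range(1000):
    idx = rng.integers(0, x.size, x.size)        # resample with replacement
    boots.append(fit_slope(x[idx], data[idx]))

slope = fit_slope(x, data)
print(f"slope = {slope:.3f} +/- {np.std(boots):.3f}")
```

The same recipe applies to any estimator: the only requirement is that refitting is cheap enough to repeat hundreds of times, which is exactly why CTER's computational efficiency matters.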

17. Estimation of solid earth tidal parameters and FCN with VLBI

International Nuclear Information System (INIS)

Krásná, H.

2012-01-01

Measurements of the space-geodetic technique VLBI (Very Long Baseline Interferometry) are influenced by a variety of processes which have to be modelled and put as a priori information into the analysis of the space-geodetic data. The increasing accuracy of the VLBI measurements allows access to these parameters and provides possibilities to validate them directly from the measured data. The gravitational attraction of the Moon and the Sun causes deformation of the Earth's surface which can reach several decimetres in the radial direction during a day. The displacement is a function of the so-called Love and Shida numbers. Due to the present accuracy of the VLBI measurements the parameters have to be specified as complex numbers, where the imaginary parts describe the anelasticity of the Earth's mantle. Moreover, it is necessary to distinguish between the single tides within the various frequency bands. In this thesis, complex Love and Shida numbers of twelve diurnal and five long-period tides included in the solid Earth tidal displacement modelling are estimated directly from 27 years of VLBI measurements (1984.0 - 2011.0). The period of the Free Core Nutation (FCN) is also estimated; it shows up in the frequency-dependent solid Earth tidal displacement as well as in the nutation model describing the motion of the Earth's axis in space. The FCN period in both models is treated as a single parameter and is estimated in a rigorous global adjustment of the VLBI data. The obtained value of -431.18 ± 0.10 sidereal days differs slightly from the conventional value of -431.39 sidereal days given in the IERS Conventions 2010. An empirical FCN model based on variable amplitude and phase is determined, whose parameters are estimated in yearly steps directly within VLBI global solutions. (author)

18. Distributed Dynamic State Estimator, Generator Parameter Estimation and Stability Monitoring Demonstration

Energy Technology Data Exchange (ETDEWEB)

Meliopoulos, Sakis [Georgia Inst. of Technology, Atlanta, GA (United States); Cokkinides, George [Georgia Inst. of Technology, Atlanta, GA (United States); Fardanesh, Bruce [New York Power Authority, NY (United States); Hedrington, Clinton [U.S. Virgin Islands Water and Power Authority (WAPA), St. Croix (U.S. Virgin Islands)

2013-12-31

This is the final report for this project, which was performed in the period October 1, 2009 to June 30, 2013. In this project, a fully distributed high-fidelity dynamic state estimator (DSE) that continuously tracks the real-time dynamic model of a wide-area system with update rates better than 60 times per second was achieved. The proposed technology is based on GPS-synchronized measurements but also utilizes data from all available Intelligent Electronic Devices in the system (numerical relays, digital fault recorders, digital meters, etc.). The distributed state estimator provides the real-time model of the system, not only the voltage phasors. The proposed system provides the infrastructure for a variety of applications, including two very important ones: (a) high-fidelity estimation of generating unit parameters and (b) energy-function-based transient stability monitoring of a wide-area electric power system with predictive capability. Also, the dynamic distributed state estimation results are stored (the storage scheme includes data and the coincidental model), enabling an automatic reconstruction and “play back” of a system-wide disturbance. This approach enables complete play-back capability with fidelity equal to that of real time, with the advantage of “playing back” at a user-selected speed. The proposed technologies were developed and tested in the lab during the first 18 months of the project and then demonstrated on two actual systems, the USVI Water and Power Administration system and the New York Power Authority’s Blenheim-Gilboa pumped hydro plant, in the last 18 months of the project. The four main thrusts of this project, mentioned above, are extremely important to the industry. The DSE with the achieved update rates (more than 60 times per second) provides a superior solution to the “grid visibility” question. The generator parameter identification method fills an important and practical need of the industry. The “energy function” based

19. Parameter estimation of component reliability models in PSA model of Krsko NPP

International Nuclear Information System (INIS)

Jordan Cizelj, R.; Vrbanic, I.

2001-01-01

This paper presents the uncertainty analysis of component reliability models for independent failures, along with the present approach to parameter estimation of component reliability models at NPP Krsko. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with some predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. Bayesian estimation with numerical estimation of the posterior proved to be the most appropriate uncertainty analysis; the posterior can be approximated with an appropriate probability distribution, in this paper the lognormal distribution. (author)

20. Comparison of sampling techniques for Bayesian parameter estimation

Science.gov (United States)

Allison, Rupert; Dunkley, Joanna

2014-02-01

The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hasting sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
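
The relative behaviour of these samplers is easiest to see in code. Below is a minimal Metropolis-Hastings sampler on a toy one-dimensional Gaussian likelihood, in the spirit of the toy models used in this record; the proposal width `step` and chain length are illustrative choices, and tuning `step` is exactly the proposal-calibration issue these MCMC records mention:

```python
import math
import random

def log_posterior(x, mu=0.0, sigma=1.0):
    # Toy-model Gaussian log-density (unnormalized), standing in for a
    # real posterior over one cosmological parameter.
    return -0.5 * ((x - mu) / sigma) ** 2

def metropolis_hastings(n_steps, step=0.8, seed=1):
    random.seed(seed)
    x = 0.0
    chain = []
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step)  # symmetric random-walk proposal
        # Accept with probability min(1, p(proposal) / p(x)).
        if math.log(random.random()) < log_posterior(proposal) - log_posterior(x):
            x = proposal
        chain.append(x)  # rejected proposals repeat the current state
    return chain

chain = metropolis_hastings(20000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

Nested sampling and affine-invariant ensemble MCMC replace this single tunable proposal with, respectively, likelihood-constrained prior sampling and stretch moves across an ensemble of walkers, which is the origin of the tuning and parallelization trade-offs discussed above.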

1. Automatic estimation of elasticity parameters in breast tissue

Science.gov (United States)

Skerl, Katrin; Cochran, Sandy; Evans, Andrew

2014-03-01

Shear wave elastography (SWE), a novel ultrasound imaging technique, can provide unique information about cancerous tissue. To estimate elasticity parameters, a region of interest (ROI) is manually positioned over the stiffest part of the shear wave image (SWI). The aim of this work is to estimate the elasticity parameters, i.e. mean elasticity, maximal elasticity and standard deviation, fully automatically. Ultrasonic SWI of a breast elastography phantom and breast tissue in vivo were acquired using the Aixplorer system (SuperSonic Imagine, Aix-en-Provence, France). First, the SWI within the ultrasonic B-mode image was detected using MATLAB, then the elasticity values were extracted. The ROI was automatically positioned over the stiffest part of the SWI and the elasticity parameters were calculated. Finally, all values were saved in a spreadsheet which also contains the patient's study ID. This spreadsheet is easily available to physicians and clinical staff for further evaluation, which increases efficiency. This algorithm simplifies the handling, especially for the performance and evaluation of clinical trials. The SWE processing method allows physicians easy access to the elasticity parameters of examinations from their own and other institutions. This reduces clinical time and effort and simplifies evaluation of data in clinical trials. Furthermore, reproducibility will be improved.
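
The automatic ROI placement described above can be sketched as an exhaustive sliding-window search for the stiffest region of the elasticity map. This is a minimal illustration on a synthetic array; the SWI detection step, the Aixplorer data format and the clinical ROI size are all omitted or assumed:

```python
import numpy as np

def place_roi(elasticity_map, roi_size=5):
    """Position a square ROI over the stiffest part of the image and
    return its top-left corner plus mean/max/std elasticity inside it."""
    h, w = elasticity_map.shape
    best, best_pos = -np.inf, (0, 0)
    for i in range(h - roi_size + 1):
        for j in range(w - roi_size + 1):
            window = elasticity_map[i:i + roi_size, j:j + roi_size]
            m = window.mean()
            if m > best:  # keep the window with the highest mean stiffness
                best, best_pos = m, (i, j)
    i, j = best_pos
    roi = elasticity_map[i:i + roi_size, j:j + roi_size]
    return best_pos, roi.mean(), roi.max(), roi.std()

# Synthetic elasticity map (kPa): soft background with one stiff inclusion.
img = np.full((40, 40), 10.0)
img[20:28, 12:20] = 80.0
pos, mean_e, max_e, std_e = place_roi(img)
```

For a real SWI the search would run only over the valid elasticity region, but the principle, maximizing mean stiffness over candidate windows, is the same.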

2. Rapid estimation of high-parameter auditory-filter shapes

Science.gov (United States)

Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.

2014-01-01

A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials. PMID:25324086

3. Estimating demographic parameters from large-scale population genomic data using Approximate Bayesian Computation

Directory of Open Access Journals (Sweden)

Li Sen

2012-03-01

Full Text Available Abstract Background The Approximate Bayesian Computation (ABC) approach has been used to infer demographic parameters for numerous species, including humans. However, most applications of ABC still use limited amounts of data, from a small number of loci, compared to the large amount of genome-wide population-genetic data which have become available in the last few years. Results We evaluated the performance of the ABC approach for three 'population divergence' models - similar to the 'isolation with migration' model - when the data consist of several hundred thousand SNPs typed for multiple individuals, by simulating data from known demographic models. The ABC approach was used to infer demographic parameters of interest, and we compared the inferred values to the true parameter values that were used to generate hypothetical "observed" data. For all three case models, the ABC approach inferred most demographic parameters quite well with narrow credible intervals, for example, population divergence times and past population sizes, but some parameters were more difficult to infer, such as population sizes at present and migration rates. We compared the ability of different summary statistics to infer demographic parameters, including haplotype and LD based statistics, and found that the accuracy of the parameter estimates can be improved by combining summary statistics that capture different parts of information in the data. Furthermore, our results suggest that poor choices of prior distributions can in some circumstances be detected using ABC. Finally, increasing the amount of data beyond some hundred loci will substantially improve the accuracy of many parameter estimates using ABC. Conclusions We conclude that the ABC approach can accommodate realistic genome-wide population genetic data, which may be difficult to analyze with full likelihood approaches, and that the ABC can provide accurate and precise inference of demographic parameters from
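
The core of the ABC approach can be sketched as rejection sampling: draw parameters from the prior, simulate data, and keep draws whose summary statistics land close to the observed ones. The sketch below estimates a Gaussian mean rather than coalescent demographic parameters, so the simulator and summary statistic are stand-ins for the population-genetic ones:

```python
import random

def simulate(theta, rng, n=200):
    # Stand-in for a demographic simulator: n draws from N(theta, 1).
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def summary(data):
    return sum(data) / len(data)  # summary statistic: the sample mean

def abc_rejection(observed_summary, n_draws=20000, tol=0.05, seed=2):
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)       # draw from a flat prior
        s = summary(simulate(theta, rng))    # simulate and summarize
        if abs(s - observed_summary) < tol:  # accept if close to the data
            accepted.append(theta)
    return accepted  # samples from the approximate posterior

posterior = abc_rejection(observed_summary=1.3)
post_mean = sum(posterior) / len(posterior)
```

The choice of `summary` is exactly the issue studied in this record: combining statistics that capture complementary parts of the data (e.g. haplotype- and LD-based statistics) sharpens the approximate posterior.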

4. Basic MR sequence parameters systematically bias automated brain volume estimation

International Nuclear Information System (INIS)

Haller, Sven; Falkovskiy, Pavel; Roche, Alexis; Marechal, Benedicte; Meuli, Reto; Thiran, Jean-Philippe; Krueger, Gunnar; Lovblad, Karl-Olof; Kober, Tobias

2016-01-01

Automated brain MRI morphometry, including hippocampal volumetry for Alzheimer disease, is increasingly recognized as a biomarker. Consequently, a rapidly increasing number of software tools have become available. We tested whether modifications of simple MR protocol parameters typically used in clinical routine systematically bias automated brain MRI segmentation results. The study was approved by the local ethical committee and included 20 consecutive patients (13 females, mean age 75.8 ± 13.8 years) undergoing clinical brain MRI at 1.5 T for workup of cognitive decline. We compared three 3D T1 magnetization prepared rapid gradient echo (MPRAGE) sequences with the following parameter settings: ADNI-2: 1.2 mm iso-voxel, no image filtering; LOCAL-: 1.0 mm iso-voxel, no image filtering; LOCAL+: 1.0 mm iso-voxel with image edge enhancement. Brain segmentation was performed by two different and established analysis tools, FreeSurfer and MorphoBox, using standard parameters. Spatial resolution (1.0 versus 1.2 mm iso-voxel) and modification in contrast resulted in relative estimated volume differences of up to 4.28 % (p < 0.001) in cortical gray matter and 4.16 % (p < 0.01) in hippocampus. Image data filtering resulted in estimated volume differences of up to 5.48 % (p < 0.05) in cortical gray matter. A simple change of MR parameters, notably spatial resolution, contrast, and filtering, may systematically bias results of automated brain MRI morphometry by up to 4-5 %. This is in the same range as early disease-related brain volume alterations, for example, in Alzheimer disease. Automated brain segmentation software packages should therefore require strict MR parameter selection or include compensatory algorithms to avoid MR parameter-related bias of brain morphometry results. (orig.)

5. Impact of relativistic effects on cosmological parameter estimation

Science.gov (United States)

Lorenz, Christiane S.; Alonso, David; Ferreira, Pedro G.

2018-01-01

Future surveys will access large volumes of space and hence very long wavelength fluctuations of the matter density and gravitational field. It has been argued that the set of secondary effects that affect the galaxy distribution, relativistic in nature, will bring new, complementary cosmological constraints. We study this claim in detail by focusing on a subset of wide-area future surveys: Stage-4 cosmic microwave background experiments and photometric redshift surveys. In particular, we look at the magnification lensing contribution to galaxy clustering and general-relativistic corrections to all observables. We quantify the amount of information encoded in these effects in terms of the tightening of the final cosmological constraints as well as the potential bias in inferred parameters associated with neglecting them. We do so for a wide range of cosmological parameters, covering neutrino masses, standard dark-energy parametrizations and scalar-tensor gravity theories. Our results show that, while the effect of lensing magnification to number counts does not contain a significant amount of information when galaxy clustering is combined with cosmic shear measurements, this contribution does play a significant role in biasing estimates on a host of parameter families if unaccounted for. Since the amplitude of the magnification term is controlled by the slope of the source number counts with apparent magnitude, s (z ), we also estimate the accuracy to which this quantity must be known to avoid systematic parameter biases, finding that future surveys will need to determine s (z ) to the ˜5 %- 10 % level. On the contrary, large-scale general-relativistic corrections are irrelevant both in terms of information content and parameter bias for most cosmological parameters but significant for the level of primordial non-Gaussianity.

6. Basic MR sequence parameters systematically bias automated brain volume estimation

Energy Technology Data Exchange (ETDEWEB)

Haller, Sven [University of Geneva, Faculty of Medicine, Geneva (Switzerland); Affidea Centre de Diagnostique Radiologique de Carouge CDRC, Geneva (Switzerland); Falkovskiy, Pavel; Roche, Alexis; Marechal, Benedicte [Siemens Healthcare HC CEMEA SUI DI BM PI, Advanced Clinical Imaging Technology, Lausanne (Switzerland); University Hospital (CHUV), Department of Radiology, Lausanne (Switzerland); Meuli, Reto [University Hospital (CHUV), Department of Radiology, Lausanne (Switzerland); Thiran, Jean-Philippe [LTS5, Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland); Krueger, Gunnar [Siemens Medical Solutions USA, Inc., Boston, MA (United States); Lovblad, Karl-Olof [University of Geneva, Faculty of Medicine, Geneva (Switzerland); University Hospitals of Geneva, Geneva (Switzerland); Kober, Tobias [Siemens Healthcare HC CEMEA SUI DI BM PI, Advanced Clinical Imaging Technology, Lausanne (Switzerland); LTS5, Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland)

2016-11-15

Automated brain MRI morphometry, including hippocampal volumetry for Alzheimer disease, is increasingly recognized as a biomarker. Consequently, a rapidly increasing number of software tools have become available. We tested whether modifications of simple MR protocol parameters typically used in clinical routine systematically bias automated brain MRI segmentation results. The study was approved by the local ethical committee and included 20 consecutive patients (13 females, mean age 75.8 ± 13.8 years) undergoing clinical brain MRI at 1.5 T for workup of cognitive decline. We compared three 3D T1 magnetization prepared rapid gradient echo (MPRAGE) sequences with the following parameter settings: ADNI-2: 1.2 mm iso-voxel, no image filtering; LOCAL-: 1.0 mm iso-voxel, no image filtering; LOCAL+: 1.0 mm iso-voxel with image edge enhancement. Brain segmentation was performed by two different and established analysis tools, FreeSurfer and MorphoBox, using standard parameters. Spatial resolution (1.0 versus 1.2 mm iso-voxel) and modification in contrast resulted in relative estimated volume differences of up to 4.28 % (p < 0.001) in cortical gray matter and 4.16 % (p < 0.01) in hippocampus. Image data filtering resulted in estimated volume differences of up to 5.48 % (p < 0.05) in cortical gray matter. A simple change of MR parameters, notably spatial resolution, contrast, and filtering, may systematically bias results of automated brain MRI morphometry by up to 4-5 %. This is in the same range as early disease-related brain volume alterations, for example, in Alzheimer disease. Automated brain segmentation software packages should therefore require strict MR parameter selection or include compensatory algorithms to avoid MR parameter-related bias of brain morphometry results. (orig.)

7. Estimating demographic parameters using a combination of known-fate and open N-mixture models.

Science.gov (United States)

Schmidt, Joshua H; Johnson, Devin S; Lindberg, Mark S; Adams, Layne G

2015-10-01

Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability and recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than their unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark-resight data sets. We provide implementations in both the BUGS language and an R package.

8. Statistical approach for uncertainty quantification of experimental modal model parameters

DEFF Research Database (Denmark)

Luczak, M.; Peeters, B.; Kahsin, M.

2014-01-01

Composite materials are widely used in manufacture of aerospace and wind energy structural components. These load carrying structures are subjected to dynamic time-varying loading conditions. Robust structural dynamics identification procedures impose tight constraints on the quality of modal models. This paper aims at a systematic approach for uncertainty quantification of the parameters of the modal models estimated from experimentally obtained data. Statistical analysis of modal parameters is implemented to derive an assessment of the entire modal model uncertainty measure. Investigated structures represent different complexity levels ranging from coupon, through sub-component, up to fully assembled aerospace and wind energy structural components made of composite materials. The proposed method is demonstrated on two application cases of a small and a large wind turbine blade.

9. Estimation of the reliability function for two-parameter exponentiated Rayleigh or Burr type X distribution

Directory of Open Access Journals (Sweden)

Anupam Pathak

2014-11-01

Full Text Available Abstract: Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model which has a wide variety of applications in many areas, and its main advantage is its ability in the context of lifetime events among other distributions. The uniformly minimum variance unbiased and maximum likelihood estimation methods are the ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased and maximum likelihood estimators of the reliability functions R(t)=P(X>t) and P=P(X>Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique of obtaining these parametric functions is introduced, in which the major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through the simulation study a comparison is made on the performance of these estimators with respect to bias, mean square error (MSE), 95% confidence length and corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and ‘P’ for the two-parameter exponentiated Rayleigh distribution were found to be superior to the MLEs of R(t) and ‘P’.

10. Modified polarimetric bidirectional reflectance distribution function with diffuse scattering: surface parameter estimation

Science.gov (United States)

Zhan, Hanyu; Voelz, David G.

2016-12-01

The polarimetric bidirectional reflectance distribution function (pBRDF) describes the relationships between incident and scattered Stokes parameters, but the familiar surface-only microfacet pBRDF cannot capture diffuse scattering contributions and depolarization phenomena. We propose a modified pBRDF model with a diffuse scattering component developed from the Kubelka-Munk and Le Hors et al. theories, and apply it in the development of a method to jointly estimate refractive index, slope variance, and diffuse scattering parameters from a series of Stokes parameter measurements of a surface. An application of the model and estimation approach to experimental data published by Priest and Meier shows improved correspondence with measurements of normalized Mueller matrix elements. By converting the Stokes/Mueller calculus formulation of the model to a degree of polarization (DOP) description, the estimation results of the parameters from measured DOP values are found to be consistent with a previous DOP model and results.

11. Estimating unknown parameters in haemophilia using expert judgement elicitation.

Science.gov (United States)

Fischer, K; Lewandowski, D; Janssen, M P

2013-09-01

The increasing attention to healthcare costs and treatment efficiency has led to an increasing demand for quantitative data concerning patient and treatment characteristics in haemophilia. However, most of these data are difficult to obtain. The aim of this study was to use expert judgement elicitation (EJE) to estimate currently unavailable key parameters for treatment models in severe haemophilia A. Using a formal expert elicitation procedure, 19 international experts provided information on (i) natural bleeding frequency according to age and onset of bleeding, (ii) treatment of bleeds, (iii) time needed to control bleeding after starting secondary prophylaxis, (iv) dose requirements for secondary prophylaxis according to onset of bleeding, and (v) life expectancy. For each parameter, experts provided their quantitative estimates (median, P10, P90), which were combined using a graphical method. In addition, information was obtained concerning key decision parameters of haemophilia treatment. There was most agreement between experts regarding bleeding frequencies for patients treated on demand with an average onset of joint bleeding (1.7 years): median 12 joint bleeds per year (95% confidence interval 0.9-36) for patients ≤ 18, and 11 (0.8-61) for adult patients. Less agreement was observed concerning the estimated effective dose for secondary prophylaxis in adults: median 2000 IU every other day. The majority (63%) of experts expected that a single minor joint bleed could cause irreversible damage, and would accept up to three minor joint bleeds or one trauma related joint bleed annually on prophylaxis. Expert judgement elicitation allowed structured capturing of quantitative expert estimates. It generated novel data to be used in computer modelling, clinical care, and trial design. © 2013 John Wiley & Sons Ltd.

12. Estimation of mean-reverting oil prices: a laboratory approach

International Nuclear Information System (INIS)

Bjerksund, P.; Stensland, G.

1993-12-01

Many economic decision support tools developed for the oil industry are based on the future oil price dynamics being represented by some specified stochastic process. To meet the demand for necessary data, much effort is allocated to parameter estimation based on historical oil price time series. The approach in this paper is to implement a complex future oil market model, and to condense the information from the model to parameter estimates for the future oil price. In particular, we use the Lensberg and Rasmussen stochastic dynamic oil market model to generate a large set of possible future oil price paths. Given the hypothesis that the future oil price is generated by a mean-reverting Ornstein-Uhlenbeck process, we obtain parameter estimates by a maximum likelihood procedure. We find a substantial degree of mean-reversion in the future oil price, which in some of our decision examples leads to an almost negligible value of flexibility. 12 refs., 2 figs., 3 tabs
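
For a mean-reverting Ornstein-Uhlenbeck process dX = κ(μ − X)dt + σ dW, observations at a fixed time step follow an exact AR(1) recursion, so maximum-likelihood estimation reduces to regressing each observation on its predecessor. A sketch on simulated data (the parameter values are illustrative, not those of the Lensberg and Rasmussen model):

```python
import math
import random

def simulate_ou(kappa, mu, sigma, x0, n, dt, seed=3):
    rng = random.Random(seed)
    b = math.exp(-kappa * dt)
    sd = sigma * math.sqrt((1 - b * b) / (2 * kappa))  # exact transition std. dev.
    xs = [x0]
    for _ in range(n):
        xs.append(mu + (xs[-1] - mu) * b + rng.gauss(0.0, sd))
    return xs

def fit_ou(xs, dt):
    """ML estimates via the exact AR(1) form X[k+1] = a + b*X[k] + noise,
    with b = exp(-kappa*dt) and a = mu*(1-b)."""
    x, y = xs[:-1], xs[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    kappa = -math.log(b) / dt
    mu = a / (1 - b)
    resid_var = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y)) / n
    sigma = math.sqrt(resid_var * 2 * kappa / (1 - b * b))
    return kappa, mu, sigma

path = simulate_ou(kappa=2.0, mu=20.0, sigma=4.0, x0=20.0, n=5000, dt=0.01)
k_hat, mu_hat, s_hat = fit_ou(path, dt=0.01)
```

The estimated κ measures the degree of mean reversion; the stronger it is, the smaller the value of flexibility, which is the effect the paper reports.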

13. The limiting behavior of the estimated parameters in a misspecified random field regression model

DEFF Research Database (Denmark)

Dahl, Christian Møller; Qin, Yu

This paper examines the limiting properties of the estimated parameters in the random field regression model recently proposed by Hamilton (Econometrica, 2001). Though the model is parametric, it enjoys the flexibility of the nonparametric approach since it can approximate a large collection of n...

14. A New Extant Respirometric Assay to Estimate Intrinsic Growth Parameters Applied to Study Plasmid Metabolic Burden

DEFF Research Database (Denmark)

Seoane, Jose Miguel; Sin, Gürkan; Lardon, Laurent

2010-01-01

mathematical treatment, or wake-up pulses prior to the analysis. Identifiability and sensitivity analysis were performed to confirm the robustness of the new approach for obtaining unique and accurate estimates of growth kinetic parameters. The new experimental design was applied to establish the metabolic...

15. State of charge estimation of lithium-ion batteries based on an improved parameter identification method

International Nuclear Information System (INIS)

Xia, Bizhong; Chen, Chaoren; Tian, Yong; Wang, Mingwang; Sun, Wei; Xu, Zhihui

2015-01-01

The SOC (state of charge) is the most important index of battery management systems. However, it cannot be measured directly with sensors and must be estimated with mathematical techniques. An accurate battery model is crucial to exactly estimate the SOC. In order to improve the model accuracy, this paper presents an improved parameter identification method. Firstly, the concept of polarization depth is proposed based on the analysis of polarization characteristics of the lithium-ion batteries. Then, the nonlinear least squares technique is applied to determine the model parameters according to data collected from pulsed discharge experiments. The results show that the proposed method can reduce the model error as compared with the conventional approach. Furthermore, a nonlinear observer presented in the previous work is utilized to verify the validity of the proposed parameter identification method in SOC estimation. Finally, experiments with different levels of discharge current are carried out to investigate the influence of polarization depth on SOC estimation. Experimental results show that the proposed method can improve the SOC estimation accuracy as compared with the conventional approach, especially under the conditions of large discharge current. - Highlights: • The polarization characteristics of lithium-ion batteries are analyzed. • The concept of polarization depth is proposed to improve model accuracy. • A nonlinear least squares technique is applied to determine the model parameters. • A nonlinear observer is used as the SOC estimation algorithm. • The validity of the proposed method is verified by experimental results.
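
A minimal version of the least-squares identification step can be sketched with an assumed first-order RC (Thevenin-type) voltage response fitted to a simulated constant-current pulse; the model structure and numbers are illustrative, not the paper's. Because the response is linear in two of its three parameters, a separable approach (linear solve plus a grid search over the time constant) avoids a general nonlinear solver:

```python
import numpy as np

def pulse_model(t, A, B, tau):
    # Terminal voltage during a constant-current pulse: offset A (OCV minus
    # ohmic drop) and an RC polarization of size B with time constant tau.
    return A - B * (1.0 - np.exp(-t / tau))

def fit_pulse(t, v, taus=np.linspace(1.0, 100.0, 400)):
    """Separable least squares: for each candidate tau the model is linear
    in (A, B), so solve that subproblem exactly and grid-search over tau."""
    best = None
    for tau in taus:
        g = 1.0 - np.exp(-t / tau)
        X = np.column_stack([np.ones_like(t), -g])
        coef, *_ = np.linalg.lstsq(X, v, rcond=None)
        sse = float(np.sum((X @ coef - v) ** 2))
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], tau)
    _, A, B, tau = best
    return A, B, tau

# Simulated pulse-discharge data with small measurement noise (volts).
t = np.linspace(0.0, 120.0, 241)
true_A, true_B, true_tau = 3.9, 0.08, 25.0
rng = np.random.default_rng(0)
v = pulse_model(t, true_A, true_B, true_tau) + rng.normal(0.0, 0.002, t.size)
A_hat, B_hat, tau_hat = fit_pulse(t, v)
```

The paper's richer polarization-depth model adds structure on top of this, but the fitting principle (minimizing squared error against pulsed-discharge data) is the same.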

16. PARAMETER ESTIMATION OF VALVE STICTION USING ANT COLONY OPTIMIZATION

Directory of Open Access Journals (Sweden)

S. Kalaivani

2012-07-01

Full Text Available In this paper, a procedure for quantifying valve stiction in control loops based on ant colony optimization is proposed. Pneumatic control valves are widely used in the process industry. The control valve contains non-linearities such as stiction, backlash, and deadband that in turn cause oscillations in the process output. Stiction is one of the long-standing problems and the most severe problem in control valves. Thus the measurement data from an oscillating control loop can be used as a possible diagnostic signal to provide an estimate of the stiction magnitude. Quantification of control valve stiction is still a challenging issue. Prior to doing stiction detection and quantification, it is necessary to choose a suitable model structure to describe control-valve stiction. To understand the stiction phenomenon, the Stenman model is used. Ant Colony Optimization (ACO), an intelligent swarm algorithm, proves effective in various fields. The ACO algorithm is inspired by the natural trail-following behaviour of ants. The parameters of the Stenman model are estimated using ant colony optimization, from the input-output data, by minimizing the error between the actual stiction model output and the simulated stiction model output. Using ant colony optimization, the Stenman model with known nonlinear structure and unknown parameters can be estimated.

17. Sensitivity and parameter-estimation precision for alternate LISA configurations

International Nuclear Information System (INIS)

Vallisneri, Michele; Crowder, Jeff; Tinto, Massimo

2008-01-01

We describe a simple framework to assess the LISA scientific performance (more specifically, its sensitivity and expected parameter-estimation precision for prescribed gravitational-wave signals) under the assumption of failure of one or two inter-spacecraft laser measurements (links) and of one to four intra-spacecraft laser measurements. We apply the framework to the simple case of measuring the LISA sensitivity to monochromatic circular binaries, and the LISA parameter-estimation precision for the gravitational-wave polarization angle of these systems. Compared to the six-link baseline configuration, the five-link case is characterized by a small loss in signal-to-noise ratio (SNR) in the high-frequency section of the LISA band; the four-link case shows a reduction by a factor of √2 at low frequencies, and by up to ∼2 at high frequencies. The uncertainty in the estimate of polarization, as computed in the Fisher-matrix formalism, also worsens when moving from six to five, and then to four links: this can be explained by the reduced SNR available in those configurations (except for observations shorter than three months, where five and six links do better than four even with the same SNR). In addition, we prove (for generic signals) that the SNR and Fisher matrix are invariant with respect to the choice of a basis of TDI observables; rather, they depend only on which inter-spacecraft and intra-spacecraft measurements are available
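
The Fisher-matrix formalism used here for parameter-estimation precision can be illustrated on a monochromatic signal s(t) = A sin(2πft + φ) in white noise, where F_ij = Σ_k (∂s/∂θ_i)(∂s/∂θ_j)/σ² and the inverse Fisher matrix bounds the parameter covariance. This is a generic sketch, not the LISA TDI computation:

```python
import numpy as np

def fisher_matrix(t, A, phi, f, sigma):
    """Fisher information for parameters (A, phi) of A*sin(2*pi*f*t + phi)
    observed in white Gaussian noise of standard deviation sigma."""
    arg = 2 * np.pi * f * t + phi
    dA = np.sin(arg)        # ds/dA
    dphi = A * np.cos(arg)  # ds/dphi
    derivs = np.vstack([dA, dphi])
    return derivs @ derivs.T / sigma**2

# Ten full signal cycles, sampled densely.
t = np.linspace(0.0, 10.0, 2001)
F = fisher_matrix(t, A=1.0, phi=0.3, f=1.0, sigma=0.5)
cov = np.linalg.inv(F)  # Cramer-Rao lower bound on the parameter covariance
sigma_A, sigma_phi = np.sqrt(np.diag(cov))
```

Doubling the noise level σ (i.e. halving the SNR) doubles both parameter uncertainties, mirroring the SNR-driven degradation the record reports when moving from six to five to four links.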

18. Temporal Parameters Estimation for Wheelchair Propulsion Using Wearable Sensors

Directory of Open Access Journals (Sweden)

Manoela Ojeda

2014-01-01

Full Text Available Due to lower limb paralysis, individuals with spinal cord injury (SCI) rely on their upper limbs for mobility. The prevalence of upper extremity pain and injury is high among this population. We evaluated the performance of three triaxis accelerometers placed on the upper arm, wrist, and under the wheelchair, to estimate temporal parameters of wheelchair propulsion. Twenty-six participants with SCI were asked to push their wheelchair equipped with a SMARTWheel. The estimated stroke number was compared with the criterion from video observations and the estimated push frequency was compared with the criterion from the SMARTWheel. Mean absolute errors (MAE) and mean absolute percentage of error (MAPE) were calculated. Intraclass correlation coefficients and Bland-Altman plots were used to assess the agreement. Results showed reasonable accuracies especially using the accelerometer placed on the upper arm, where the MAPE was 8.0% for stroke number and 12.9% for push frequency. The ICC was 0.994 for stroke number and 0.916 for push frequency. The wrist and seat accelerometers showed lower accuracy, with MAPEs for the stroke number of 10.8% and 13.4% and ICCs of 0.990 and 0.984, respectively. Results suggested that accelerometers could be an option for monitoring temporal parameters of wheelchair propulsion.
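
The error metrics in this record are straightforward to compute; a minimal sketch with hypothetical stroke counts (not the study's data):

```python
def mae(est, ref):
    # Mean absolute error against the criterion measure.
    return sum(abs(e - r) for e, r in zip(est, ref)) / len(ref)

def mape(est, ref):
    # Mean absolute percentage error against the criterion measure.
    return 100.0 * sum(abs(e - r) / r for e, r in zip(est, ref)) / len(ref)

# Hypothetical stroke counts: accelerometer estimate vs. video criterion.
video = [50, 62, 47, 55]
accel = [48, 60, 50, 56]
print(round(mae(accel, video), 2), round(mape(accel, video), 2))  # prints 2.0 3.86
```

MAE reports error in the measurement's own units (strokes), while MAPE normalizes by the criterion value, which is why the study can compare sensor placements across participants with different push counts.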

19. Decommissioning Cost Estimating - The "PRICE" Approach

International Nuclear Information System (INIS)

Manning, R.; Gilmour, J.

2002-01-01

Over the past 9 years UKAEA has developed a formalized approach to decommissioning cost estimating. The estimating methodology and computer-based application are known collectively as the PRICE system. At the heart of the system is a database (the knowledge base) which holds resource demand data on a comprehensive range of decommissioning activities. This data is used in conjunction with project specific information (the quantities of specific components) to produce decommissioning cost estimates. PRICE is a dynamic cost-estimating tool, which can satisfy both strategic planning and project management needs. With a relatively limited analysis a basic PRICE estimate can be produced and used for the purposes of strategic planning. This same estimate can be enhanced and improved, primarily by the improvement of detail, to support sanction expenditure proposals, and also as a tender assessment and project management tool. The paper will: describe the principles of the PRICE estimating system; report on the experiences of applying the system to a wide range of projects from contaminated car parks to nuclear reactors; provide information on the performance of the system in relation to historic estimates, tender bids, and outturn costs

20. Consistency of extreme flood estimation approaches

Science.gov (United States)

Felder, Guido; Paquet, Emmanuel; Penot, David; Zischg, Andreas; Weingartner, Rolf

2017-04-01

Estimations of low-probability flood events are frequently used for the planning of infrastructure as well as for determining the dimensions of flood protection measures. There are several well-established methodical procedures to estimate low-probability floods. However, a global assessment of the consistency of these methods is difficult to achieve, since the "true value" of an extreme flood is not observable. Nevertheless, a detailed comparison performed on a given case study brings useful information about the statistical and hydrological processes involved in the different methods. In this study, the following three approaches for estimating low-probability floods are compared: a purely statistical approach (ordinary extreme value statistics), a statistical approach based on stochastic rainfall-runoff simulation (the SCHADEX method), and a deterministic approach (physically based PMF estimation). These methods are tested on two different Swiss catchments. The results and some intermediate variables are used for assessing potential strengths and weaknesses of each method, as well as for evaluating their consistency.

1. Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics

Science.gov (United States)

Kukreja, Sunil L.; Boyle, Richard D.

2014-01-01

Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify parameters of this model in discrete time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of the continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.

2. Correcting the bias of empirical frequency parameter estimators in codon models.

Directory of Open Access Journals (Sweden)

Sergei Kosakovsky Pond

2010-07-01

Full Text Available Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators are biased, and that this bias has an adverse effect on goodness of fit and estimates of substitution rates. We propose a "corrected" empirical estimator that begins with observed nucleotide counts, but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the -style estimators.
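The flavour of the correction can be sketched as follows: build codon frequencies as products of position-specific nucleotide frequencies, but give stop codons zero mass and renormalize over the 61 sense codons. This is a simplified illustration of the idea, not the exact estimator derived in the paper:

```python
from itertools import product

STOPS = {"TAA", "TAG", "TGA"}  # universal genetic code stop codons

def codon_freqs(pos_freqs):
    """Build codon frequencies from position-specific nucleotide frequencies,
    assigning zero mass to stop codons and renormalizing over the 61 sense
    codons (a naive product estimator would ignore this correction)."""
    raw = {}
    for c in product("ACGT", repeat=3):
        codon = "".join(c)
        if codon not in STOPS:
            raw[codon] = pos_freqs[0][c[0]] * pos_freqs[1][c[1]] * pos_freqs[2][c[2]]
    total = sum(raw.values())
    return {codon: f / total for codon, f in raw.items()}

uniform = {n: 0.25 for n in "ACGT"}
freqs = codon_freqs([uniform, uniform, uniform])
print(len(freqs), round(sum(freqs.values()), 10))
```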

3. Multivariate phase type distributions - Applications and parameter estimation

DEFF Research Database (Denmark)

Meisch, David

The best known univariate probability distribution is the normal distribution. It is used throughout the literature in a broad field of applications. In cases where it is not sensible to use the normal distribution alternative distributions are at hand and well understood, many of these belonging...... and statistical inference, is the multivariate normal distribution. Unfortunately only little is known about the general class of multivariate phase type distribution. Considering the results concerning parameter estimation and inference theory of univariate phase type distributions, the class of multivariate...... projects and depend on reliable cost estimates. The Successive Principle is a group analysis method primarily used for analyzing medium to large projects in relation to cost or duration. We believe that the mathematical modeling used in the Successive Principle can be improved. We suggested a novel...

4. Energy parameter estimation in solar powered wireless sensor networks

KAUST Repository

Mousa, Mustafa

2014-02-24

The operation of solar powered wireless sensor networks is associated with numerous challenges. One of the main challenges is the high variability of solar power input and battery capacity, due to factors such as weather, humidity, dust and temperature. In this article, we propose a set of tools that can be implemented onboard high power wireless sensor networks to estimate the battery condition and capacity as well as solar power availability. These parameters are very important to optimize sensing and communications operations and maximize the reliability of the complete system. Experimental results show that the performance of typical Lithium Ion batteries severely degrades outdoors in a matter of weeks or months, and that the availability of solar energy in an urban solar powered wireless sensor network is highly variable, which underlines the need for such power and energy estimation algorithms.

5. Estimation of Aircraft Nonlinear Unsteady Parameters From Wind Tunnel Data

Science.gov (United States)

1998-01-01

Aerodynamic equations were formulated for an aircraft in one-degree-of-freedom large amplitude motion about each of its body axes. The model formulation based on indicial functions separated the resulting aerodynamic forces and moments into static terms, purely rotary terms and unsteady terms. Model identification from experimental data combined stepwise regression and maximum likelihood estimation in a two-stage optimization algorithm that can identify the unsteady term and rotary term if necessary. The identification scheme was applied to oscillatory data in two examples. The model identified from experimental data fit the data well; however, some parameters were estimated with limited accuracy. The resulting model was a good predictor for oscillatory and ramp input data.

6. Optimization-based particle filter for state and parameter estimation

Institute of Scientific and Technical Information of China (English)

Li Fu; Qi Fei; Shi Guangming; Zhang Li

2009-01-01

In recent years, the theory of the particle filter has been developed and widely used for state and parameter estimation in nonlinear/non-Gaussian systems. Choosing a good importance density is a critical issue in particle filter design. In order to improve the approximation of the posterior distribution, this paper provides an optimization-based algorithm (the steepest descent method) to generate the proposal distribution and then sample particles from that distribution. The algorithm is applied to a 1-D case, and the simulation results show that the proposed particle filter performs better than the extended Kalman filter (EKF), the standard particle filter (PF), the extended Kalman particle filter (PF-EKF), and the unscented particle filter (UPF), both in efficiency and in estimation precision.
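A minimal sketch of the idea, assuming a scalar linear-Gaussian toy model: after propagating each particle, one steepest-descent step on the negative log-likelihood of the new measurement moves the particles toward the region of high posterior density before weighting and resampling. The model and step size are illustrative, not the paper's:

```python
import random, math

def particle_filter(ys, n=500, q=0.5, r=0.5, step=0.2):
    """Bootstrap particle filter with an optimization-flavoured proposal:
    after propagation, each particle takes one steepest-descent step on the
    negative log-likelihood of the current measurement (a simplified stand-in
    for the proposal construction described in the paper)."""
    parts = [random.gauss(0.0, 1.0) for _ in range(n)]
    means = []
    for y in ys:
        # propagate through the (assumed) transition model x_k = 0.5 x_{k-1} + w
        parts = [0.5 * p + random.gauss(0.0, q) for p in parts]
        # steepest descent on NLL = (y-x)^2/(2 r^2): x <- x + step*(y-x)/r^2
        parts = [p + step * (y - p) / r ** 2 for p in parts]
        # importance weights from the measurement model y = x + v
        ws = [math.exp(-(y - p) ** 2 / (2 * r ** 2)) for p in parts]
        total = sum(ws)
        ws = [w / total for w in ws]
        means.append(sum(w * p for w, p in zip(ws, parts)))
        # multinomial resampling
        parts = random.choices(parts, weights=ws, k=n)
    return means

random.seed(0)
true_x, ys = 0.0, []
for _ in range(50):
    true_x = 0.5 * true_x + random.gauss(0.0, 0.5)
    ys.append(true_x + random.gauss(0.0, 0.5))
est = particle_filter(ys)
print(len(est))
```

Note that the deterministic gradient step is not accounted for in the importance weights here; a full treatment would evaluate the proposal density, as the paper does.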

7. Energy parameter estimation in solar powered wireless sensor networks

KAUST Repository

Mousa, Mustafa; Claudel, Christian G.

2014-01-01

The operation of solar powered wireless sensor networks is associated with numerous challenges. One of the main challenges is the high variability of solar power input and battery capacity, due to factors such as weather, humidity, dust and temperature. In this article, we propose a set of tools that can be implemented onboard high power wireless sensor networks to estimate the battery condition and capacity as well as solar power availability. These parameters are very important to optimize sensing and communications operations and maximize the reliability of the complete system. Experimental results show that the performance of typical Lithium Ion batteries severely degrades outdoors in a matter of weeks or months, and that the availability of solar energy in an urban solar powered wireless sensor network is highly variable, which underlines the need for such power and energy estimation algorithms.

8. Estimation of kinematic parameters in CALIFA galaxies: no-assumption on internal dynamics

Science.gov (United States)

García-Lorenzo, B.; Barrera-Ballesteros, J.; CALIFA Team

2016-06-01

We propose a simple approach to homogeneously estimate kinematic parameters of a broad variety of galaxies (ellipticals, spirals, irregulars, or interacting systems). This methodology avoids the use of any kinematic model or any assumption on internal dynamics. This simple but novel approach allows us to determine the frequency of kinematic distortions, the systemic velocity, the kinematic center, and the kinematic position angles, which are directly measured from the two-dimensional distributions of radial velocities. We test our analysis tools using the CALIFA Survey.

9. Parameter Estimation Analysis for Hybrid Adaptive Fault Tolerant Control

Science.gov (United States)

Eshak, Peter B.

Research efforts have increased in recent years toward the development of intelligent fault tolerant control laws, which are capable of helping the pilot to safely maintain aircraft control at post-failure conditions. Researchers at West Virginia University (WVU) have been actively involved in the development of fault tolerant adaptive control laws in all three major categories: direct, indirect, and hybrid. The first implemented design to provide adaptation was a direct adaptive controller, which used artificial neural networks to generate augmentation commands in order to reduce the modeling error. Indirect adaptive laws were implemented in another controller, which utilized online PID to estimate and update the controller parameters. Finally, a new controller design was introduced, which integrated both direct and indirect control laws; this controller is known as the hybrid adaptive controller. This last design outperformed the two earlier ones in terms of lower neural network effort and better tracking quality. The performance of online PID plays an important role in the quality of the hybrid controller; therefore, the quality of the estimation is of great importance. Unfortunately, PID is not perfect, and the online estimation process has some inherent issues: the online PID estimates are primarily affected by delays and biases. In order to ensure that reliable estimates are passed to the controller, the estimator requires some time to converge, and moreover it will often converge to a biased value. This thesis conducts a sensitivity analysis for the estimation issues, delay and bias, and their effect on the tracking quality. In addition, the performance of the hybrid controller as compared to the direct adaptive controller is explored. In order to serve this purpose, a simulation environment in MATLAB/SIMULINK has been created. The simulation environment is customized to provide the user with the flexibility to add different combinations of biases and delays to

10. Estimation of modal parameters using bilinear joint time frequency distributions

Science.gov (United States)

Roshan-Ghias, A.; Shamsollahi, M. B.; Mobed, M.; Behzad, M.

2007-07-01

In this paper, a new method is proposed for modal parameter estimation using time-frequency representations. Smoothed Pseudo Wigner-Ville distribution which is a member of the Cohen's class distributions is used to decouple vibration modes completely in order to study each mode separately. This distribution reduces cross-terms which are troublesome in Wigner-Ville distribution and retains the resolution as well. The method was applied to highly damped systems, and results were superior to those obtained via other conventional methods.

11. Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver

Science.gov (United States)

Kang, Ling; Zhou, Liwei

2018-02-01

The Muskingum model is an effective flood routing technique in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. In two real and frequently used case studies, the NVPNLMM obtained better values of the evaluation criteria, which are used to describe the quality of the estimated outflows and to compare the accuracy of flood routing across models, and the optimal estimated outflows of the NVPNLMM were closer to the observed outflows than those of the other models.
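For reference, the classic fixed-parameter linear Muskingum recursion (the paper's NVPNLMM generalizes this by letting the parameters vary and adding a nonlinear storage exponent) can be sketched as:

```python
def muskingum_route(inflow, K=2.0, x=0.2, dt=1.0, O0=None):
    """Classic linear Muskingum routing, O_{t+1} = C0*I_{t+1} + C1*I_t + C2*O_t,
    with coefficients derived from the storage constant K, weighting factor x,
    and time step dt. Parameter values here are arbitrary illustrations."""
    denom = 2 * K * (1 - x) + dt
    c0 = (dt - 2 * K * x) / denom
    c1 = (dt + 2 * K * x) / denom
    c2 = (2 * K * (1 - x) - dt) / denom
    out = [inflow[0] if O0 is None else O0]
    for i in range(1, len(inflow)):
        out.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * out[-1])
    return out

hydrograph = [10, 20, 50, 80, 60, 40, 25, 15, 10, 10]  # made-up inflow (m^3/s)
routed = muskingum_route(hydrograph)
print(max(routed) < max(hydrograph))  # routing attenuates the peak
```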

12. Estimating the parameters of a generalized lambda distribution

International Nuclear Information System (INIS)

Fournier, B.; Rupin, N.; Najjar, D.; Iost, A.; Bigerelle, M.; Wilcox, R.

2007-01-01

The method of moments is a popular technique for estimating the parameters of a generalized lambda distribution (GLD), but published results suggest that the percentile method gives superior results. However, the percentile method cannot be implemented in an automatic fashion, and automatic methods, like the starship method, can lead to prohibitive execution time with large sample sizes. A new estimation method is proposed that is automatic (it does not require the use of special tables or graphs), and it reduces the computational time. Based partly on the usual percentile method, this new method also requires choosing which quantile u to use when fitting a GLD to data. The choice for u is studied and it is found that the best choice depends on the final goal of the modeling process. The sampling distribution of the new estimator is studied and compared to the sampling distributions of estimators that have been proposed previously. Naturally, all estimators are biased and here it is found that the bias becomes negligible with sample sizes n ≥ 2 × 10³. The .025 and .975 quantiles of the sampling distribution are investigated, and the difference between these quantiles is found to decrease proportionally to 1/√n. The same results hold for the moment and percentile estimates. Finally, the influence of the sample size is studied when a normal distribution is modeled by a GLD. Both bounded and unbounded GLDs are used and the bounded GLD turns out to be the most accurate. Indeed it is shown that, up to n = 10⁶, bounded GLD modeling cannot be rejected by usual goodness-of-fit tests. (authors)
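A GLD is defined through its quantile function, so sampling and percentile-style fitting are straightforward; a sketch in the Ramberg-Schmeiser parameterization, using a commonly quoted set of lambda values approximating the standard normal (treat the exact digits as an assumption):

```python
import random

def gld_quantile(u, lam1, lam2, lam3, lam4):
    """Quantile function of the generalized lambda distribution in the
    Ramberg-Schmeiser parameterization: Q(u) = l1 + (u^l3 - (1-u)^l4) / l2."""
    return lam1 + (u ** lam3 - (1 - u) ** lam4) / lam2

def gld_sample(n, *lams, rng=random):
    # inverse-transform sampling: apply Q to uniform draws
    return [gld_quantile(rng.random(), *lams) for _ in range(n)]

# Lambda values often quoted as approximating a standard normal (assumed here)
lams = (0.0, 0.1975, 0.1349, 0.1349)
random.seed(1)
xs = sorted(gld_sample(20000, *lams))
median = xs[len(xs) // 2]
print(abs(median - gld_quantile(0.5, *lams)) < 0.05)
```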

13. Parameter extraction using global particle swarm optimization approach and the influence of polymer processing temperature on the solar cell parameters

Science.gov (United States)

Kumar, S.; Singh, A.; Dhar, A.

2017-08-01

The accurate estimation of photovoltaic parameters is fundamental to gaining insight into the physical processes occurring inside a photovoltaic device and thereby to optimizing its design, fabrication processes, and quality. A simulative approach to accurately determining the device parameters is crucial for cell array and module simulation when applied in practical on-field applications. In this work, we have developed a global particle swarm optimization (GPSO) approach to estimate the different solar cell parameters, viz., ideality factor (η), short circuit current (Isc), open circuit voltage (Voc), shunt resistance (Rsh), and series resistance (Rs), with a wide search range of over ±100% for each model parameter. After validating the accuracy and global search power of the proposed approach with synthetic and noisy data, we applied the technique to extract the PV parameters of ZnO/PCDTBT based hybrid solar cells (HSCs) prepared under different annealing conditions. Further, we examine the variation of the extracted model parameters to unveil the physical processes occurring when different annealing temperatures are employed during device fabrication, and establish the role of improved charge transport in polymer films from independent FET measurements. The evolution of the surface morphology, optical absorption, and chemical compositional behaviour of PCDTBT co-polymer films as a function of processing temperature has also been captured in the study and correlated with the findings from the PV parameters extracted using the GPSO approach.
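A minimal global-best PSO sketch applied to a toy extraction task: an ideal illuminated diode with series and shunt resistances neglected so the current is explicit. The model, bounds, and hyperparameters are illustrative, not the paper's:

```python
import random, math

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (the paper's GPSO uses a
    wide +/-100% search range around nominal values; bounds play that role here)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy task: fit (Iph, I0, eta) of an ideal diode, I = Iph - I0*(exp(V/(eta*Vt)) - 1)
Vt, true = 0.02585, (0.5, 1e-9, 1.5)
volts = [i * 0.05 for i in range(15)]
def model(V, Iph, I0, eta):
    return Iph - I0 * (math.exp(V / (eta * Vt)) - 1.0)
data = [model(V, *true) for V in volts]
def sse(p):
    return sum((model(V, *p) - y) ** 2 for V, y in zip(volts, data))
best, err = pso(sse, [(0.1, 1.0), (1e-10, 1e-8), (1.0, 2.0)])
print(err < 1e-3)
```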

14. State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications

Science.gov (United States)

A variety of driver assistance systems such as traction control, electronic stability control (ESC), rollover prevention and lane departure avoidance systems are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significantly enhanced if the real-time values of several vehicle parameters and state variables, namely tire-road friction coefficient, slip angle, roll angle, and rollover index, can be known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, due to unknown and changing plant parameters, and due to the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard Lipschitz assumption based nonlinear observer. The developed nonlinear observer is utilized for estimation of slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs. An approach to estimate unknown disturbance inputs in nonlinear systems using dynamic model inversion and a modified version of the mean value theorem is

15. Analytic continuation by duality estimation of the S parameter

International Nuclear Information System (INIS)

Ignjatovic, S. R.; Wijewardhana, L. C. R.; Takeuchi, T.

2000-01-01

We investigate the reliability of the analytic continuation by duality (ACD) technique in estimating the electroweak S parameter for technicolor theories. The ACD technique, which is an application of finite energy sum rules, relates the S parameter for theories with unknown particle spectra to known OPE coefficients. We identify the sources of error inherent in the technique and evaluate them for several toy models to see if they can be controlled. The evaluation of errors is done analytically and all relevant formulas are provided in appendixes including analytical formulas for approximating the function 1/s with a polynomial in s. The use of analytical formulas protects us from introducing additional errors due to numerical integration. We find that it is very difficult to control the errors even when the momentum dependence of the OPE coefficients is known exactly. In realistic cases in which the momentum dependence of the OPE coefficients is only known perturbatively, it is impossible to obtain a reliable estimate. (c) 2000 The American Physical Society

16. A robust methodology for modal parameters estimation applied to SHM

Science.gov (United States)

Cardoso, Rharã; Cury, Alexandre; Barbosa, Flávio

2017-10-01

The subject of structural health monitoring has been drawing more and more attention over the last years. Many vibration-based techniques aiming at detecting small structural changes or even damage have been developed or enhanced through successive studies. Lately, several studies have focused on the use of raw dynamic data to assess information about structural condition. Despite this trend and much skepticism, many methods still rely on the use of modal parameters as fundamental data for damage detection. Therefore, it is of utmost importance that modal identification procedures are performed with a sufficient level of precision and automation. To fulfill these requirements, this paper presents a novel automated time-domain methodology to identify modal parameters based on a two-step clustering analysis. The first step consists in clustering mode estimates from parametric models of different orders, usually presented in stabilization diagrams. In an automated manner, the first clustering analysis indicates which estimates correspond to physical modes. To circumvent the detection of spurious modes or the loss of physical ones, a second clustering step is then performed, consisting in the data mining of the information gathered from the first step. To attest to the robustness and efficiency of the proposed methodology, numerically generated signals as well as experimental data obtained from a simply supported beam tested in the laboratory and from a railway bridge are utilized. The results appeared to be more robust and accurate compared to those obtained from methods based on a one-step clustering analysis.
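A toy analogue of the two steps, grouping synthetic pole estimates from a model-order sweep and then discarding sparsely populated (spurious) clusters; the greedy grouping and thresholds are illustrative, not the clustering algorithms used in the paper:

```python
import random

def cluster_modes(poles, df=0.5, min_size=20):
    """Two-step sketch: (1) greedily group (frequency, damping) estimates whose
    natural frequencies lie within df Hz of a running cluster mean; (2) keep
    only clusters populated by many model orders, discarding spurious poles."""
    clusters = []
    for f, zeta in sorted(poles):
        for c in clusters:
            mean_f = sum(p[0] for p in c) / len(c)
            if abs(f - mean_f) < df:
                c.append((f, zeta))
                break
        else:
            clusters.append([(f, zeta)])
    physical = [c for c in clusters if len(c) >= min_size]
    return [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            for c in physical]

random.seed(3)
poles = []
for order in range(2, 32):  # stabilization-diagram sweep over model orders
    poles.append((4.0 + random.gauss(0, 0.05), 0.020 + random.gauss(0, 0.002)))
    poles.append((9.5 + random.gauss(0, 0.05), 0.015 + random.gauss(0, 0.002)))
    poles.append((random.uniform(0, 15), random.uniform(0, 0.1)))  # spurious pole
modes = cluster_modes(poles)
print(len(modes))
```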

17. Parameter estimation in space systems using recurrent neural networks

Science.gov (United States)

Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.

1991-01-01

The identification of time-varying parameters encountered in space systems is addressed, using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network, during two configurations corresponding to the early and late stages of the spacecraft on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results, though further research is warranted for training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.

18. Parameter estimation and hypothesis testing in linear models

CERN Document Server

Koch, Karl-Rudolf

1999-01-01

The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have also been added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...

19. Periodic orbits of hybrid systems and parameter estimation via AD

International Nuclear Information System (INIS)

Guckenheimer, John; Phipps, Eric Todd; Casey, Richard

2004-01-01

Rhythmic, periodic processes are ubiquitous in biological systems; for example, the heart beat, walking, circadian rhythms and the menstrual cycle. Modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging because: (1) (most) nonlinear differential equations can only be solved numerically; (2) accurate computation requires solving boundary value problems; (3) many problems and solutions are only piecewise smooth; (4) many problems require solving differential-algebraic equations; (5) sensitivity information for parameter dependence of solutions requires solving variational equations; and (6) truncation errors in numerical integration degrade performance of optimization methods for parameter estimation. In addition, mathematical models of biological processes frequently contain many poorly-known parameters, and the problems associated with this impede the construction of detailed, high-fidelity models. Modelers are often faced with the difficult problem of using simulations of a nonlinear model, with complex dynamics and many parameters, to match experimental data. Improved computational tools for exploring parameter space and fitting models to data are clearly needed. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to accurately and efficiently evaluate derivatives for time integration, parameter sensitivities, root finding and optimization. The boundary value problem representing a periodic orbit in a hybrid system of differential algebraic equations is discretized via multiple-shooting using a high-degree Taylor series integration method (GM00, Phi03). Numerical solutions to the shooting equations are then estimated by a Newton process yielding an approximate periodic orbit. A metric is defined for computing the distance
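The role of automatic differentiation in propagating parameter sensitivities through time integration can be illustrated with forward-mode AD (dual numbers) pushed through an RK4 integrator; the paper uses high-degree Taylor-series methods, so this is only a minimal analogue:

```python
import math

class Dual:
    """Forward-mode AD value a + b*eps with eps^2 = 0 (just enough arithmetic
    for an RK4 integrator)."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__
    def __neg__(self):
        return Dual(-self.a, -self.b)

def rk4(f, x0, t1, n=1000):
    """Classical Runge-Kutta integration; works on floats or Duals."""
    h, x = t1 / n, x0
    for _ in range(n):
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x = x + (h * (1.0 / 6.0)) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return x

# Sensitivity of x(t) = x0*exp(-a*t) to the decay rate a: seed a with eps
a = Dual(2.0, 1.0)
x_end = rk4(lambda x: -a * x, Dual(1.0), 1.0)
exact = -1.0 * math.exp(-2.0)  # d/da [exp(-a*t)] at t = 1, a = 2
print(abs(x_end.b - exact) < 1e-8)
```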

20. Nonlinear Parameter Estimation in Microbiological Degradation Systems and Statistic Test for Common Estimation

DEFF Research Database (Denmark)

Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik

1995-01-01

Three identical microbiological experiments were carried out and analysed in order to examine the variability of the parameter estimates. The microbiological system consisted of a substrate (toluene) and a biomass (pure culture) mixed together in an aquifer medium. The degradation of the substrate...

1. Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions

Science.gov (United States)

Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.

2017-01-01

Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.

2. Estimation of Parameters of CCF with Staggered Testing

International Nuclear Information System (INIS)

Kim, Myung-Ki; Hong, Sung-Yull

2006-01-01

Common cause failures are extremely important in reliability analysis and can be dominant risk contributors in a highly reliable system such as a nuclear power plant. Of particular concern is common cause failure (CCF) that degrades the redundancy or diversity implemented to improve the reliability of systems. Most analyses of the parameters of CCF models, such as the beta factor model, alpha factor model, and MGL (Multiple Greek Letters) model, deal with systems under a non-staggered testing strategy. In non-staggered testing, all components are tested at the same time (or at least in the same shift); in staggered testing, if the first component tested fails, all the other components are tested immediately, and if it succeeds, no more is done until the next scheduled testing time. Both strategies are applied in nuclear power plants. The strategy, however, is not explicitly described in the technical specifications, but implicitly in the periodic test procedure. For example, some redundant components particularly important to safety are tested with a staggered testing strategy, while others are tested with a non-staggered testing strategy. This paper presents parameter estimators for CCF models such as the beta factor, MGL, and alpha factor models under a staggered testing strategy. In addition, a new CCF model, the rho factor model, is proposed and its parameter estimator is presented for staggered testing.
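For orientation, the standard (non-staggered) point estimates for the beta-factor and alpha-factor models of a two-train system can be sketched from event counts; the staggered-testing estimators derived in the paper modify these formulas:

```python
def beta_factor(n1, n2):
    """Point estimate of the beta factor for a two-train system from event
    counts: n1 single failures, n2 common cause (double) failures. Each CCF
    event contributes two component failures. Standard non-staggered formula;
    the paper derives modified estimators for staggered testing."""
    return 2 * n2 / (n1 + 2 * n2)

def alpha_factors(counts):
    """Alpha-factor estimates a_k = n_k / sum(n_j), where n_k is the number
    of events failing exactly k components."""
    total = sum(counts)
    return [n / total for n in counts]

# Hypothetical operating experience: 98 single failures, 2 double failures
print(round(beta_factor(98, 2), 4))   # 0.0392
print(alpha_factors([98, 2]))
```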

3. Estimation Parameters And Modelling Zero Inflated Negative Binomial

Directory of Open Access Journals (Sweden)

Cindy Cahyaning Astuti

2016-11-01

Full Text Available Regression analysis is used to determine the relationship between one or several response variables (Y) and one or several predictor variables (X). A regression model between predictor variables and a Poisson-distributed response variable is called a Poisson regression model. Since Poisson regression requires equality between the mean and the variance, it is not appropriate for overdispersed data (variance higher than the mean). The Poisson regression model is commonly used to analyze count data, in which it is common to encounter observations with a large proportion of zero values on the response variable (zero inflation). Poisson regression can be used to analyze count data, but it cannot solve the problem of excess zero values on the response variable. An alternative model that is more suitable for overdispersed data and can handle excess zero values on the response variable is the Zero-Inflated Negative Binomial (ZINB) model. In this research, ZINB is applied to the case of Tetanus Neonatorum in East Java. The aims of this research are to examine the likelihood function, to form an algorithm to estimate the parameters of ZINB, and to apply the ZINB model to the case of Tetanus Neonatorum in East Java. Maximum likelihood estimation (MLE) is used to estimate the parameters of ZINB, and the likelihood function is maximized using the Expectation-Maximization (EM) algorithm. Test results of the ZINB regression model showed that the predictor variable with a partially significant effect in the negative binomial part is the percentage of pregnant women's visits and the percentage of deliveries assisted by maternal health personnel, while the predictor variable with a partially significant effect in the zero-inflation part is the percentage of neonatal visits.
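The ZINB probability mass function that underlies the likelihood can be sketched directly; the parameter values here are arbitrary illustrations:

```python
import math

def nb_pmf(y, r, p):
    """Negative binomial pmf with dispersion r and success probability p:
    P(Y=y) = C(y+r-1, y) * p^r * (1-p)^y, computed via log-gamma for stability."""
    return math.exp(math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
                    + r * math.log(p) + y * math.log(1 - p))

def zinb_pmf(y, pi, r, p):
    """Zero-inflated negative binomial: a point mass pi at zero mixed with a
    negative binomial count component (the model whose likelihood the paper
    maximizes via EM)."""
    base = (1 - pi) * nb_pmf(y, r, p)
    return pi + base if y == 0 else base

pi, r, p = 0.3, 2.0, 0.4
probs = [zinb_pmf(y, pi, r, p) for y in range(200)]
mean = sum(y * q for y, q in enumerate(probs))
# mass sums to 1; mean equals (1-pi)*r*(1-p)/p = 0.7*3 = 2.1
print(round(sum(probs), 6), round(mean, 4))
```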

4. Gaussian process inference for estimating pharmacokinetic parameters of dynamic contrast-enhanced MR images.

Science.gov (United States)

Wang, Shijun; Liu, Peter; Turkbey, Baris; Choyke, Peter; Pinto, Peter; Summers, Ronald M

2012-01-01

In this paper, we propose a new pharmacokinetic model for parameter estimation of dynamic contrast-enhanced (DCE) MRI by using Gaussian process inference. Our model is based on the Tofts dual-compartment model for the description of tracer kinetics and the observed time series from DCE-MRI is treated as a Gaussian stochastic process. The parameter estimation is done through a maximum likelihood approach and we propose a variant of the coordinate descent method to solve this likelihood maximization problem. The new model was shown to outperform a baseline method on simulated data. Parametric maps generated on prostate DCE data with the new model also provided better enhancement of tumors, lower intensity on false positives, and better boundary delineation when compared with the baseline method. New statistical parameter maps from the process model were also found to be informative, particularly when paired with the PK parameter maps.

5. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

Science.gov (United States)

Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

2018-05-01

The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior distribution. Jeffreys' prior is a kind of non-informative prior distribution, used when no information about the parameter is available. The non-informative Jeffreys' prior is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with a non-informative Jeffreys' prior. Based on the results and discussion, the estimates of β and Σ are obtained from the expected values of the marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals of functions whose values are difficult to determine. Therefore, an approximation is needed, obtained by generating random samples according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
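The Gibbs scheme described above can be sketched for the simplest case of a single response variable, where the inverse Wishart reduces to an inverse gamma. The data are synthetic, and the sampler follows the standard full conditionals under Jeffreys' prior:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta_true, sigma_true = 200, np.array([1.0, -2.0]), 1.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ beta_true + sigma_true * rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_ols = XtX_inv @ X.T @ y

# Gibbs sampler under Jeffreys' prior p(beta, sigma^2) ∝ 1/sigma^2
sig2, draws = 1.0, []
for _ in range(2000):
    # beta | sigma^2, y  ~  N(beta_ols, sigma^2 (X'X)^-1)
    beta = rng.multivariate_normal(beta_ols, sig2 * XtX_inv)
    # sigma^2 | beta, y  ~  Inv-Gamma(n/2, SSR/2)
    ssr = np.sum((y - X @ beta) ** 2)
    sig2 = 1.0 / rng.gamma(n / 2.0, 2.0 / ssr)
    draws.append(beta)
post_mean = np.mean(draws[500:], axis=0)   # discard burn-in, average the rest
print(post_mean)
```

The posterior mean of β matches the OLS estimate closely, as expected under this flat prior; the multivariate-response case replaces the inverse gamma step with an inverse Wishart draw.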

6. Nonlinear systems time-varying parameter estimation: Application to induction motors

Energy Technology Data Exchange (ETDEWEB)

Kenne, Godpromesse [Laboratoire d' Automatique et d' Informatique Appliquee (LAIA), Departement de Genie Electrique, IUT FOTSO Victor, Universite de Dschang, B.P. 134 Bandjoun (Cameroon); Ahmed-Ali, Tarek [Ecole Nationale Superieure des Ingenieurs des Etudes et Techniques d' Armement (ENSIETA), 2 Rue Francois Verny, 29806 Brest Cedex 9 (France); Lamnabhi-Lagarrigue, F. [Laboratoire des Signaux et Systemes (L2S), C.N.R.S-SUPELEC, Universite Paris XI, 3 Rue Joliot Curie, 91192 Gif-sur-Yvette (France); Arzande, Amir [Departement Energie, Ecole Superieure d' Electricite-SUPELEC, 3 Rue Joliot Curie, 91192 Gif-sur-Yvette (France)

2008-11-15

In this paper, an algorithm for time-varying parameter estimation for a large class of nonlinear systems is presented. The proof of the convergence of the estimates to their true values is achieved using Lyapunov theory and does not require that the classical persistent excitation condition be satisfied by the input signal. Since the induction motor (IM) is widely used in several industrial sectors, the algorithm developed is potentially useful for adjusting the controller parameters of variable speed drives. The method proposed is simple and easily implementable in real-time. The application of this approach to on-line estimation of the rotor resistance of an IM shows a rapidly converging estimate in spite of measurement noise, discretization effects, parameter uncertainties (e.g. inaccuracies in motor inductance values) and modeling inaccuracies. The robustness analysis for this IM application also revealed that the proposed scheme is insensitive to stator resistance variations within a wide range. The merits of the proposed algorithm for on-line time-varying rotor resistance estimation are demonstrated via experimental results in various operating conditions of the induction motor. The experimental results obtained demonstrate that applying the proposed algorithm to update on-line the parameters of an adaptive controller (e.g. IM and synchronous machine adaptive control) can improve the efficiency of the industrial process. Other interesting features of the proposed method include fault detection/estimation and adaptive control of IM and synchronous machines. (author)

7. Mammalian Cell Culture Process for Monoclonal Antibody Production: Nonlinear Modelling and Parameter Estimation

Directory of Open Access Journals (Sweden)

Dan Selişteanu

2015-01-01

Full Text Available Monoclonal antibodies (mAbs) are at present one of the fastest growing products of the pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAb production processes is predominantly based on empirical knowledge, with improvements achieved through trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. Based on a dynamical model of such processes, an optimization-based technique for estimating the kinetic parameters of the model is developed. The estimation is achieved by minimizing an error function with a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed using a particular model of mammalian cell culture as a case study, but it is generic for this class of bioprocesses. The case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies.
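A minimal sketch of the PSO-based estimation idea, fitting two kinetic parameters of a toy exponential-decay model to synthetic data instead of the paper's mammalian cell culture model (swarm settings are generic, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 5, 40)
k_true, a_true = 0.8, 2.0
y_obs = a_true * np.exp(-k_true * t) + 0.01 * rng.normal(size=t.size)

def sse(p):                              # error function to be minimized
    a, k = p
    return np.sum((a * np.exp(-k * t) - y_obs) ** 2)

# Standard global-best PSO (w=0.7, c1=c2=1.5), bounds [0, 5] on both parameters
npart, lo, hi = 30, 0.0, 5.0
x = rng.uniform(lo, hi, (npart, 2))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([sse(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random((npart, 2)), rng.random((npart, 2))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([sse(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
print(gbest)   # close to [2.0, 0.8]
```

The same loop applies to any bioprocess model: only `sse` changes, which is why derivative-free optimizers like PSO are attractive for kinetic parameter estimation.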

8. Improving filtering and prediction of spatially extended turbulent systems with model errors through stochastic parameter estimation

International Nuclear Information System (INIS)

Gershgorin, B.; Harlim, J.; Majda, A.J.

2010-01-01

The filtering and predictive skill for turbulent signals is often limited by the lack of information about the true dynamics of the system and by our inability to resolve the assumed dynamics with sufficiently high resolution using the current computing power. The standard approach is to use a simple yet rich family of constant parameters to account for model errors through parameterization. This approach can have significant skill by fitting the parameters to some statistical feature of the true signal; however in the context of real-time prediction, such a strategy performs poorly when intermittent transitions to instability occur. Alternatively, we need a set of dynamic parameters. One strategy for estimating parameters on the fly is a stochastic parameter estimation through partial observations of the true signal. In this paper, we extend our newly developed stochastic parameter estimation strategy, the Stochastic Parameterization Extended Kalman Filter (SPEKF), to filtering sparsely observed spatially extended turbulent systems which exhibit abrupt stability transition from time to time despite a stable average behavior. For our primary numerical example, we consider a turbulent system of externally forced barotropic Rossby waves with instability introduced through intermittent negative damping. We find high filtering skill of SPEKF applied to this toy model even in the case of very sparse observations (with only 15 out of the 105 grid points observed) and with unspecified external forcing and damping. Additive and multiplicative bias corrections are used to learn the unknown features of the true dynamics from observations. We also present a comprehensive study of predictive skill in the one-mode context including the robustness toward variation of stochastic parameters, imperfect initial conditions and finite ensemble effect. Furthermore, the proposed stochastic parameter estimation scheme applied to the same spatially extended Rossby wave system demonstrates
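The idea of augmenting the state with a stochastically evolving parameter can be sketched with a scalar toy model and an extended Kalman filter. This is a simplified stand-in, not the SPEKF algorithm itself; the damping value, forcing, and noise levels are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, N, gam_true = 0.01, 5000, 1.0
forcing = lambda k: np.sin(0.5 * dt * k)   # known external forcing keeps u excited

# Simulate the "true" signal and noisy observations of u only
u, u_traj = 1.0, []
for k in range(N):
    u += dt * (-gam_true * u + forcing(k))
    u_traj.append(u)
obs = np.array(u_traj) + 0.05 * rng.normal(size=N)

# EKF on the augmented state x = [u, gamma]; gamma follows a random walk
x = np.array([0.0, 0.2])                   # deliberately wrong initial gamma
P = np.diag([1.0, 1.0])
Q = np.diag([1e-4, 1e-5])                  # process noise (gamma allowed to drift)
R, H = 0.05 ** 2, np.array([[1.0, 0.0]])
for k in range(N):
    # Predict with the Jacobian of f(u, g) = [u + dt(-g u + F_k), g]
    F = np.array([[1 - dt * x[1], -dt * x[0]], [0.0, 1.0]])
    x = np.array([x[0] + dt * (-x[1] * x[0] + forcing(k)), x[1]])
    P = F @ P @ F.T + Q
    # Update with the scalar observation of u
    S = (H @ P @ H.T)[0, 0] + R
    K = (P @ H.T / S).ravel()
    x = x + K * (obs[k] - x[0])
    P = P - np.outer(K, H @ P)
print(x[1])   # estimated damping parameter
```

Tracking the damping as part of the state is what allows the filter to adapt when the true dynamics drift, which is the motivation for the stochastic parameterization in SPEKF.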

9. Colocated MIMO Radar: Beamforming, Waveform design, and Target Parameter Estimation

KAUST Repository

Jardak, Seifallah

2014-04-01

Thanks to its improved capabilities, the Multiple Input Multiple Output (MIMO) radar is attracting the attention of researchers and practitioners alike. Because it transmits orthogonal or partially correlated waveforms, this emerging technology outperforms the phased array radar by providing better parametric identifiability, achieving higher spatial resolution, and allowing more complex beampattern designs. To avoid jamming and enhance the signal-to-noise ratio, it is often of interest to maximize the transmitted power in a given region of interest and minimize it elsewhere. This problem is known as transmit beampattern design and is usually tackled as a two-step process: a transmit covariance matrix is first designed by solving a convex optimization problem, and then used to generate practical waveforms. In this work, we propose simple novel methods to generate correlated waveforms using finite alphabet constant- and non-constant-envelope symbols. To generate finite alphabet waveforms, the proposed method maps easily generated Gaussian random variables onto phase-shift-keying, pulse-amplitude, and quadrature-amplitude modulation schemes. For such mapping, the probability density function of the Gaussian random variables is divided into M regions, where M is the number of symbols in the corresponding modulation scheme. By exploiting the mapping function, the relationship between the cross-correlations of the Gaussian and finite alphabet symbols is derived. The second part of this thesis covers target parameter estimation. To determine the reflection coefficient, spatial location, and Doppler shift of a target, maximum likelihood estimation yields the best performance; however, it requires a two-dimensional search, so its computational complexity is prohibitively high. We therefore propose a reduced-complexity, optimum-performance algorithm which allows the two-dimensional fast Fourier transform to jointly estimate the spatial location
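The FFT-based search that replaces a costly grid scan can be illustrated in one dimension: a zero-padded FFT reduces frequency (Doppler-like) estimation to picking a spectral peak. The signal below is a noiseless synthetic return, not radar data:

```python
import numpy as np

fs, n = 1000.0, 256                    # sampling rate (Hz), number of samples
f_true = 123.0                         # unknown Doppler-like frequency to recover
t = np.arange(n) / fs
x = np.exp(2j * np.pi * f_true * t)    # complex exponential return (illustrative)

# Zero-padding refines the frequency grid, so estimation is a single peak pick
nfft = 8192
spec = np.abs(np.fft.fft(x, nfft))
f_est = np.fft.fftfreq(nfft, 1 / fs)[spec.argmax()]
print(f_est)
```

The estimate is accurate to within one padded-grid bin (fs/nfft ≈ 0.12 Hz here); a 2-D FFT extends the same trick to joint location-Doppler search.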

10. Estimation of the Alpha Factor Parameters Using the ICDE Database

Energy Technology Data Exchange (ETDEWEB)

Kang, Dae Il; Hwang, M. J.; Han, S. H

2007-04-15

Detailed common cause failure (CCF) analysis generally needs CCF event data from other nuclear power plants because CCF events occur rarely. KAERI has participated in the International Common Cause Failure Data Exchange (ICDE) project to obtain such data. The operating office of the ICDE project sent the CCF event data for EDGs to KAERI in December 2006. As a pilot study, we performed a detailed CCF analysis of the EDGs of Yonggwang Units 3 and 4 and Ulchin Units 3 and 4 using the ICDE database. There are two onsite EDGs for each NPP. When offsite power and the two onsite EDGs are unavailable, one alternate AC diesel generator (hereafter AAC) is provided. The two onsite EDGs and the AAC are manufactured by the same company, but they are designed differently. We estimated the alpha factors and the CCF probabilities for the case where the three EDGs were assumed to be identically designed, and for the case where they were assumed not to be. For the identical-design case, the double CCF probabilities of Yonggwang Units 3/4 and Ulchin Units 3/4 for 'fails to start' were estimated as 2.20E-4 and 2.10E-4, respectively, and the triple CCF probabilities as 2.39E-4 and 2.42E-4, respectively. As neither NPP has experienced a 'fails to run' event, Yonggwang Units 3/4 and Ulchin Units 3/4 have the same CCF probabilities: the estimated double and triple CCF probabilities for 'fails to run' are 4.21E-4 and 4.61E-4, respectively. Quantification results show that the system unavailability for the case where the three EDGs are identical is higher than that where the three EDGs are different; the former is about 3.4% higher than the latter. As a future study, a computerization of the CCF parameter estimation will be performed.

11. Estimation of genetic parameters for reproductive traits in Shall sheep.

Science.gov (United States)

2013-06-01

The objective of this study was to estimate genetic parameters for reproductive traits in Shall sheep. Data included 1,316 records on the reproductive performance of 395 Shall ewes from 41 sires and 136 dams, collected from 2001 to 2007 at the Shall breeding station in Qazvin province in northwest Iran. The studied traits were litter size at birth (LSB), litter size at weaning (LSW), litter mean weight per lamb born (LMWLB), litter mean weight per lamb weaned (LMWLW), total litter weight at birth (TLWB), and total litter weight at weaning (TLWW). Tests of significance for fixed effects in the statistical model were performed using the general linear model procedure of SAS. The effects of lambing year and ewe age at lambing were significant (P < 0.05) for the reproductive traits in Shall sheep.

12. Multiphase flow parameter estimation based on laser scattering

Science.gov (United States)

Vendruscolo, Tiago P.; Fischer, Robert; Martelli, Cicero; Rodrigues, Rômulo L. P.; Morales, Rigoberto E. M.; da Silva, Marco J.

2015-07-01

The flow of multiple constituents inside a pipe or vessel, known as multiphase flow, is commonly found in many industry branches. The measurement of the individual flow rates in such flow is still a challenge, which usually requires a combination of several sensor types. However, in many applications, especially in industrial process control, it is not necessary to know the absolute flow rate of the respective phases, but rather to continuously monitor flow conditions in order to quickly detect deviations from the desired parameters. Here we show how a simple and low-cost sensor design can achieve this, by using machine-learning techniques to distinguish the characteristic patterns of oblique laser light scattered at the phase interfaces. The sensor is capable of estimating individual phase fluxes (as well as their changes) in multiphase flows and may be applied to safety applications due to its quick response time.
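A rough sketch of the machine-learning step, with fabricated stand-in signals in place of real scattered-light measurements and a generic random-forest classifier (the abstract does not specify which model was used):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

def fake_signal(regime, n=512):
    """Illustrative stand-in for a scattered-light time series (not real sensor data)."""
    t = np.arange(n)
    if regime == 0:   # e.g. bubbly-like: many small, fast fluctuations
        return 0.2 * rng.normal(size=n) + 0.1 * np.sin(0.5 * t)
    return 1.0 * rng.normal(size=n) + np.sin(0.05 * t)   # slug-like: large, slow bursts

def features(sig):
    """Simple summary statistics used as classifier inputs."""
    return [sig.mean(), sig.std(), np.abs(np.diff(sig)).mean()]

X = np.array([features(fake_signal(r)) for r in range(2) for _ in range(200)])
y = np.repeat([0, 1], 200)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(Xtr, ytr)
print(clf.score(Xte, yte))
```

Classifying regimes from cheap signal statistics, rather than inverting for absolute flow rates, is exactly the monitoring shortcut the abstract describes.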

13. Estimating Phenomenological Parameters in Multi-Assets Markets

Science.gov (United States)

Raffaelli, Giacomo; Marsili, Matteo

Financial correlations exhibit non-trivial dynamic behavior. This is reproduced by a simple phenomenological model of a multi-asset financial market which takes into account the impact of portfolio investment on price dynamics. The model captures the fact that correlations determine the optimal portfolio but are in turn affected by investment based on it. Such feedback on correlations gives rise to an instability when the volume of investment exceeds a critical value. Close to the critical point the model exhibits dynamical correlations very similar to those observed in real markets. We discuss how the model's parameters can be estimated from real market data with a maximum likelihood principle. This confirms the main conclusion that real markets operate close to a dynamically unstable point.

14. Dynamic systems models new methods of parameter and state estimation

CERN Document Server

2016-01-01

This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

15. Cosmological Parameter Estimation with Large Scale Structure Observations

CERN Document Server

Di Dio, Enea; Durrer, Ruth; Lesgourgues, Julien

2014-01-01

We estimate the sensitivity of future galaxy surveys to cosmological parameters, using the redshift dependent angular power spectra of galaxy number counts, $C_\\ell(z_1,z_2)$, calculated with all relativistic corrections at first order in perturbation theory. We pay special attention to the redshift dependence of the non-linearity scale and present Fisher matrix forecasts for Euclid-like and DES-like galaxy surveys. We compare the standard $P(k)$ analysis with the new $C_\\ell(z_1,z_2)$ method. We show that for surveys with photometric redshifts the new analysis performs significantly better than the $P(k)$ analysis. For spectroscopic redshifts, however, the large number of redshift bins which would be needed to fully profit from the redshift information, is severely limited by shot noise. We also identify surveys which can measure the lensing contribution and we study the monopole, $C_0(z_1,z_2)$.
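The Fisher matrix machinery behind such forecasts can be sketched on a toy two-parameter model with numerical derivatives; the model form, fiducial values, and error bar below are illustrative stand-ins, not the survey setup of the paper:

```python
import numpy as np

# Toy "observable": model m(theta) evaluated at survey data points x
x = np.linspace(0.1, 1.0, 50)
sigma = 0.05                              # assumed measurement error per point

def model(theta):
    a, n = theta
    return a * x ** n                     # stand-in for a spectrum P(k) ∝ A k^n

theta_fid, eps = np.array([1.0, 0.96]), 1e-6
# Fisher matrix F_ij = sum_d dm/dtheta_i dm/dtheta_j / sigma^2
# (Gaussian likelihood with parameter-independent covariance)
grads = []
for i in range(2):
    dtheta = np.zeros(2); dtheta[i] = eps
    grads.append((model(theta_fid + dtheta) - model(theta_fid - dtheta)) / (2 * eps))
F = np.array([[np.sum(gi * gj) / sigma**2 for gj in grads] for gi in grads])
errors = np.sqrt(np.diag(np.linalg.inv(F)))   # marginalized 1-sigma forecasts
print(errors)
```

Inverting the full Fisher matrix (rather than taking 1/sqrt of its diagonal) is what marginalizes each forecast error over the other parameters.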

16. Multiphase flow parameter estimation based on laser scattering

International Nuclear Information System (INIS)

Vendruscolo, Tiago P; Fischer, Robert; Martelli, Cicero; Da Silva, Marco J; Rodrigues, Rômulo L P; Morales, Rigoberto E M

2015-01-01

The flow of multiple constituents inside a pipe or vessel, known as multiphase flow, is commonly found in many industry branches. The measurement of the individual flow rates in such flow is still a challenge, which usually requires a combination of several sensor types. However, in many applications, especially in industrial process control, it is not necessary to know the absolute flow rate of the respective phases, but rather to continuously monitor flow conditions in order to quickly detect deviations from the desired parameters. Here we show how a simple and low-cost sensor design can achieve this, by using machine-learning techniques to distinguish the characteristic patterns of oblique laser light scattered at the phase interfaces. The sensor is capable of estimating individual phase fluxes (as well as their changes) in multiphase flows and may be applied to safety applications due to its quick response time. (paper)

17. Review of methods for level density estimation from resonance parameters

International Nuclear Information System (INIS)

Froehner, F.H.

1983-01-01

A number of methods are available for statistical analysis of resonance parameter sets, i.e. for estimation of level densities and average widths with account of missing levels. The main categories are (i) methods based on theories of level spacings (orthogonal-ensemble theory, Dyson-Mehta statistics), (ii) methods based on comparison with simulated cross section curves (Monte Carlo simulation, Garrison's autocorrelation method), (iii) methods exploiting the observed neutron width distribution by means of Bayesian or more approximate procedures such as maximum-likelihood, least-squares or moment methods, with various recipes for the treatment of detection thresholds and resolution effects. The present review will concentrate on (iii) with the aim of clarifying the basic mathematical concepts and the relationship between the various techniques. Recent theoretical progress in the treatment of resolution effects, detectability thresholds and p-wave admixture is described. (Auth.)
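Category (iii), estimating the average width from a threshold-truncated width distribution by maximum likelihood, can be sketched for Porter-Thomas distributed widths (chi-squared with one degree of freedom). The threshold, sample size, and true average width below are synthetic choices for the example:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)
mean_width, w_th = 1.0, 0.05                       # true average width, detection threshold
w_all = mean_width * rng.chisquare(1, size=3000)   # Porter-Thomas widths
w_obs = w_all[w_all > w_th]                        # levels below threshold are missed

def nll(s):
    """Neg. log-likelihood of observed widths under threshold-truncated Porter-Thomas,
    where s is the average width (scale of the chi^2_1 distribution)."""
    logf = stats.chi2.logpdf(w_obs / s, df=1) - np.log(s)
    lognorm = stats.chi2.sf(w_th / s, df=1)        # P(w > w_th): truncation normalizer
    return -(logf.sum() - w_obs.size * np.log(lognorm))

res = optimize.minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded")
s_hat = res.x
missed_frac = stats.chi2.cdf(w_th / s_hat, df=1)   # estimated fraction of missed levels
print(s_hat, missed_frac)
```

The estimated missed fraction then corrects the observed level count, which is the basic route from resonance parameters to level densities with account of missing levels.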

18. Bayesian ensemble approach to error estimation of interatomic potentials

DEFF Research Database (Denmark)

Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.

2004-01-01

Using a Bayesian approach, a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates...

19. Transient analysis of intercalation electrodes for parameter estimation

Science.gov (United States)

Devan, Sheba

An essential part of integrating batteries as power sources in any application, be it a large scale automotive application or a small scale portable application, is an efficient Battery Management System (BMS). The combination of a battery with the microprocessor based BMS (called "smart battery") helps prolong the life of the battery by operating in the optimal regime and provides accurate information regarding the battery to the end user. The main purposes of BMS are cell protection, monitoring and control, and communication between different components. These purposes are fulfilled by tracking the change in the parameters of the intercalation electrodes in the batteries. Consequently, the functions of the BMS should be prompt, which requires the methodology of extracting the parameters to be efficient in time. The traditional transient techniques applied so far may not be suitable due to reasons such as the inability to apply these techniques when the battery is under operation, long experimental time, etc. The primary aim of this research work is to design a fast, accurate and reliable technique that can be used to extract parameter values of the intercalation electrodes. A methodology based on analysis of the short time response to a sinusoidal input perturbation, in the time domain is demonstrated using a porous electrode model for an intercalation electrode. It is shown that the parameters associated with the interfacial processes occurring in the electrode can be determined rapidly, within a few milliseconds, by measuring the response in the transient region. The short time analysis in the time domain is then extended to a single particle model that involves bulk diffusion in the solid phase in addition to interfacial processes. A systematic procedure for sequential parameter estimation using sensitivity analysis is described. Further, the short time response and the input perturbation are transformed into the frequency domain using Fast Fourier Transform

20. Estimation of real-time runway surface contamination using flight data recorder parameters

Science.gov (United States)

Curry, Donovan

Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise the longitudinal, lateral and normal forces due to landing are calculated along with the individual deceleration components present when an aircraft comes to rest during ground roll. In order to validate this hypothesis, a six-degree-of-freedom aircraft model had to be created and landing tests had to be simulated on different surfaces. The simulated aircraft model includes a high-fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of the unknown parameters is also presented in this research effort. With all needed parameters, a comparison and validation with simulated and estimated data, under different runway conditions, is performed. Finally, this report presents the results of a sensitivity analysis in order to provide a measure of the reliability of the analytic estimation process. Linear and non-linear sensitivity analyses have been performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to give a reasonably accurate estimate when compared to the simulated friction coefficient. This also holds when the FDR and estimated parameters are subjected to white noise and when crosswind is introduced to the simulation. After the linear analysis the

1. Project Parameter Estimation on the Basis of an Erp Database

Directory of Open Access Journals (Sweden)

Relich Marcin

2013-12-01

Full Text Available Nowadays, more and more enterprises are using Enterprise Resource Planning (ERP) systems, which can also be used to plan and control the development of new products. In order to obtain a project schedule, certain parameters (e.g. duration) have to be specified in an ERP system. These parameters can be defined by the employees according to their knowledge, or can be estimated on the basis of data from previously completed projects. This paper investigates using an ERP database to identify those variables that have a significant influence on the duration of a project phase. In the paper, a model of knowledge discovery from an ERP database is proposed. The presented method contains four stages of the knowledge discovery process: data selection, data transformation, data mining, and interpretation of patterns in the context of new product development. Among data mining techniques, a fuzzy neural system is chosen to seek relationships on the basis of data from completed projects stored in an ERP system.

2. Estimation of fracture parameters using elastic full-waveform inversion

KAUST Repository

Zhang, Zhendong

2017-08-17

Current methodologies to characterize fractures at the reservoir scale have serious limitations in spatial resolution and suffer from uncertainties in the inverted parameters. Here, we propose to estimate the spatial distribution and physical properties of fractures using full-waveform inversion (FWI) of multicomponent surface seismic data. An effective orthorhombic medium with five clusters of vertical fractures distributed in a checkerboard fashion is used to test the algorithm. A shape regularization term is added to the objective function to improve the estimation of the fracture azimuth, which is otherwise poorly constrained. The cracks are assumed to be penny-shaped to reduce the nonuniqueness in the inverted fracture weaknesses and achieve a faster convergence. To better understand the inversion results, we analyze the radiation patterns induced by the perturbations in the fracture weaknesses and orientation. Due to the high-resolution potential of elastic FWI, the developed algorithm can recover the spatial fracture distribution and identify localized “sweet spots” of intense fracturing. However, the fracture azimuth can be resolved only using long-offset data.

3. Customized Steady-State Constraints for Parameter Estimation in Non-Linear Ordinary Differential Equation Models.

Science.gov (United States)

Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel

2016-01-01

Ordinary differential equation models have become a widespread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time, multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization.

4. Estimating variability in functional images using a synthetic resampling approach

International Nuclear Information System (INIS)

Maitra, R.; O'Sullivan, F.

1996-01-01

Functional imaging of biologic parameters like in vivo tissue metabolism is made possible by Positron Emission Tomography (PET). Many techniques, such as mixture analysis, have been suggested for extracting such images from dynamic sequences of reconstructed PET scans. Methods for assessing the variability in these functional images are of scientific interest. The nonlinearity of the methods used in the mixture analysis approach makes analytic formulae for estimating variability intractable. The usual resampling approach is infeasible because of the prohibitive computational effort in simulating a number of sinogram datasets, applying image reconstruction, and generating parametric images for each replication. Here we introduce an approach that approximates the distribution of the reconstructed PET images by a Gaussian random field and generates synthetic realizations in the imaging domain. This eliminates the reconstruction steps in generating each simulated functional image and is therefore practical. Results of experiments done to evaluate the approach on a model one-dimensional problem are very encouraging. Post-processing of the estimated variances is seen to improve the accuracy of the estimation method. Mixture analysis is used to estimate functional images; however, the suggested approach is general enough to extend to other parametric imaging methods
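A minimal sketch of the synthetic resampling idea: approximate the reconstructed image by a Gaussian field with an assumed mean and covariance, draw realizations directly in image space, and propagate them through a nonlinear functional. All numbers below are illustrative, not derived from PET data:

```python
import numpy as np

rng = np.random.default_rng(6)
# 1-D "reconstructed image" model: a mean profile plus smooth spatial covariance
npix = 64
xs = np.arange(npix)
mean_img = np.exp(-0.5 * ((xs - 32) / 6.0) ** 2)
cov = 0.01 * np.exp(-np.abs(xs[:, None] - xs[None, :]) / 4.0)  # assumed correlation

def functional(img):
    """A nonlinear 'parametric image' step, e.g. a peak-to-background ratio."""
    return img.max() / (np.abs(img[:10]).mean() + 1e-6)

# Synthetic resampling: realizations drawn in image space, no reconstruction step
draws = rng.multivariate_normal(mean_img, cov, size=500)
vals = np.array([functional(d) for d in draws])
print(vals.mean(), vals.std())   # variability estimate for the functional
```

Because each replicate is drawn directly in the imaging domain, the expensive sinogram-simulation and reconstruction steps of a conventional bootstrap are skipped entirely.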

5. A sensitivity analysis approach to optical parameters of scintillation detectors

International Nuclear Information System (INIS)

Ghal-Eh, N.; Koohi-Fayegh, R.

2008-01-01

In this study, an extended version of the Monte Carlo light transport code, PHOTRACK, has been used for a sensitivity analysis to estimate the importance of different wavelength-dependent parameters in the modelling of the light collection process in scintillators.

6. Facial motion parameter estimation and error criteria in model-based image coding

Science.gov (United States)

Liu, Yunhai; Yu, Lu; Yao, Qingdong

2000-04-01

Model-based image coding has been given extensive attention due to its high subjective image quality and low bit-rates. But the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness and end-nodes outside the blocks of the eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes in the feature point positions of the jaw. The areas of non-rigid expression motion can be rebuilt using a block-pasting method. An estimation approach for motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and the error function of the contour transition-turn rate are used as quality criteria. The criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.

7. Estimation of genetic parameters for reproductive traits in alpacas.

Science.gov (United States)

Cruz, A; Cervantes, I; Burgos, A; Morante, R; Gutiérrez, J P

2015-12-01

One of the main deficiencies affecting animal breeding programs in Peruvian alpacas is the low reproductive performance, leading to a low number of animals available to select from and strongly decreasing the selection intensity. Some reproductive traits could be improved by artificial selection, but very little information about genetic parameters exists for these traits in this species. The aim of this study was to estimate genetic parameters for six reproductive traits in alpacas of both the Suri (SU) and Huacaya (HU) ecotypes, as well as their genetic relationship with fiber and morphological traits. A dataset from the Pacomarca experimental farm, collected between 2000 and 2014, was used. Numbers of records for age at first service (AFS), age at first calving (AFC), copulation time (CT), pregnancy diagnosis (PD), gestation length (GL), and calving interval (CI) were, respectively, 1704, 854, 19,770, 5874, 4290 and 934. The pedigree consisted of 7742 animals. For the reproductive traits, the model of analysis included additive and residual random effects for all traits, and also a permanent environmental effect for the CT, PD, GL and CI traits, with color and year of recording as fixed effects for all reproductive traits, and also age at mating and sex of calf for the GL trait. Estimated heritabilities, respectively for HU and SU, were 0.19 and 0.09 for AFS, 0.45 and 0.59 for AFC, 0.04 and 0.05 for CT, 0.07 and 0.05 for PD, 0.12 and 0.20 for GL, and 0.14 and 0.09 for CI. Genetic correlations between them ranged from -0.96 to 0.70. No important genetic correlations were found between reproductive traits and fiber or morphological traits in HU. However, some moderate favorable genetic correlations were found between reproductive traits and both fiber and morphological traits in SU. According to the estimated genetic correlations, some reproductive traits might be included as additional selection criteria in HU. Copyright © 2015 Elsevier B.V. All rights reserved.

8. Investigating Separate and Concurrent Approaches for Item Parameter Drift in 3PL Item Response Theory Equating

Science.gov (United States)

Arce-Ferrer, Alvaro J.; Bulut, Okan

2017-01-01

This study examines separate and concurrent approaches to combine the detection of item parameter drift (IPD) and the estimation of scale transformation coefficients in the context of the common item nonequivalent groups design with the three-parameter item response theory equating. The study uses real and synthetic data sets to compare the two…

9. Choice of the parameters of the cusum algorithms for parameter estimation in the markov modulated poisson process

OpenAIRE

Burkatovskaya, Yuliya Borisovna; Kabanova, T.; Khaustov, Pavel Aleksandrovich

2016-01-01

The CUSUM algorithm for controlling chain state switching in the Markov modulated Poisson process was investigated via simulation. Recommendations concerning the parameter choice were given subject to characteristics of the process. A procedure for estimating the process parameters was described.
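A CUSUM sketch for this setting, with assumed two-state rates and threshold (the paper's recommendations concern precisely how such parameters should be chosen): the statistic accumulates log-likelihood-ratio increments and raises an alarm when it crosses the threshold.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two-state MMPP observed as Poisson counts in discrete bins; the
# chain switches from the low-rate to the high-rate state at bin 200.
lam0, lam1 = 2.0, 5.0   # assumed rates of the two chain states
counts = np.concatenate([rng.poisson(lam0, 200), rng.poisson(lam1, 200)])

# Log-likelihood-ratio increments for lam1 vs lam0; the threshold h
# is the tuning parameter studied via simulation in the paper.
llr = counts * np.log(lam1 / lam0) - (lam1 - lam0)
h = 10.0
S, alarm = 0.0, None
for i, inc in enumerate(llr):
    S = max(0.0, S + inc)   # CUSUM recursion, clipped at zero
    if S > h:
        alarm = i
        break

print(alarm)   # alarm index; the change actually occurs at bin 200
```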

10. BWR level estimation using Kalman Filtering approach

International Nuclear Information System (INIS)

Garner, G.; Divakaruni, S.M.; Meyer, J.E.

1986-01-01

Work is in progress on development of a system for Boiling Water Reactor (BWR) vessel level validation and failure detection. The levels validated include the liquid level both inside and outside the core shroud. This work is a major part of a larger effort to develop a complete system for BWR signal validation. The demonstration plant is the Oyster Creek BWR. Liquid level inside the core shroud is not directly measured during full power operation. This level must be validated using measurements of other quantities and analytic models. Given the available sensors, analytic models for level that are based on mass and energy balances can contain open integrators. When such a model is driven by noisy measurements, the model predicted level will deviate from the true level over time. To validate the level properly and to avoid false alarms, the open integrator must be stabilized. In addition, plant parameters will change slowly with time. The respective model must either account for these plant changes or be insensitive to them to avoid false alarms and maintain sensitivity to true failures of level instrumentation. Problems are addressed here by combining the extended Kalman Filter and Parity Space Decision/Estimator. The open integrator is stabilized by integrating from the validated estimate at the beginning of each sampling interval, rather than from the model predicted value. The model is adapted to slow plant/sensor changes by updating model parameters on-line.
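A sketch (not the Oyster Creek implementation) of why the open integrator must be stabilized: a level model driven by a noisy flow measurement drifts as a random walk, while restarting each sampling interval from the previous validated estimate keeps the error bounded. The gain K is an assumed stand-in for the filter gain.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, K = 1.0, 0.3
true_level = model_level = validated = 5.0

for _ in range(200):
    true_level += 0.1 * dt                          # true inflow 0.1
    inflow = 0.1 + 0.05 * rng.standard_normal()     # noisy flow sensor
    model_level += inflow * dt                      # open integrator: noise accumulates
    predicted = validated + inflow * dt             # restart from validated estimate
    level_meas = true_level + 0.2 * rng.standard_normal()
    validated = predicted + K * (level_meas - predicted)

# Drift of the free integrator vs. error of the stabilized estimate
print(abs(model_level - true_level), abs(validated - true_level))
```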

11. Bioinspired Computational Approach to Missing Value Estimation

Directory of Open Access Journals (Sweden)

2018-01-01

Full Text Available Missing data occur when values of variables in a dataset are not stored. Estimating these missing values is a significant step during the data cleansing phase of a big data management approach. Missing data may result from nonresponse or omitted entries. If missing data are not handled properly, they may produce inaccurate results during data analysis. Although traditional methods such as the maximum likelihood method can extrapolate missing values, this paper proposes a bioinspired method based on the behavior of birds, specifically the Kestrel bird. This paper describes the behavior and characteristics of the Kestrel bird, a bioinspired approach, in modeling an algorithm to estimate missing values. The proposed algorithm (KSA) was compared with the WSAMP, Firefly, and BAT algorithms. The results were evaluated using the mean absolute error (MAE). Statistical tests (the Wilcoxon signed-rank test and the Friedman test) were conducted to assess the performance of the algorithms. The results of the Wilcoxon test indicate that time does not have a significant effect on performance and that the quality of estimation between the paired algorithms was significant; the results of the Friedman test ranked KSA as the best evolutionary algorithm.
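The KSA algorithm itself is not reproduced here; the sketch below only illustrates the MAE evaluation protocol on a toy imputation baseline (mean imputation), with all data synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dataset with entries removed at random; a full KSA implementation
# is beyond this sketch, so a mean-imputation baseline stands in for
# the algorithm under evaluation.
true = rng.normal(10.0, 2.0, size=100)
mask = rng.random(100) < 0.2           # roughly 20% of entries missing
observed = np.where(mask, np.nan, true)

imputed = observed.copy()
imputed[mask] = np.nanmean(observed)   # baseline estimate for missing cells

# MAE over the missing entries only, as in the paper's comparison
mae = np.mean(np.abs(imputed[mask] - true[mask]))
print(mae)
```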

12. Improving Distribution Resiliency with Microgrids and State and Parameter Estimation

Energy Technology Data Exchange (ETDEWEB)

Tuffner, Francis K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Williams, Tess L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Schneider, Kevin P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elizondo, Marcelo A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sun, Yannan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Chen-Ching [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Yin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gourisetti, Sri Nikhil Gup [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

2015-09-30

Modern society relies on low-cost reliable electrical power, both to maintain industry and to provide basic social services to the populace. When major disturbances occur, such as Hurricane Katrina or Hurricane Sandy, the nation’s electrical infrastructure can experience significant outages. To help prevent the spread of these outages, as well as to facilitate faster restoration after an outage, various approaches to improving the resiliency of the power system are needed. Two such approaches are breaking the system into smaller microgrid sections, and gaining improved insight into operations to detect failures or mis-operations before they become critical. By breaking the system into smaller microgrid islands, power can be maintained in smaller areas where distributed generation and energy storage resources are still available but bulk power generation is no longer connected. Additionally, microgrid systems can maintain service to local pockets of customers when there has been extensive damage to the local distribution system. However, microgrids are grid-connected a majority of the time, and implementing and operating a grid-connected microgrid is much different from operating an islanded one. This report discusses work conducted by the Pacific Northwest National Laboratory to develop improvements to simulation tools that capture the characteristics of microgrids and show how they can be used to develop new operational strategies. These operational strategies reduce the cost of microgrid operation and increase the reliability and resilience of the nation’s electricity infrastructure. In addition to the ability to break the system into microgrids, improved observability into the state of the distribution grid can make the power system more resilient. State estimation on the transmission system already provides great insight into grid operations and detecting abnormal conditions by leveraging existing measurements. These transmission-level approaches are expanded to using

13. Data adaptive control parameter estimation for scaling laws

Energy Technology Data Exchange (ETDEWEB)

Dinklage, Andreas [Max-Planck-Institut fuer Plasmaphysik, Teilinstitut Greifswald, Wendelsteinstrasse 1, D-17491 Greifswald (Germany); Dose, Volker [Max-Planck- Institut fuer Plasmaphysik, Boltzmannstrasse 2, D-85748 Garching (Germany)

2007-07-01

Bayesian experimental design quantifies the utility of data expressed by the information gain. Data adaptive exploration determines the expected utility of a single new measurement using existing data and a data descriptive model. In other words, the method can be used for experimental planning. As an example for a multivariate linear case, we apply this method to constituting scaling laws of fusion devices. In detail, the scaling of the stellarator W7-AS is examined for a subset of ι = 1/3 data. The impact of the existing data on the scaling exponents is presented. Furthermore, regions of high utility in control parameter space are identified which improve the accuracy of the scaling law. This approach is not restricted to the presented example, but can also be extended to non-linear models.
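For a linear-Gaussian model the expected information gain of one new measurement has a closed form, which makes the data-adaptive exploration idea easy to sketch (toy numbers, not the W7-AS dataset):

```python
import numpy as np

# Linear scaling law y = X @ beta + noise (in log-transformed
# variables). For a linear-Gaussian model the expected information
# gain of a new measurement at design point x has the closed form
#   0.5 * log(1 + x @ Sigma @ x / sigma2),
# where Sigma is the current posterior covariance of the exponents.
sigma2 = 0.1                                         # assumed noise variance
X = np.array([[1.0, 0.2], [1.0, 0.4], [1.0, 0.5]])   # existing design points
Sigma = np.linalg.inv(X.T @ X / sigma2 + np.eye(2))  # unit-Gaussian-prior posterior

# Rank candidate control-parameter settings by expected utility
candidates = np.array([[1.0, 0.3], [1.0, 1.5], [1.0, 3.0]])
gains = [0.5 * np.log(1.0 + x @ Sigma @ x / sigma2) for x in candidates]
best = candidates[int(np.argmax(gains))]
print(best)   # the point farthest from the existing data is most informative
```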

14. Estimation of common cause failure parameters for diesel generators

International Nuclear Information System (INIS)

Tirira, J.; Lanore, J.M.

2002-10-01

This paper presents a summary of results concerning the feedback analysis of French emergency diesel generators (EDG). The database of common cause failures for EDGs has been updated; the data collected cover a period of 10 years. Several tens of latent common cause failure (CCF) events were identified. Of the events collected, most are potential CCFs; about 15% are characterized as complete CCFs. The database is organised following the structure proposed by the 'International Common Cause Data Exchange' (ICDE) project. Collected events are analyzed by failure mode and degree of failure. Qualitative analysis of root causes, coupling factors and corrective actions is presented. A quantitative analysis is in progress for evaluating CCF parameters, taking into account the average impact vector and the rate of independent failures. The interest of the average impact vector approach is that it makes it possible to take into account a wide experience feedback, not limited to complete CCFs but also including many events related to partial or potential CCFs. It has to be noted that there are no finalized quantitative conclusions yet, and analysis is in progress for evaluating diesel CCF parameters. Indeed, the numerical CCF coding of the events relies in part on subjective analysis, which requires a complete and detailed examination of each event. (authors)

15. Catalytic hydrolysis of ammonia borane: Intrinsic parameter estimation and validation

Energy Technology Data Exchange (ETDEWEB)

Basu, S.; Gore, J.P. [School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907-2088 (United States); School of Chemical Engineering, Purdue University, West Lafayette, IN 47907-2100 (United States); Energy Center in Discovery Park, Purdue University, West Lafayette, IN 47907-2022 (United States); Zheng, Y. [School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907-2088 (United States); Energy Center in Discovery Park, Purdue University, West Lafayette, IN 47907-2022 (United States); Varma, A.; Delgass, W.N. [School of Chemical Engineering, Purdue University, West Lafayette, IN 47907-2100 (United States); Energy Center in Discovery Park, Purdue University, West Lafayette, IN 47907-2022 (United States)

2010-04-02

Ammonia borane (AB) hydrolysis is a potential process for on-board hydrogen generation. This paper presents isothermal hydrogen release rate measurements of dilute AB (1 wt%) hydrolysis in the presence of carbon supported ruthenium catalyst (Ru/C). The ranges of investigated catalyst particle sizes and temperature were 20-181 µm and 26-56 °C, respectively. The obtained rate data included both kinetic and diffusion-controlled regimes, where the latter was evaluated using the catalyst effectiveness approach. A Langmuir-Hinshelwood kinetic model was adopted to interpret the data, with intrinsic kinetic and diffusion parameters determined by a nonlinear fitting algorithm. The AB hydrolysis was found to have an activation energy of 60.4 kJ mol⁻¹, pre-exponential factor of 1.36 × 10¹⁰ mol (kg-cat)⁻¹ s⁻¹, adsorption energy of -32.5 kJ mol⁻¹, and effective mass diffusion coefficient of 2 × 10⁻¹⁰ m² s⁻¹. These parameters, obtained under dilute AB conditions, were validated by comparing measurements with simulations of AB consumption rates during the hydrolysis of concentrated AB solutions (5-20 wt%), and also with the axial temperature distribution in a 0.5 kW continuous-flow packed-bed reactor. (author)
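A sketch of a Langmuir-Hinshelwood rate expression using the intrinsic parameters reported in the abstract; the adsorption pre-exponential is not given there and is an invented placeholder, so the absolute rates are illustrative only.

```python
import numpy as np

R = 8.314  # J/(mol K)

# Langmuir-Hinshelwood form r = k(T) * K(T)*C / (1 + K(T)*C) with the
# reported intrinsic parameters: Ea = 60.4 kJ/mol, A = 1.36e10
# mol (kg-cat)^-1 s^-1, adsorption energy -32.5 kJ/mol. The adsorption
# pre-exponential K0 is NOT given in the abstract; it is a hypothetical
# placeholder here.
A, Ea = 1.36e10, 60.4e3
dH_ads = -32.5e3
K0 = 1.0e-3  # hypothetical adsorption pre-exponential

def rate(C, T):
    k = A * np.exp(-Ea / (R * T))          # Arrhenius rate constant
    K = K0 * np.exp(-dH_ads / (R * T))     # adsorption equilibrium constant
    return k * K * C / (1.0 + K * C)

# Rate rises with temperature in both the kinetic and saturated regimes
print(rate(300.0, 299.0), rate(300.0, 329.0))
```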

16. Genetic parameter estimation of reproductive traits of Litopenaeus vannamei

Science.gov (United States)

Tan, Jian; Kong, Jie; Cao, Baoxiang; Luo, Kun; Liu, Ning; Meng, Xianhong; Xu, Shengyu; Guo, Zhaojia; Chen, Guoliang; Luan, Sheng

2017-02-01

In this study, the heritability, repeatability, phenotypic correlation, and genetic correlation of reproductive and growth traits of L. vannamei were investigated and estimated. Eight traits of 385 shrimps from forty-two families were measured: the number of eggs (EN), number of nauplii (NN), egg diameter (ED), spawning frequency (SF), spawning success (SS), female body weight (BW) and body length (BL) at insemination, and condition factor (K). A total of 519 spawning records, including multiple spawnings, and 91 non-spawning records were collected. The genetic parameters were estimated using an animal model, a multinomial logit model (for SF), and a sire-dam and probit model (for SS). Because there were repeated records, permanent environmental effects were included in the models. The heritability estimates for BW, BL, EN, NN, ED, SF, SS, and K were 0.49 ± 0.14, 0.51 ± 0.14, 0.12 ± 0.08, 0, 0.01 ± 0.04, 0.06 ± 0.06, 0.18 ± 0.07, and 0.10 ± 0.06, respectively. The genetic correlation was 0.99 ± 0.01 between BW and BL, 0.90 ± 0.19 between BW and EN, 0.22 ± 0.97 between BW and ED, -0.77 ± 1.14 between EN and ED, and -0.27 ± 0.36 between BW and K. The heritability of EN estimated without a covariate was 0.12 ± 0.08, and the genetic correlation was 0.90 ± 0.19 between BW and EN, indicating that improving BW may be used in selection programs to genetically improve the reproductive output of L. vannamei during breeding. For EN, the data were also analyzed using body weight as a covariate (EN-2). The heritability of EN-2 was 0.03 ± 0.05, indicating that it is difficult to improve the reproductive output by genetic improvement alone. Furthermore, excessive pursuit of such selection often comes at the expense of growth speed. Therefore, the selection of high-performance spawners using BW and SS may be an important strategy to improve nauplii production.

17. A termination criterion for parameter estimation in stochastic models in systems biology.

Science.gov (United States)

Zimmer, Christoph; Sahle, Sven

2015-11-01

Parameter estimation procedures are a central aspect of modeling approaches in systems biology. They are often computationally expensive, especially when the models take stochasticity into account. Typically parameter estimation involves the iterative optimization of an objective function that describes how well the model fits some measured data with a certain set of parameter values. In order to limit the computational expenses it is therefore important to apply an adequate stopping criterion for the optimization process, so that the optimization continues at least until a reasonable fit is obtained, but not much longer. In the case of stochastic modeling, at least some parameter estimation schemes involve an objective function that is itself a random variable. This means that plain convergence tests are not a priori suitable as stopping criteria. This article suggests a termination criterion suited to optimization problems in parameter estimation arising from stochastic models in systems biology. The termination criterion is developed for optimization algorithms that involve populations of parameter sets, such as particle swarm or evolutionary algorithms. It is based on comparing the variance of the objective function over the whole population of parameter sets with the variance of repeated evaluations of the objective function at the best parameter set. The performance is demonstrated for several different algorithms. To test the termination criterion we choose polynomial test functions as well as systems biology models such as an Immigration-Death model and a bistable genetic toggle switch. The genetic toggle switch is an especially challenging test case as it shows a stochastic switching between two steady states which is qualitatively different from the model behavior in a deterministic model. Copyright © 2015. Published by Elsevier Ireland Ltd.
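The comparison at the heart of the criterion can be sketched as follows, with a toy noisy objective and a population late in an optimization run (a sketch in the spirit of the article, not its exact rule):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stochastic objective: the "model fit" is itself a random variable.
# Hypothetical noisy objective around a deterministic landscape.
def objective(theta):
    return (theta - 2.0) ** 2 + 0.1 * rng.standard_normal()

# Population-based optimizer state (e.g. particle swarm) late in a run:
population = 2.0 + 0.01 * rng.standard_normal(20)
pop_values = np.array([objective(t) for t in population])

# Termination test: stop once the spread of objective values across the
# population is comparable to the noise of repeated evaluations at the
# best parameter set (the factor 2 is an assumed slack).
best = population[np.argmin(pop_values)]
repeat_values = np.array([objective(best) for _ in range(20)])

terminate = np.var(pop_values) <= 2.0 * np.var(repeat_values)
print(bool(terminate))
```

A plain convergence test on `pop_values` would never fire here, because the evaluation noise keeps the values fluctuating even at the optimum.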

18. Markov Chain Monte Carlo (MCMC) methods for parameter estimation of a novel hybrid redundant robot

International Nuclear Information System (INIS)

Wang Yongbo; Wu Huapeng; Handroos, Heikki

2011-01-01

This paper presents a statistical method for the calibration of a redundantly actuated hybrid serial-parallel robot, the IWR (Intersector Welding Robot). The robot under study will be used to carry out welding, machining, and remote handling for the assembly of the vacuum vessel of the International Thermonuclear Experimental Reactor (ITER). The robot has ten degrees of freedom (DOF), among which six DOF are contributed by the parallel mechanism and the rest by the serial mechanism. In this paper, a kinematic error model involving 54 unknown geometrical error parameters is developed for the proposed robot. Based on this error model, the mean values of the unknown parameters are statistically analyzed and estimated by means of the Markov Chain Monte Carlo (MCMC) approach. A computer simulation is conducted by introducing random geometric errors and measurement poses which represent the corresponding real physical behaviors. The simulation results for the marginal posterior distributions of the estimated model parameters indicate that the method is reliable and robust.
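A minimal Metropolis-Hastings sketch of the estimation step, reduced to a single hypothetical offset parameter rather than the robot's 54 geometric error parameters: the posterior mean is read off the chain after burn-in.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical calibration problem: one unknown offset, observed
# through noisy simulated measurements (noise std 0.1 assumed known).
true_offset = 0.7
data = true_offset + 0.1 * rng.standard_normal(50)

def log_post(theta):
    # Flat prior; Gaussian likelihood
    return -0.5 * np.sum((data - theta) ** 2) / 0.1 ** 2

# Random-walk Metropolis-Hastings; the proposal scale is a tuning knob
theta, samples = 0.0, []
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

posterior_mean = np.mean(samples[1000:])  # discard burn-in
print(posterior_mean)
```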

19. State and parameter estimation in nonlinear systems as an optimal tracking problem

International Nuclear Information System (INIS)

Creveling, Daniel R.; Gill, Philip E.; Abarbanel, Henry D.I.

2008-01-01

In verifying and validating models of nonlinear processes it is important to incorporate information from observations in an efficient manner. Using the idea of synchronization of nonlinear dynamical systems, we present a framework for connecting a data signal with a model in a way that minimizes the required coupling yet allows the estimation of unknown parameters in the model. The need to evaluate unknown parameters in models of nonlinear physical, biophysical, and engineering systems occurs throughout the development of phenomenological or reduced models of dynamics. Our approach builds on existing work that uses synchronization as a tool for parameter estimation. We address some of the critical issues in that work and provide a practical framework for finding an accurate solution. In particular, we show the equivalence of this problem to that of tracking within an optimal control framework. This equivalence allows the application of powerful numerical methods that provide robust practical tools for model development and validation

20. Parameter estimation of a delay dynamical system using synchronization in presence of noise

International Nuclear Information System (INIS)

Rakshit, Biswambhar; Chowdhury, A. Roy; Saha, Papri

2007-01-01

A method of parameter estimation of a time delay chaotic system through synchronization is discussed. It is assumed that the observed data are always affected by some white Gaussian noise. A least squares approach is used to derive a system of differential equations which governs the temporal evolution of the parameters. This system of equations, together with the coupled delay dynamical systems, when integrated, leads to asymptotic convergence to the value of the parameter along with synchronization of the two system variables. The method is quite effective for estimating the delay time, which is an important characteristic feature of a delay dynamical system. The procedure is quite robust in the presence of noise.

1. Non-Cooperative Target Imaging and Parameter Estimation with Narrowband Radar Echoes

Directory of Open Access Journals (Sweden)

Chun-mao Yeh

2016-01-01

Full Text Available This study focuses on the rotating target imaging and parameter estimation with narrowband radar echoes, which is essential for radar target recognition. First, a two-dimensional (2D imaging model with narrowband echoes is established in this paper, and two images of the target are formed on the velocity-acceleration plane at two neighboring coherent processing intervals (CPIs. Then, the rotating velocity (RV is proposed to be estimated by utilizing the relationship between the positions of the scattering centers among two images. Finally, the target image is rescaled to the range-cross-range plane with the estimated rotational parameter. The validity of the proposed approach is confirmed using numerical simulations.

2. Covariance-Based Estimation from Multisensor Delayed Measurements with Random Parameter Matrices and Correlated Noises

Directory of Open Access Journals (Sweden)

R. Caballero-Águila

2014-01-01

Full Text Available The optimal least-squares linear estimation problem is addressed for a class of discrete-time multisensor linear stochastic systems subject to randomly delayed measurements with different delay rates. For each sensor, a different binary sequence is used to model the delay process. The measured outputs are perturbed by both random parameter matrices and one-step autocorrelated and cross correlated noises. Using an innovation approach, computationally simple recursive algorithms are obtained for the prediction, filtering, and smoothing problems, without requiring full knowledge of the state-space model generating the signal process, but only the information provided by the delay probabilities and the mean and covariance functions of the processes (signal, random parameter matrices, and noises involved in the observation model. The accuracy of the estimators is measured by their error covariance matrices, which allow us to analyze the estimator performance in a numerical simulation example that illustrates the feasibility of the proposed algorithms.

3. Fault estimation - A standard problem approach

DEFF Research Database (Denmark)

Stoustrup, J.; Niemann, Hans Henrik

2002-01-01

This paper presents a range of optimization based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis...... problems can be solved by standard optimization techniques. The proposed methods include (1) fault diagnosis (fault estimation, (FE)) for systems with model uncertainties; FE for systems with parametric faults, and FE for a class of nonlinear systems. Copyright...

4. Signal recognition and parameter estimation of BPSK-LFM combined modulation

Science.gov (United States)

Long, Chao; Zhang, Lin; Liu, Yu

2015-07-01

Intra-pulse analysis plays an important role in electronic warfare. Intra-pulse feature extraction focuses on primary parameters such as instantaneous frequency, modulation, and symbol rate. In this paper, automatic modulation recognition and feature extraction for combined BPSK-LFM modulation signals based on a decision-theoretic approach is studied. The simulation results show good recognition performance and high estimation precision, and the system is easy to implement.

5. Parameter estimation for stiff deterministic dynamical systems via ensemble Kalman filter

International Nuclear Information System (INIS)

Arnold, Andrea; Calvetti, Daniela; Somersalo, Erkki

2014-01-01

A commonly encountered problem in numerous areas of applications is to estimate the unknown coefficients of a dynamical system from direct or indirect observations at discrete times of some of the components of the state vector. A related problem is to estimate unobserved components of the state. An egregious example of such a problem is provided by metabolic models, in which the numerous model parameters and the concentrations of the metabolites in tissue are to be estimated from concentration data in the blood. A popular method for addressing similar questions in stochastic and turbulent dynamics is the ensemble Kalman filter (EnKF), a particle-based filtering method that generalizes classical Kalman filtering. In this work, we adapt the EnKF algorithm for deterministic systems in which the numerical approximation error is interpreted as a stochastic drift with variance based on classical error estimates of numerical integrators. This approach, which is particularly suitable for stiff systems where the stiffness may depend on the parameters, allows us to effectively exploit the parallel nature of particle methods. Moreover, we demonstrate how spatial prior information about the state vector, which helps the stability of the computed solution, can be incorporated into the filter. The viability of the approach is shown by computed examples, including a metabolic system modeling an ischemic episode in skeletal muscle, with a high number of unknown parameters. (paper)
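A compact EnKF sketch in this spirit, jointly estimating the state and one parameter of a toy deterministic ODE, with the Euler integration error represented as a small stochastic drift (all numbers assumed; the update is a simplified scalar-observation form without observation perturbation):

```python
import numpy as np

rng = np.random.default_rng(5)

# Joint state/parameter estimation on dx/dt = -a*x: each ensemble
# member carries the augmented state (x, a).
a_true, dt, obs_std = 1.5, 0.05, 0.05
x_true = 1.0
ens = np.column_stack([rng.normal(1.0, 0.1, 100),    # x ensemble
                       rng.normal(1.0, 0.5, 100)])   # a ensemble

for _ in range(100):
    x_true *= np.exp(-a_true * dt)
    y = x_true + obs_std * rng.standard_normal()
    # Forecast: Euler step plus a drift representing integration error
    ens[:, 0] -= ens[:, 1] * ens[:, 0] * dt
    ens[:, 0] += 1e-3 * rng.standard_normal(100)
    # Analysis: Kalman update built from ensemble covariances
    Hx = ens[:, 0]                              # observed component
    P_xy = np.cov(ens.T)[:, 0]                  # cov of (x, a) with Hx
    K = P_xy / (np.var(Hx) + obs_std ** 2)      # gain for a scalar obs
    ens += np.outer(y - Hx, K)

a_est = ens[:, 1].mean()
print(a_est)   # ensemble mean estimate of the parameter a
```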

6. A Central Composite Face-Centered Design for Parameters Estimation of PEM Fuel Cell Electrochemical Model

Directory of Open Access Journals (Sweden)

Khaled MAMMAR

2013-11-01

Full Text Available In this paper, a new approach based on Design of Experiments (DoE) methodology is used to estimate the optimal values of the unknown model parameters of a proton exchange membrane fuel cell (PEMFC). The proposed approach combines a central composite face-centered (CCF) design with a numerical PEMFC electrochemical model. Simulation results obtained using the electrochemical model help to predict the cell voltage in terms of inlet partial pressures of hydrogen and oxygen, stack temperature, and operating current. The model and the CCF design methodology are then used for parametric analysis of the electrochemical model, making it possible to evaluate the relative importance of each parameter to the simulation accuracy. Moreover, this methodology is able to define the exact values of the parameters from the manufacturer data. It was tested on the BCS 500-W PEM Generator, a stack rated at 500 W, manufactured by the American company BCS Technologies FC.
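The CCF design itself is straightforward to construct in coded units; the sketch below generates the design matrix for four factors (the factor names follow the abstract; everything else is generic):

```python
import itertools
import numpy as np

# Central composite face-centered (CCF) design in coded units for
# k factors: 2^k factorial corners, 2k axial points at +/-1 on the
# faces (alpha = 1, hence "face-centered"), plus center points.
def ccf_design(k, n_center=3):
    corners = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    axial = np.vstack([sign * row for row in np.eye(k) for sign in (-1.0, 1.0)])
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])

# Four coded factors, e.g. hydrogen and oxygen partial pressures,
# stack temperature, and operating current.
design = ccf_design(4)
print(design.shape)   # (2**4 + 2*4 + 3, 4) = (27, 4)
```

Because alpha = 1, every run stays inside the cubical operating region, which is convenient when factor limits are hard constraints of the stack.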

7. Modulating Function-Based Method for Parameter and Source Estimation of Partial Differential Equations

KAUST Repository

Asiri, Sharefa M.

2017-10-08

Partial Differential Equations (PDEs) are commonly used to model complex systems that arise, for example, in biology, engineering, chemistry, and elsewhere. The parameters (or coefficients) and the source of PDE models are often unknown and are estimated from available measurements. Despite its importance, solving the estimation problem is mathematically and numerically challenging, especially when the measurements are corrupted by noise, which is often the case. Various methods have been proposed to solve estimation problems in PDEs, which can be classified into optimization methods and recursive methods. The optimization methods are usually computationally heavy, especially when the number of unknowns is large. In addition, they are sensitive to the initial guess and stopping condition, and they lack robustness to noise. Recursive methods, such as observer-based approaches, are limited by their dependence on structural properties such as observability and identifiability, which might be lost when approximating the PDE numerically. Moreover, most of these methods provide asymptotic estimates, which might not be useful for control applications, for example. An alternative non-asymptotic approach with less computational burden has been proposed in engineering fields based on the so-called modulating functions. In this dissertation, we propose to mathematically and numerically analyze the modulating functions based approaches. We also propose to extend these approaches to different situations. The contributions of this thesis are as follows. (i) Provide a mathematical analysis of the modulating function-based method (MFBM), including its well-posedness, statistical properties, and estimation errors. (ii) Provide a numerical analysis of the MFBM through some estimation problems, and study the sensitivity of the method to the modulating functions' parameters. (iii) Propose an effective algorithm for selecting the method's design parameters.
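The core trick of modulating functions can be sketched on a scalar ODE rather than a PDE: integration by parts transfers the derivative from the noisy signal onto a smooth function that vanishes at the boundary, so the noisy data never need to be differentiated (a toy example, not the dissertation's setting):

```python
import numpy as np

# Estimate a in x'(t) = a*x(t) from noisy samples of x. Multiplying by
# a modulating function phi (vanishing at both endpoints) and
# integrating by parts gives
#   ∫ phi x' dt = -∫ phi' x dt  =>  a = -∫ phi' x dt / ∫ phi x dt,
# which involves only the measured signal, never its derivative.
T, n = 1.0, 1001
t = np.linspace(0.0, T, n)
a_true = -2.0
x = np.exp(a_true * t) + 1e-3 * np.random.default_rng(6).standard_normal(n)

phi = (t * (T - t)) ** 2                     # vanishes at t = 0 and t = T
dphi = 2.0 * t * (T - t) * (T - 2.0 * t)     # phi'

def integral(f):
    # trapezoid rule on the uniform grid t
    return float(np.sum(f[1:] + f[:-1]) * 0.5 * (t[1] - t[0]))

a_est = -integral(dphi * x) / integral(phi * x)
print(a_est)
```

The choice of phi (here a simple polynomial) is exactly the "design parameter" whose selection the dissertation studies.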

8. Determination of power system component parameters using nonlinear dead beat estimation method

Science.gov (United States)

Kolluru, Lakshmi

Power systems are considered the most complex man-made wonders in existence today. In order to effectively supply the ever-increasing demands of consumers, power systems are required to remain stable at all times. Stability and monitoring of these complex systems are achieved by strategically placed computerized control centers. State and parameter estimation is an integral part of these facilities, as it deals with identifying the unknown states and/or parameters of the systems. Advancements in measurement technologies and the introduction of phasor measurement units (PMU) provide detailed and dynamic information on all measurements. The availability of accurate dynamic measurements gives engineers the opportunity to expand and explore various possibilities in power system dynamic analysis and control. This thesis discusses the development of a parameter determination algorithm for nonlinear power systems, using dynamic data obtained from local measurements. The proposed algorithm was developed by observing the dead beat estimator used in state space estimation of linear systems. The dead beat estimator is considered very effective, as it is capable of obtaining the required results in a fixed number of steps, where the number of steps required is related to the order of the system and the number of parameters to be estimated. The proposed algorithm combines the idea of the dead beat estimator with nonlinear finite difference methods to produce a user-friendly algorithm that can determine the parameters accurately and effectively. It is based on a deterministic approach, which uses dynamic data and mathematical models of power system components to determine the unknown parameters. The effectiveness of the algorithm is tested by applying it to identify the unknown parameters of a synchronous machine. The MATLAB environment is used to create three test cases for dynamic analysis of the system with assumed known parameters. Faults are

9. Estimation of Spectral Exponent Parameter of 1/f Process in Additive White Background Noise

Directory of Open Access Journals (Sweden)

Semih Ergintav

2007-01-01

Full Text Available An extension to the wavelet-based method for the estimation of the spectral exponent, γ, in a 1/f^γ process in the presence of additive white noise is proposed. The approach is based on eliminating the effect of the white noise by a simple difference operation constructed on the wavelet spectrum. The γ parameter is then estimated as the slope of a linear function. It is shown by simulations that the proposed method gives reliable results. Global positioning system (GPS) time-series noise is analyzed, and the results provide experimental verification of the proposed method.
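The differencing idea can be illustrated with a Haar wavelet decomposition (the signal length, scales, and noise level below are illustrative choices, not the paper's settings). For a 1/f^γ process plus white noise, the wavelet variance at level j behaves like V_j ≈ A·2^(γj) + σ², so the white-noise floor σ² cancels in the first difference V_{j+1} − V_j, whose log2 is again linear in j with slope γ:

```python
import numpy as np

# Haar wavelet variances per level: white noise contributes a constant
# sigma**2 at every level of an orthonormal DWT, while a 1/f^gamma process
# contributes approximately A * 2**(gamma * j).
def haar_level_variances(x, levels):
    v, a = [], np.asarray(x, float)
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation, next level
        v.append(d.var())
    return np.array(v)

rng = np.random.default_rng(0)
n, gamma = 2**16, 1.0
f = np.fft.rfftfreq(n)
shape = np.zeros_like(f)
shape[1:] = f[1:] ** (-gamma / 2.0)              # spectral-synthesis filter
sig = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * shape, n)
x = sig / sig.std() + 0.5 * rng.standard_normal(n)   # 1/f signal + white noise

V = haar_level_variances(x, 8)
diffs = V[1:] - V[:-1]                           # the sigma**2 floor drops out
gamma_hat = np.polyfit(np.arange(1, 8), np.log2(diffs), 1)[0]
```

Without the difference step, the white-noise floor flattens the fine-scale end of the log2 wavelet spectrum and biases the slope low; the differenced spectrum restores the linear trend.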

10. Key Parameters Estimation and Adaptive Warning Strategy for Rear-End Collision of Vehicle

Directory of Open Access Journals (Sweden)

Xiang Song

2015-01-01

11. Health Parameter Estimation with Second-Order Sliding Mode Observer for a Turbofan Engine

Directory of Open Access Journals (Sweden)

Xiaodong Chang

2017-07-01

Full Text Available In this paper the problem of health parameter estimation in an aero-engine is investigated by using an unknown input observer-based methodology, implemented by a second-order sliding mode observer (SOSMO. Unlike the conventional state estimator-based schemes, such as Kalman filters (KF and sliding mode observers (SMO, the proposed scheme uses a “reconstruction signal” to estimate health parameters modeled as artificial inputs, and is not only applicable to long-time health degradation, but reacts much quicker in handling abrupt fault cases. In view of the inevitable uncertainties in engine dynamics and modeling, a weighting matrix is created to minimize such effect on estimation by using the linear matrix inequalities (LMI. A big step toward uncertainty modeling is taken compared with our previous SMO-based work, in that uncertainties are considered in a more practical form. Moreover, to avoid chattering in sliding modes, the super-twisting algorithm (STA is employed in observer design. Various simulations are carried out, based on the comparisons between the KF-based scheme, the SMO-based scheme in our earlier research, and the proposed method. The results consistently demonstrate the capabilities and advantages of the proposed approach in health parameter estimation.

12. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

Science.gov (United States)

Kieftenbeld, Vincent; Natesan, Prathiba

2012-01-01

Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

13. Uncertainty Analysis in Parameter Estimation of Coupled Bacteria-Sediment Fate and Transport in Streams

Science.gov (United States)

Massoudieh, A.; Le, T.; Pachepsky, Y. A.

2014-12-01

E. coli is widely used as a fecal indicator bacterium in streams. It has been shown that the interaction between sediments and bacteria is an important factor in determining their fate and transport in water bodies. In this presentation, parameter estimation and uncertainty analysis of a mechanistic model of bacteria-sediment interaction, using a hybrid genetic algorithm and a Markov chain Monte Carlo (MCMC) approach respectively, will be presented. The physically based model considers the advective-dispersive transport of sediments as well as both free-floating and sediment-associated bacteria in the water column, and also the fate and transport of bacteria in the bed sediments of a small stream. The bed sediments are treated as a distributed system, which allows modeling the evolution of the vertical distribution of bacteria as a result of sedimentation, resuspension, diffusion, and bioturbation in the sediments. The one-dimensional St. Venant equation is used to model flow in the stream. The model is applied to sediment and E. coli concentration data collected during a high-flow event in a small stream historically receiving agricultural runoff. Measured total suspended sediments and total E. coli concentrations in the water column at three sections of the stream are used for the parameter estimation. Data on the initial distribution of E. coli in the sediments were available and were used as the initial condition. The MCMC method is used to estimate the joint probability distribution of model parameters, including sediment deposition and erosion rates, critical shear stresses for deposition and erosion, attachment and detachment rate constants of E. coli to/from sediments, and the effective diffusion coefficients of E. coli in the bed sediments. The uncertainties associated with the estimated parameters are quantified via the MCMC approach, and the correlations between the posterior distributions of the parameters have been used to assess the model adequacy and

14. Optimal Input Design for Aircraft Parameter Estimation using Dynamic Programming Principles

Science.gov (United States)

1990-01-01

A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.

15. Estimation of uranium migration parameters in sandstone aquifers.

Science.gov (United States)

Malov, A I

2016-03-01

The chemical composition and isotopes of carbon and uranium were investigated in groundwater samples that were collected from 16 wells and 2 sources in the Northern Dvina Basin, Northwest Russia. Across the dataset, the temperatures in the groundwater ranged from 3.6 to 6.9 °C, the pH ranged from 7.6 to 9.0, the Eh ranged from -137 to +128 mV, the total dissolved solids (TDS) ranged from 209 to 22,000 mg L(-1), and the dissolved oxygen (DO) ranged from 0 to 9.9 ppm. The (14)C activity ranged from 0 to 69.96 ± 0.69 percent modern carbon (pmC). The uranium content in the groundwater ranged from 0.006 to 16 ppb, and the (234)U:(238)U activity ratio ranged from 1.35 ± 0.21 to 8.61 ± 1.35. The uranium concentration and (234)U:(238)U activity ratio increased from the recharge area to the redox barrier; behind the barrier, the uranium content is minimal. The results were systematized by creating a conceptual model of the Northern Dvina Basin's hydrogeological system. The use of uranium isotope dating in conjunction with radiocarbon dating allowed the determination of important water-rock interaction parameters, such as the dissolution rate:recoil loss factor ratio Rd:p (a(-1)) and the uranium retardation factor:recoil loss factor ratio R:p in the aquifer. The (14)C age of the water was estimated to be between modern and >35,000 years. The (234)U-(238)U age of the water was estimated to be between 260 and 582,000 years. The Rd:p ratio decreases with increasing groundwater residence time in the aquifer from n × 10(-5) to n × 10(-7) a(-1). This finding is observed because the TDS increases in that direction from 0.2 to 9 g L(-1), and accordingly, the mineral saturation indices increase. Relatively high values of R:p (200-1000) characterize aquifers in sandy-clayey sediments from the Late Pleistocene and the deepest parts of the Vendian strata. In samples from the sandstones of the upper part of the Vendian strata, the R:p value is ∼ 24, i.e., sorption processes are

16. A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events

Science.gov (United States)

Zorzetto, E.; Marani, M.

2017-12-01

The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from a sample of annual maxima (AM) or with a peaks-over-threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remotely sensed rainfall datasets. Here we use, and tailor to remotely sensed rainfall estimates, an alternative approach based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extreme values based on the probability distribution function (pdf) of all measured 'ordinary' rainfall events. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where the MEVD optimally exploits the relatively short records of satellite-sensed rainfall while taking full advantage of their high spatial resolution and quasi-global coverage. The accuracy of TRMM precipitation estimates and scale issues are investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauge networks.
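The MEVD construction can be sketched as follows. The Weibull fit below uses plain method of moments, and all numbers are illustrative stand-ins (the published framework typically fits the yearly Weibulls by probability-weighted moments). For year i with n_i ordinary events and fitted Weibull CDF F_i(x) = 1 − exp(−(x/C_i)^w_i), the MEV distribution of annual maxima over M years is F_MEV(x) = (1/M) Σ_i F_i(x)^n_i:

```python
import math
import numpy as np

def fit_weibull_moments(sample):
    # method-of-moments Weibull fit (illustrative; not the PWM fit of the paper)
    m, s = float(np.mean(sample)), float(np.std(sample))
    target = (s / m) ** 2                     # squared coefficient of variation
    lo, hi = 0.1, 20.0
    for _ in range(80):                       # bisection: CV^2 decreases in w
        w = 0.5 * (lo + hi)
        g1, g2 = math.gamma(1 + 1 / w), math.gamma(1 + 2 / w)
        lo, hi = (w, hi) if g2 / g1**2 - 1.0 > target else (lo, w)
    w = 0.5 * (lo + hi)
    return m / math.gamma(1 + 1 / w), w       # scale C, shape w

def mev_cdf(x, params, n_events):
    # F_MEV(x) = mean over years of F_i(x)**n_i
    terms = [(1.0 - math.exp(-((x / C) ** w))) ** n
             for (C, w), n in zip(params, n_events)]
    return sum(terms) / len(terms)

def mev_return_level(T_years, params, n_events):
    target = 1.0 - 1.0 / T_years              # non-exceedance probability
    lo, hi = 0.0, 1e4
    for _ in range(100):                      # bisection: F is monotone in x
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mev_cdf(mid, params, n_events) < target else (lo, mid)
    return 0.5 * (lo + hi)

# one synthetic "year" of 100 exponential events (w = 1) has the closed form
# F_MEV(x) = (1 - exp(-x))**100, which the code reproduces
params, n_events = [(1.0, 1.0)], [100]
p = mev_cdf(math.log(100.0), params, n_events)
rl100 = mev_return_level(100.0, params, n_events)
```

With many years of data, each year contributes its own (C_i, w_i, n_i), so interannual variability of the ordinary-event pdf enters the extreme-value estimate directly.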

17. Retinal blood vessel segmentation in high resolution fundus photographs using automated feature parameter estimation

Science.gov (United States)

Orlando, José Ignacio; Fracchia, Marcos; del Río, Valeria; del Fresno, Mariana

2017-11-01

Several ophthalmological and systemic diseases are manifested through pathological changes in the properties and the distribution of the retinal blood vessels. The characterization of such alterations requires segmentation of the vasculature, a tedious and time-consuming task that cannot feasibly be performed manually. Numerous automated methods have been proposed for segmenting the retinal vasculature from fundus photographs, although their application in real clinical scenarios is usually limited by their ability to deal with images taken at different resolutions. This is likely due to the large number of parameters that have to be properly calibrated according to each image scale. In this paper we propose to apply a novel strategy for automated feature parameter estimation, combined with a vessel segmentation method based on fully connected conditional random fields. The estimation model is learned by linear regression from structural properties of the images and known optimal configurations that were previously obtained for low-resolution data sets. Our experiments on high-resolution images show that this approach is able to estimate appropriate configurations that are suitable for performing the segmentation task without requiring parameters to be re-engineered. Furthermore, our combined approach reported state-of-the-art performance on the benchmark data set HRF, as measured in terms of the F1-score and the Matthews correlation coefficient.

18. Estimating the solute transport parameters of the spatial fractional advection-dispersion equation using Bees Algorithm.

Science.gov (United States)

2017-08-01

This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Both numerical and experimental studies were performed to verify the integrity of the Bees Algorithm; the experimental studies were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparison was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm estimated the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. The results obtained from the Bees Algorithm were also more reliable than those from the Genetic Algorithm. The Bees Algorithm showed similar performance across all cases, while the Genetic Algorithm and LSQNONLIN yielded different performances for the various cases. The performance of LSQNONLIN depends strongly on the initial guess values, so that, with suitable initial guesses, it can estimate the sFADE parameters more accurately than the Genetic Algorithm. To sum up, the Bees Algorithm was found to be a very simple, robust, and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation. Copyright © 2017 Elsevier B.V. All rights reserved.
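A generic Bees Algorithm sketch shows how such parameter estimation by data misfit works: scout bees sample the search space and the best sites recruit forager bees into shrinking neighbourhoods. All settings here (site and recruit counts, shrink rate) and the toy exponential-decay model are illustrative assumptions; the sFADE model itself is not implemented:

```python
import numpy as np

rng = np.random.default_rng(42)
t_obs = np.linspace(0.0, 4.0, 40)
y_obs = 2.0 * np.exp(-0.7 * t_obs)            # synthetic truth: a = 2.0, b = 0.7
lo, hi = np.array([0.0, 0.0]), np.array([5.0, 3.0])

def cost(p):
    # sum-of-squares misfit between model prediction and observations
    return float(np.sum((p[0] * np.exp(-p[1] * t_obs) - y_obs) ** 2))

def bees_algorithm(iters=150, n_scouts=20, n_best=5, n_recruits=10):
    sites = rng.uniform(lo, hi, size=(n_scouts, 2))
    patch = (hi - lo) / 4.0                   # neighbourhood half-width
    for _ in range(iters):
        sites = sites[np.argsort([cost(s) for s in sites])]
        elites = []
        for s in sites[:n_best]:              # recruited bees search each best site
            cands = s + rng.uniform(-patch, patch, size=(n_recruits, 2))
            cands = np.vstack([np.clip(cands, lo, hi), s])
            elites.append(min(cands, key=cost))   # keep the site's best bee
        scouts = rng.uniform(lo, hi, size=(n_scouts - n_best, 2))
        sites = np.vstack([np.array(elites), scouts])
        patch *= 0.97                         # shrink neighbourhoods over time
    return min(sites, key=cost)

p_hat = bees_algorithm()
```

Because each elite site always keeps its incumbent among the candidates, the best solution never worsens, while the fresh scouts retain global exploration, which is the balance the abstract credits for the algorithm's robustness to initial conditions.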

19. An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.

Directory of Open Access Journals (Sweden)

Afnizanfaizal Abdullah

Full Text Available The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.

20. Estimating water retention curves and strength properties of unsaturated sandy soils from basic soil gradation parameters

Science.gov (United States)

Wang, Ji-Peng; Hu, Nian; François, Bertrand; Lambert, Pierre

2017-07-01

This study proposes two pedotransfer functions (PTFs) for estimating sandy soil water retention curves, based on van Genuchten's water retention model and a semiphysical, semistatistical approach. The basic gradation parameters d60 (particle size at 60% passing) and the coefficient of uniformity Cu are employed in the PTFs, which satisfy two idealized limiting conditions: the monosized scenario and the extremely polydisperse condition. Water retention tests were carried out on eight granular materials with narrow particle size distributions as supplementary data to the UNSODA database. The air-entry value is expressed as inversely proportional to d60, and the parameter n, which is related to the slope of the water retention curve, is a function of Cu. Although they have fewer parameters, the proposed PTFs fit sandy soils better than previous PTFs. Furthermore, by incorporating the suction stress definition, the proposed pedotransfer functions are embedded in shear strength equations, which provide a way to estimate capillary-induced tensile strength or cohesion at a given suction or degree of saturation from basic soil gradation parameters. The estimates show quantitative agreement with experimental data in the literature and also explain why the capillary-induced cohesion is generally higher for materials with a finer mean particle size or higher polydispersity.
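Since the abstract gives only functional dependencies (air-entry value proportional to 1/d60, n a function of Cu), a hedged sketch of a gradation-driven van Genuchten curve might look as follows; the coefficients k1, k2, k3 and the form of n(Cu) are placeholders, NOT the regression coefficients fitted in the study:

```python
import numpy as np

def van_genuchten_from_gradation(d60_mm, Cu, k1=0.5, k2=1.2, k3=0.8):
    # alpha ~ d60 means air-entry suction h_ae ~ 1/alpha ~ 1/d60
    alpha = k1 * d60_mm              # [1/kPa], placeholder proportionality
    n = k2 + k3 / Cu                 # slope parameter from uniformity (assumed form)
    m = 1.0 - 1.0 / n                # standard van Genuchten constraint
    def effective_saturation(suction_kpa):
        # Se = [1 + (alpha * psi)^n]^(-m)
        return (1.0 + (alpha * suction_kpa) ** n) ** (-m)
    return alpha, n, effective_saturation

alpha, n, se = van_genuchten_from_gradation(d60_mm=0.5, Cu=2.0)
suction = np.linspace(0.0, 50.0, 200)
curve = se(suction)
```

Whatever the fitted coefficients, a valid retention curve must start fully saturated at zero suction and decrease monotonically, which is what the sketch reproduces.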

1. An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.

Science.gov (United States)

Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V

2013-01-01

The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.

2. Estimating the solute transport parameters of the spatial fractional advection-dispersion equation using Bees Algorithm

Science.gov (United States)

2017-08-01

This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Both numerical and experimental studies were performed to verify the integrity of the Bees Algorithm; the experimental studies were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparison was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm estimated the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. The results obtained from the Bees Algorithm were also more reliable than those from the Genetic Algorithm. The Bees Algorithm showed similar performance across all cases, while the Genetic Algorithm and LSQNONLIN yielded different performances for the various cases. The performance of LSQNONLIN depends strongly on the initial guess values, so that, with suitable initial guesses, it can estimate the sFADE parameters more accurately than the Genetic Algorithm. To sum up, the Bees Algorithm was found to be a very simple, robust, and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation.

3. Parameter estimation of breast tumour using dynamic neural network from thermal pattern

Directory of Open Access Journals (Sweden)

Elham Saniei

2016-11-01

Full Text Available This article presents a new approach for estimating the depth, size, and metabolic heat generation rate of a tumour. For this purpose, the surface temperature distribution of a breast thermal image and a dynamic neural network were used. The research consisted of two steps: forward and inverse. For the forward step, a finite element model was created, and the Pennes bio-heat equation was solved to find the surface and depth temperature distributions. Data from the analysis were then used to train the dynamic neural network (DNN) model. Results from the DNN training/testing confirmed those of the finite element model. For the inverse step, the trained neural network was applied to estimate the depth temperature distribution (tumour position) from the surface temperature profile extracted from the thermal image. Finally, the tumour parameters were obtained from the depth temperature distribution. Experimental findings (20 patients) were promising in terms of the model's potential for retrieving tumour parameters.

4. Parameters estimation online for Lorenz system by a novel quantum-behaved particle swarm optimization

International Nuclear Information System (INIS)

Gao Fei; Tong Hengqing; Li Zhuoqiu

2008-01-01

This paper proposes a novel quantum-behaved particle swarm optimization (NQPSO) for the estimation of unknown parameters of chaotic systems by transforming the task into the optimization of nonlinear functions. By means of three techniques, namely self-adaptive contraction of the search space, a boundary restriction strategy, and substitution of the particles' convex combination for their centre of mass, the paper achieves an effective search mechanism with a fine equilibrium between exploitation and exploration. Details of applying the proposed method and other methods to Lorenz systems are given, and experiments show that NQPSO has better adaptability, dependability, and robustness. It is a successful approach to online estimation of unknown parameters, especially in the presence of white noise

5. APPLYING TEACHING-LEARNING TO ARTIFICIAL BEE COLONY FOR PARAMETER OPTIMIZATION OF SOFTWARE EFFORT ESTIMATION MODEL

Directory of Open Access Journals (Sweden)

THANH TUNG KHUAT

2017-05-01

Full Text Available Artificial Bee Colony, inspired by the foraging behaviour of honey bees, is a novel meta-heuristic optimization algorithm in the community of swarm intelligence algorithms. Nevertheless, it is still insufficient in its speed of convergence and the quality of its solutions. This paper proposes an approach to tackle these downsides by combining the positive aspects of Teaching-Learning based optimization and Artificial Bee Colony. The performance of the proposed method is assessed on the software effort estimation problem, a complex and important issue in project management. Software developers often carry out effort estimation in the early stages of the software development life cycle to derive the required cost and schedule for a project. There are a large number of methods for effort estimation, among which COCOMO II is one of the most widely used models. However, this model has some restrictions because its parameters have not been optimized. In this work, therefore, we present an approach to overcome this limitation of the COCOMO II model. The experiments were conducted on the NASA software project dataset, and the obtained results indicated that the improved parameters provided better estimation capabilities compared to the original COCOMO II model.

6. Maximum likelihood estimation of biophysical parameters of synaptic receptors from macroscopic currents

Directory of Open Access Journals (Sweden)

Andrey Stepanyuk

2014-10-01

Full Text Available Dendritic integration and neuronal firing patterns strongly depend on the biophysical properties of synaptic ligand-gated channels. However, precise estimation of the biophysical parameters of these channels in their intrinsic environment is a complicated and still unresolved problem. Here we describe a novel method, based on a maximum likelihood approach, that allows estimation not only of the unitary current of synaptic receptor channels but also of their multiple conductance levels, kinetic constants, the number of receptors bound with a neurotransmitter, and the peak open probability from an experimentally feasible number of postsynaptic currents. The new method also improves the accuracy of unitary current estimation compared to peak-scaled non-stationary fluctuation analysis, making it possible to estimate this important parameter precisely from a few postsynaptic currents recorded in steady-state conditions. Estimation of the unitary current with this method is robust even if the postsynaptic currents are generated by receptors having different kinetic parameters, the case in which peak-scaled non-stationary fluctuation analysis is not applicable. Thus, with the new method, routinely recorded postsynaptic currents can be used to study the properties of synaptic receptors in their native biochemical environment.
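For context, the classical mean-variance (non-stationary fluctuation) analysis that the proposed likelihood method is compared against can be sketched on synthetic data; the channel count, unitary current, sweep count, and gating profile below are all made-up values. For N independent channels of unitary current i with open probability p(t), the ensemble statistics obey var(t) = i·I(t) − I(t)²/N with mean current I(t) = N·i·p(t), so i and N follow from a parabolic fit of variance against mean:

```python
import numpy as np

rng = np.random.default_rng(7)
n_ch, i_unit = 50, 2.0
t = np.linspace(0.0, 10.0, 200)
p_open = 0.8 * np.exp(-t / 3.0) * (1.0 - np.exp(-t))   # rise-and-decay gating

# 3000 sweeps of binomial open-channel counts scaled by the unitary current
sweeps = i_unit * rng.binomial(n_ch, p_open, size=(3000, t.size))
I_mean = sweeps.mean(axis=0)
I_var = sweeps.var(axis=0)

# least-squares fit of var = a*I + b*I**2  =>  i = a, N = -1/b
coef, *_ = np.linalg.lstsq(np.column_stack([I_mean, I_mean**2]),
                           I_var, rcond=None)
i_hat, n_hat = coef[0], -1.0 / coef[1]
```

This baseline needs many sweeps and identical channels to work well, which is exactly the limitation the likelihood-based approach in the abstract is designed to relax.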

7. Four-dimensional parameter estimation of plane waves using swarming intelligence

International Nuclear Information System (INIS)

2014-01-01

This paper proposes an efficient approach for four-dimensional (4D) parameter estimation of plane waves impinging on a 2-L shaped array. The 4D parameters include amplitude, frequency, and the two-dimensional (2D) direction of arrival, namely the azimuth and elevation angles. The proposed approach is based on memetic computation, in which a global optimizer, particle swarm optimization, is hybridized with a rapid local search technique, pattern search. For this purpose, a new multi-objective fitness function is used, combining the mean square error and the correlation between the normalized desired and estimated vectors. The proposed hybrid scheme is compared not only with the individual performances of particle swarm optimization and pattern search, but also with the performance of a hybrid genetic algorithm and the traditional approach. A large number of Monte Carlo simulations are carried out to validate the performance of the proposed scheme. It gives promising results in terms of estimation accuracy, convergence rate, proximity effect, and robustness against noise. (interdisciplinary physics and related areas of science and technology)

8. Estimation of Key Parameters of the Coupled Energy and Water Model by Assimilating Land Surface Data

Science.gov (United States)

2017-12-01

Accurate estimation of land surface heat and moisture fluxes, as well as root-zone soil moisture, is crucial in various hydrological, meteorological, and agricultural applications. Field measurements of these fluxes are costly and cannot be readily scaled to the large areas relevant to weather and climate studies. Therefore, there is a need for techniques that make quantitative estimates of heat and moisture fluxes using land surface state observations that are widely available from remote sensing across a range of scales. In this work, we apply the variational data assimilation (VDA) approach to estimate land surface fluxes and the soil moisture profile from the implicit information contained in Land Surface Temperature (LST) and Soil Moisture (SM) observations. The VDA model focuses on the estimation of three key parameters: (1) the neutral bulk heat transfer coefficient (CHN), (2) the evaporative fraction from soil and canopy (EF), and (3) the saturated hydraulic conductivity (Ksat). CHN and EF regulate the partitioning of available energy between sensible and latent heat fluxes. Ksat is one of the main parameters used in determining infiltration, runoff, and groundwater recharge, and in simulating hydrological processes. In this study, a system of coupled parsimonious energy and water models constrains the estimation of the three unknown parameters in the VDA model. The profiles of SM and LST at multiple depths are estimated using the moisture diffusion and heat diffusion equations, respectively. The uncertainties of the retrieved unknown parameters and fluxes are estimated from the inverse of the Hessian matrix of the cost function, which is computed using the Lagrangian methodology. The uncertainty analysis provides valuable information about the accuracy of the estimated parameters and their correlations and guides the formulation of a well-posed estimation problem. The results of the proposed algorithm are validated with a series of experiments using a synthetic data set generated by the simultaneous heat and

9. Parameter estimation of photovoltaic cells using an improved chaotic whale optimization algorithm

International Nuclear Information System (INIS)

Oliva, Diego; Abd El Aziz, Mohamed; Ella Hassanien, Aboul

2017-01-01

Highlights: •We modify the whale algorithm using chaotic maps. •We apply a chaotic algorithm to estimate parameters of photovoltaic cells. •We perform a study of chaos in the whale algorithm. •Several comparisons and metrics support the experimental results. •We test the method with data from real solar cells. -- Abstract: The use of solar energy has increased because it is a clean source of energy. Accordingly, the design of photovoltaic cells has attracted the attention of researchers around the world. There are two main problems in this field: obtaining a useful model to characterize the solar cells, and the absence of data about photovoltaic cells. This situation even affects the performance of photovoltaic modules (panels). The current-vs.-voltage characteristics are used to describe the behavior of solar cells, and considering such values, the design problem involves the solution of complex nonlinear and multimodal objective functions. Different algorithms have been proposed to identify the parameters of photovoltaic cells and panels, but most of them commonly fail to find the optimal solutions. This paper proposes the Chaotic Whale Optimization Algorithm (CWOA) for the parameter estimation of solar cells. The main advantage of the proposed approach is the use of chaotic maps to compute and automatically adapt the internal parameters of the optimization algorithm. This is beneficial in complex problems because, along the iterative process, the proposed algorithm improves its capability to search for the best solution, and the modified method is able to optimize complex and multimodal objective functions such as the one arising in the estimation of solar cell parameters. To illustrate the capabilities of the proposed algorithm in solar cell design, it is compared with other optimization methods over different datasets. Moreover, the experimental results support the improved performance of the proposed approach

10. Adaptive control based on an on-line parameter estimation of an upper limb exoskeleton.

Science.gov (United States)

2017-07-01

This paper presents an adaptive control strategy for an upper-limb exoskeleton based on an on-line dynamic parameter estimator. The objective is to improve the control performance of this system, which plays a critical role in assisting patients with shoulder, elbow and wrist joint movements. In general, the dynamic parameters of the human limb are unknown and differ from one person to another, which degrades the performance of the exoskeleton-human control system. For this reason, the proposed control scheme contains a supplementary loop based on a new, efficient on-line estimator of the dynamic parameters. The estimator acts upon the parameter adaptation of the controller to ensure the performance of the system in the presence of parameter uncertainties and perturbations. The exoskeleton used in this work is presented, and a physical model of the exoskeleton interacting with a 7-Degree-of-Freedom (DoF) upper-limb model is generated using the SimMechanics library of MatLab/Simulink. To illustrate the effectiveness of the proposed approach, an example of passive rehabilitation movements is performed using multi-body dynamic simulation. The aim is to maneuver the exoskeleton so that it drives the upper limb to track desired trajectories in the case of passive arm movements.
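
The supplementary estimation loop described above is an on-line, sample-by-sample estimator. As a hedged stand-in for the paper's estimator, a scalar recursive-least-squares update shows the general mechanism for a model that is linear in the unknown parameter; all numbers are illustrative:

```python
def rls_scalar(data, lam=0.99, theta0=0.0, p0=1000.0):
    # Scalar recursive least squares for y = theta * phi:
    # each (phi, y) sample refines the estimate without re-solving the batch.
    theta, P = theta0, p0
    for phi, y in data:
        K = P * phi / (lam + phi * P * phi)    # gain
        theta = theta + K * (y - phi * theta)  # correct with the innovation
        P = (P - K * phi * P) / lam            # update covariance
    return theta

# Noise-free samples generated with theta = 2.5 (illustrative values).
samples = [(u, 2.5 * u) for u in [1.0, 0.5, 2.0, 1.5, 3.0]]
theta_hat = rls_scalar(samples)
```

The forgetting factor lam < 1 lets the estimator track slowly varying parameters, which is the point of running it inside an adaptive control loop.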

11. Multiple-hit parameter estimation in monolithic detectors.

Science.gov (United States)

Hunter, William C J; Barrett, Harrison H; Lewellen, Tom K; Miyaoka, Robert S

2013-02-01

We examine a maximum-a-posteriori method for estimating the primary interaction position of gamma rays with multiple interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square cerium-doped lutetium oxyorthosilicate block. For this study, we use simulated data from SCOUT, a Monte-Carlo tool for photon tracking and modeling scintillation-camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a one-hit maximum-likelihood estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1%-12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator as compared with the 1-hit estimator; more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator; for PET, this equates to at least a 12% improvement in coincidence-detection efficiency with likelihood filtering applied.
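
As a generic illustration of maximum-a-posteriori estimation (not the detector model used in this study), the one-dimensional Gaussian-prior/Gaussian-likelihood case has a closed-form MAP estimate; the variances below are hypothetical:

```python
def map_estimate(z, prior_mean=0.0, prior_var=4.0, noise_var=1.0):
    # For a Gaussian prior N(prior_mean, prior_var) and Gaussian likelihood
    # N(z | x, noise_var), the posterior is Gaussian and its mode (the MAP
    # estimate) is a precision-weighted average of prior mean and measurement.
    w = prior_var / (prior_var + noise_var)
    return w * z + (1.0 - w) * prior_mean
```

With a diffuse prior (large prior_var) the MAP estimate approaches the raw measurement; with a tight prior it is pulled toward the prior mean.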

12. Single-Channel Blind Estimation of Reverberation Parameters

DEFF Research Database (Denmark)

Doire, C.S.J.; Brookes, M. D.; Naylor, P. A.

2015-01-01

The reverberation of an acoustic channel can be characterised by two frequency-dependent parameters: the reverberation time and the direct-to-reverberant energy ratio. This paper presents an algorithm for blindly determining these parameters from a single-channel speech signal. The algorithm uses...

13. Uncertainty of Modal Parameters Estimated by ARMA Models

DEFF Research Database (Denmark)

Jensen, Jakob Laigaard; Brincker, Rune; Rytter, Anders

In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the param...

14. Estimation of source parameters of Chamoli Earthquake, India

R. Narasimhan, Krishtel eMaging Solutions

meter studies, in different parts of the world. Singh et al (1979) and Sharma and Wason (1994, 1995) have calculated source parameters for Himalayan and nearby regions. To the best of the authors' knowledge, source parameter studies using strong motion data have not been carried out in India so far, though similar ...

15. Estimation of kinetic and thermodynamic ligand-binding parameters using computational strategies.

Science.gov (United States)

Deganutti, Giuseppe; Moro, Stefano

2017-04-01

Kinetic and thermodynamic ligand-protein binding parameters are gaining growing importance as key information to consider in drug discovery. The determination of molecular structures, particularly using x-ray and NMR techniques, is crucial for understanding how a ligand recognizes its target in the final binding complex. However, for a better understanding of the recognition processes, experimental studies of ligand-protein interactions are needed. Even though several techniques can be used to investigate both the thermodynamic and kinetic profiles of a ligand-protein complex, these procedures are very often laborious, time consuming and expensive. In the last 10 years, computational approaches have shown enormous potential in providing insights into each of the above effects and in parsing their contributions to the changes in both kinetic and thermodynamic binding parameters. The main purpose of this review is to summarize the state of the art of computational strategies for estimating the kinetic and thermodynamic parameters of ligand-protein binding.

16. Inverse estimation of source parameters of oceanic radioactivity dispersion models associated with the Fukushima accident

Directory of Open Access Journals (Sweden)

Y. Miyazawa

2013-04-01

With combined use of ocean–atmosphere simulation models and field observation data, we evaluate the parameters associated with the total caesium-137 amounts of the direct release into the ocean and the atmospheric deposition over the western North Pacific caused by the accident at the Fukushima Daiichi nuclear power plant (FNPP) that occurred in March 2011. The Green's function approach is adopted for the estimation of two parameters determining the total emission amounts for the period from 12 March to 6 May 2011. It is confirmed that the validity of the estimation depends on the simulation skill near FNPP. The total amount of the direct release is estimated as 5.5–5.9 × 10^15 Bq, while that of the atmospheric deposition is estimated as 5.5–9.7 × 10^15 Bq; the broader range of the latter estimate, compared with that of the direct release, is owing to uncertainty in the dispersion widely spread over the western North Pacific.
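
The Green's function approach here amounts to a small linear inversion: simulated unit-source responses are scaled so that their weighted sum best fits the observations, and the weights are the total release amounts. A two-parameter sketch with hypothetical response vectors:

```python
def greens_function_fit(g1, g2, obs):
    # Least-squares weights (a1, a2) so that obs ~ a1*g1 + a2*g2,
    # solved via the 2x2 normal equations.
    s11 = sum(x * x for x in g1)
    s12 = sum(x * y for x, y in zip(g1, g2))
    s22 = sum(y * y for y in g2)
    b1 = sum(x * o for x, o in zip(g1, obs))
    b2 = sum(y * o for y, o in zip(g2, obs))
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (b2 * s11 - b1 * s12) / det
    return a1, a2

# Hypothetical unit-source responses and observations built from them.
g1, g2 = [1.0, 0.0, 2.0, 1.0], [0.0, 1.0, 1.0, 2.0]
obs = [3.0 * a + 5.0 * b for a, b in zip(g1, g2)]
a1, a2 = greens_function_fit(g1, g2, obs)
```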

17. A theoretical signal processing framework for linear diffusion MRI: Implications for parameter estimation and experiment design.

Science.gov (United States)

2017-11-01

The data measured in diffusion MRI can be modeled as the Fourier transform of the Ensemble Average Propagator (EAP), a probability distribution that summarizes the molecular diffusion behavior of the spins within each voxel. This Fourier relationship is potentially advantageous because of the extensive theory that has been developed to characterize the sampling requirements, accuracy, and stability of linear Fourier reconstruction methods. However, existing diffusion MRI data sampling and signal estimation methods have largely been developed and tuned without the benefit of such theory, instead relying on approximations, intuition, and extensive empirical evaluation. This paper aims to address this discrepancy by introducing a novel theoretical signal processing framework for diffusion MRI. The new framework can be used to characterize arbitrary linear diffusion estimation methods with arbitrary q-space sampling, and can be used to theoretically evaluate and compare the accuracy, resolution, and noise-resilience of different data acquisition and parameter estimation techniques. The framework is based on the EAP, and makes very limited modeling assumptions. As a result, the approach can even provide new insight into the behavior of model-based linear diffusion estimation methods in contexts where the modeling assumptions are inaccurate. The practical usefulness of the proposed framework is illustrated using both simulated and real diffusion MRI data in applications such as choosing between different parameter estimation methods and choosing between different q-space sampling schemes. Copyright © 2017 Elsevier Inc. All rights reserved.

18. Forest parameter estimation using polarimetric SAR interferometry techniques at low frequencies

International Nuclear Information System (INIS)

Lee, Seung-Kuk

2013-01-01

Polarimetric Synthetic Aperture Radar Interferometry (Pol-InSAR) is an active radar remote sensing technique based on the coherent combination of both polarimetric and interferometric observables. The Pol-InSAR technique provided a step forward in quantitative forest parameter estimation. In the last decade, airborne SAR experiments evaluated the potential of Pol-InSAR techniques to estimate forest parameters (e.g., the forest height and biomass) with high accuracy over various local forest test sites. This dissertation addresses the actual status, potentials and limitations of Pol-InSAR inversion techniques for 3-D forest parameter estimations on a global scale using lower frequencies such as L- and P-band. The multi-baseline Pol-InSAR inversion technique is applied to optimize the performance with respect to the actual level of the vertical wave number and to mitigate the impact of temporal decorrelation on the Pol-InSAR forest parameter inversion. Temporal decorrelation is a critical issue for successful Pol-InSAR inversion in the case of repeat-pass Pol-InSAR data, as provided by conventional satellites or airborne SAR systems. Despite the limiting impact of temporal decorrelation in Pol-InSAR inversion, it remains a poorly understood factor in forest height inversion. Therefore, the main goal of this dissertation is to provide a quantitative estimation of the temporal decorrelation effects by using multi-baseline Pol-InSAR data. A new approach to quantify the different temporal decorrelation components is proposed and discussed. Temporal decorrelation coefficients are estimated for temporal baselines ranging from 10 minutes to 54 days and are converted to height inversion errors. In addition, the potential of Pol-InSAR forest parameter estimation techniques is addressed and projected onto future spaceborne system configurations and mission scenarios (Tandem-L and BIOMASS satellite missions at L- and P-band). The impact of the system parameters (e.g., bandwidth

19. Leading-Edge Flow Sensing for Aerodynamic Parameter Estimation

Science.gov (United States)

The identification of inflow air data quantities such as airspeed, angle of attack, and local lift coefficient on various sections of a wing or rotor blade provides the capability for load monitoring, aerodynamic diagnostics, and control on devices ranging from air vehicles to wind turbines. Real-time measurement of aerodynamic parameters during flight provides the ability to enhance aircraft operating capabilities while preventing dangerous stall situations. This thesis presents a novel Leading-Edge Flow Sensing (LEFS) algorithm for the determination of the air-data parameters using discrete surface pressures measured at a few ports in the vicinity of the leading edge of a wing or blade section. The approach approximates the leading-edge region of the airfoil as a parabola and uses the pressure distribution from the exact potential-flow solution for the parabola to fit the pressures measured from the ports. Pressures sensed at five discrete locations near the leading edge of an airfoil are given as input to the algorithm, which solves the model using a simple nonlinear regression. The algorithm directly computes the inflow velocity, the stagnation-point location, the section angle of attack and the lift coefficient. The performance of the algorithm is assessed using computational and experimental data in the literature for airfoils under different flow conditions. The results show good correlation between the actual and predicted aerodynamic quantities within the pre-stall regime, even for a rotating blade section. Sensing the deviation of the aerodynamic behavior from the linear regime requires additional information on the location of flow separation on the airfoil surface. Bio-inspired artificial hair sensors were explored as a part of the current research for stall detection. The response of such artificial micro-structures can identify critical flow characteristics, which relate directly to the stall behavior. The response of the microfences was recorded via an optical microscope for

20. Estimation of the petrophysical parameters of sediments from Chad ...

African Journals Online (AJOL)

Porosity was estimated from three methods, and polynomial trends having fits ranging between 0.0604 and 0.478 describe depth-porosity variations. Interpretation of the trends revealed lithology trends that agree with the trends of shaliness. Estimates of average effective porosities of formations compared favorably with ...

1. ASTROPHYSICAL PRIOR INFORMATION AND GRAVITATIONAL-WAVE PARAMETER ESTIMATION

International Nuclear Information System (INIS)

Pankow, Chris; Sampson, Laura; Perri, Leah; Chase, Eve; Coughlin, Scott; Zevin, Michael; Kalogera, Vassiliki

2017-01-01

The detection of electromagnetic counterparts to gravitational waves (GWs) has great promise for the investigation of many scientific questions. While it is well known that certain orientation parameters can reduce uncertainty in other related parameters, it was also hoped that the detection of an electromagnetic signal in conjunction with a GW could augment the measurement precision of the mass and spin from the gravitational signal itself. That is, knowledge of the sky location, inclination, and redshift of a binary could break degeneracies between these extrinsic, coordinate-dependent parameters and the physical parameters that are intrinsic to the binary. In this paper, we investigate this issue by assuming perfect knowledge of extrinsic parameters, and assessing the maximal impact of this knowledge on our ability to extract intrinsic parameters. We recover similar gains in extrinsic recovery to earlier work; however, we find only modest improvements in a few intrinsic parameters—namely the primary component’s spin. We thus conclude that, even in the best case, the use of additional information from electromagnetic observations does not improve the measurement of the intrinsic parameters significantly.

2. ASTROPHYSICAL PRIOR INFORMATION AND GRAVITATIONAL-WAVE PARAMETER ESTIMATION

Energy Technology Data Exchange (ETDEWEB)

Pankow, Chris; Sampson, Laura; Perri, Leah; Chase, Eve; Coughlin, Scott; Zevin, Michael; Kalogera, Vassiliki [Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) and Department of Physics and Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208 (United States)

2017-01-10

The detection of electromagnetic counterparts to gravitational waves (GWs) has great promise for the investigation of many scientific questions. While it is well known that certain orientation parameters can reduce uncertainty in other related parameters, it was also hoped that the detection of an electromagnetic signal in conjunction with a GW could augment the measurement precision of the mass and spin from the gravitational signal itself. That is, knowledge of the sky location, inclination, and redshift of a binary could break degeneracies between these extrinsic, coordinate-dependent parameters and the physical parameters that are intrinsic to the binary. In this paper, we investigate this issue by assuming perfect knowledge of extrinsic parameters, and assessing the maximal impact of this knowledge on our ability to extract intrinsic parameters. We recover similar gains in extrinsic recovery to earlier work; however, we find only modest improvements in a few intrinsic parameters—namely the primary component’s spin. We thus conclude that, even in the best case, the use of additional information from electromagnetic observations does not improve the measurement of the intrinsic parameters significantly.

3. Physics of ultrasonic wave propagation in bone and heart characterized using Bayesian parameter estimation

Science.gov (United States)

Anderson, Christian Carl

This Dissertation explores the physics underlying the propagation of ultrasonic waves in bone and in heart tissue through the use of Bayesian probability theory. Quantitative ultrasound is a noninvasive modality used for clinical detection, characterization, and evaluation of bone quality and cardiovascular disease. Approaches that extend the state of knowledge of the physics underpinning the interaction of ultrasound with inherently inhomogeneous and anisotropic tissue have the potential to enhance its clinical utility. Simulations of fast and slow compressional wave propagation in cancellous bone were carried out to demonstrate the plausibility of a proposed explanation for the widely reported anomalous negative dispersion in cancellous bone. The results showed that negative dispersion could arise from analysis that proceeded under the assumption that the data consist of only a single ultrasonic wave, when in fact two overlapping and interfering waves are present. The confounding effect of overlapping fast and slow waves was addressed by applying Bayesian parameter estimation to simulated data, to experimental data acquired on bone-mimicking phantoms, and to data acquired in vitro on cancellous bone. The Bayesian approach successfully estimated the properties of the individual fast and slow waves even when they strongly overlapped in the acquired data. The Bayesian parameter estimation technique was further applied to an investigation of the anisotropy of ultrasonic properties in cancellous bone. The degree to which fast and slow waves overlap is partially determined by the angle of insonation of ultrasound relative to the predominant direction of trabecular orientation. In the past, studies of anisotropy have been limited by interference between fast and slow waves over a portion of the range of insonation angles. Bayesian analysis estimated attenuation, velocity, and amplitude parameters over the entire range of insonation angles, allowing a more complete

4. Hierarchical parameter estimation of DFIG and drive train system in a wind turbine generator

Institute of Scientific and Technical Information of China (English)

Xueping PAN; Ping JU; Feng WU; Yuqing JIN

2017-01-01

A new hierarchical parameter estimation method for doubly fed induction generator (DFIG) and drive train system in a wind turbine generator (WTG) is proposed in this paper. Firstly, the parameters of the DFIG and the drive train are estimated locally under different types of disturbances. Secondly, a coordination estimation method is further applied to identify the parameters of the DFIG and the drive train simultaneously, with the purpose of attaining the globally optimal estimation results. The main benefit of the proposed scheme is the improved estimation accuracy. Estimation results confirm the applicability of the proposed estimation technique.

5. Uncertainty of Modal Parameters Estimated by ARMA Models

DEFF Research Database (Denmark)

Jensen, Jacob Laigaard; Brincker, Rune; Rytter, Anders

1990-01-01

In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the parameters...... by simulation study of a lightly damped single degree of freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and the number of sampled points may play a significant role with respect to the statistical errors. Furthermore......, it is shown that the model errors may also contribute significantly to the uncertainty....

6. Estimating Soil and Root Parameters of Biofuel Crops using a Hydrogeophysical Inversion

Science.gov (United States)

Kuhl, A.; Kendall, A. D.; Van Dam, R. L.; Hyndman, D. W.

2017-12-01

Transpiration is the dominant pathway for continental water exchange to the atmosphere, and therefore a crucial aspect of modeling water balances at many scales. The root water uptake dynamics that control transpiration are dependent on soil water availability, as well as the root distribution. However, the root distribution is determined by many factors beyond the plant species alone, including climate conditions and soil texture. Despite the significant contribution of transpiration to global water fluxes, modelling the complex critical zone processes that drive root water uptake remains a challenge. Geophysical tools such as electrical resistivity (ER), have been shown to be highly sensitive to water dynamics in the unsaturated zone. ER data can be temporally and spatially robust, covering large areas or long time periods non-invasively, which is an advantage over in-situ methods. Previous studies have shown the value of using hydrogeophysical inversions to estimate soil properties. Others have used hydrological inversions to estimate both soil properties and root distribution parameters. In this study, we combine these two approaches to create a coupled hydrogeophysical inversion that estimates root and retention curve parameters for a HYDRUS model. To test the feasibility of this new approach, we estimated daily water fluxes and root growth for several biofuel crops at a long-term ecological research site in Southwest Michigan, using monthly ER data from 2009 through 2011. Time domain reflectometry data at seven depths was used to validate modeled soil moisture estimates throughout the model period. This hydrogeophysical inversion method shows promise for improving root distribution and transpiration estimates across a wide variety of settings.

7. Identifiability measures to select the parameters to be estimated in a solid-state fermentation distributed parameter model.

Science.gov (United States)

da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G

2016-07-08

Process modeling can lead to advantages such as helping in process control, reducing process costs and improving product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. In addition, parameter estimation with a parameter identifiability analysis (PIA) is performed to build an accurate model with optimal parameters. Statistical tests were made to verify the model accuracy with the estimated parameters under different assumptions. The results have shown that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were non-identifiable, and better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful to the estimation procedure, since it may reduce the number of parameters to be evaluated. Further, PIA improved the model results, showing it to be an important procedure. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.
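
A simple identifiability screen, in the spirit of (but not identical to) the PIA used here, checks the sensitivity of the model output to each parameter: a parameter to which the output is insensitive cannot be estimated from the data. A finite-difference sketch with a toy model:

```python
def sensitivity_ranking(model, params, eps=1e-6):
    # Finite-difference sensitivity of a scalar model output to each
    # parameter; near-zero sensitivity flags a practically
    # non-identifiable parameter.
    base = model(params)
    sens = []
    for i, p in enumerate(params):
        step = eps * max(abs(p), 1.0)
        perturbed = list(params)
        perturbed[i] = p + step
        sens.append(abs(model(perturbed) - base) / step)
    return sens

# Toy model that ignores its third parameter entirely.
toy = lambda p: 2.0 * p[0] + 0.5 * p[1] ** 2
sens = sensitivity_ranking(toy, [1.0, 2.0, 3.0])
```

In practice identifiability also depends on correlations between parameters, which is why full analyses examine the whole sensitivity matrix rather than one parameter at a time.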

8. Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea

KAUST Repository

Sawlan, Zaid A

2012-01-01

parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction while

9. Estimation of parameter sensitivities for stochastic reaction networks

KAUST Repository

Gupta, Ankit

2016-01-01

Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a

10. A novel parameter estimation method for metal oxide surge arrester ...

the program, which is based on the MAPSO algorithm and can determine the fitness and parameters .... to solve many optimization problems (Kennedy & Eberhart 1995; Eberhart & Shi 2001; Gaing 2003 ... describe the content of this concept.

11. BIASED BEARINGS-ONLY PARAMETER ESTIMATION FOR BISTATIC SYSTEM

Institute of Scientific and Technical Information of China (English)

Xu Benlian; Wang Zhiquan

2007-01-01

According to the biased angles provided by the bistatic sensors, the necessary condition of observability and the Cramer-Rao lower bounds for the bistatic system are derived and analyzed, respectively. Additionally, a dual Kalman filter method is presented with the purpose of eliminating the effect of biased angles on the state variable estimation. Finally, Monte-Carlo simulations are conducted in the observable scenario. Simulation results show that the proposed theory holds true, and that the dual Kalman filter method can estimate the state variable and the biased angles simultaneously. Furthermore, the estimated results can achieve their Cramer-Rao lower bounds.
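
The bias-estimation idea behind a dual Kalman filter can be shown in scalar form. This toy sketch assumes a known reference trajectory and a hypothetical measurement-noise variance r, far simpler than the paper's bistatic setup, but it shows how the bias estimate and its covariance are updated:

```python
def estimate_bias(measurements, reference, r=0.04, p0=1.0):
    # Scalar Kalman filter for a constant bias b, with z_k = x_k + b + noise;
    # the innovation (z_k - x_k) is a noisy observation of b.
    b, P = 0.0, p0
    for z, x in zip(measurements, reference):
        K = P / (P + r)            # Kalman gain
        b = b + K * ((z - x) - b)  # correct the bias estimate
        P = (1.0 - K) * P          # shrink the error covariance
    return b

# Noise-free example with a true bias of 0.5 (illustrative values).
ref = [1.0, 2.0, 3.0, 4.0]
meas = [x + 0.5 for x in ref]
b_hat = estimate_bias(meas, ref)
```

The dual-filter scheme in the paper runs a second filter for the state itself, with each filter treating the other's current estimate as known.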

12. Weibull Parameters Estimation Based on Physics of Failure Model

DEFF Research Database (Denmark)

Kostandyan, Erik; Sørensen, John Dalsgaard

2012-01-01

Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull...
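
Miner's rule, one ingredient of the damage model, is straightforward to state in code; the cycle counts and S-N lives below are made-up illustrative values:

```python
def blocks_to_failure(cycle_counts, cycles_to_failure):
    # Miner's rule: damage per load block D = sum(n_i / N_i);
    # failure is assumed at D >= 1, so repetitions to failure = 1 / D.
    damage = sum(n / N for n, N in zip(cycle_counts, cycles_to_failure))
    return 1.0 / damage

# One block = 10 cycles at a level with N = 1000, plus 5 cycles at N = 200.
blocks = blocks_to_failure([10.0, 5.0], [1000.0, 200.0])
```

In the paper's setting the n_i per block come from Rainflow counting of the stress history rather than being given directly.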

13. Estimating kinetic mechanisms with prior knowledge I: Linear parameter constraints.

Science.gov (United States)

Salari, Autoosa; Navarro, Marco A; Milescu, Mirela; Milescu, Lorin S

2018-02-05

To understand how ion channels and other proteins function at the molecular and cellular levels, one must decrypt their kinetic mechanisms. Sophisticated algorithms have been developed that can be used to extract kinetic parameters from a variety of experimental data types. However, formulating models that not only explain new data, but are also consistent with existing knowledge, remains a challenge. Here, we present a two-part study describing a mathematical and computational formalism that can be used to enforce prior knowledge into the model using constraints. In this first part, we focus on constraints that enforce explicit linear relationships involving rate constants or other model parameters. We develop a simple, linear algebra-based transformation that can be applied to enforce many types of model properties and assumptions, such as microscopic reversibility, allosteric gating, and equality and inequality parameter relationships. This transformation converts the set of linearly interdependent model parameters into a reduced set of independent parameters, which can be passed to an automated search engine for model optimization. In the companion article, we introduce a complementary method that can be used to enforce arbitrary parameter relationships and any constraints that quantify the behavior of the model under certain conditions. The procedures described in this study can, in principle, be coupled to any of the existing methods for solving molecular kinetics for ion channels or other proteins. These concepts can be used not only to enforce existing knowledge but also to formulate and test new hypotheses. © 2018 Salari et al.
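
The flavor of such a linear-constraint transformation can be sketched with a microscopic-reversibility constraint on a four-rate loop: in log space the constraint k1*k2 = k3*k4 is linear, so one log-rate is determined by the other three and the optimizer sees only three free parameters. The specific loop and rate values are illustrative assumptions, not the paper's models:

```python
import math

def expand_params(free_logs):
    # Three free log-rates; the fourth is fixed by microscopic reversibility:
    # k1 * k2 = k3 * k4  =>  log k4 = log k1 + log k2 - log k3.
    l1, l2, l3 = free_logs
    l4 = l1 + l2 - l3
    return [math.exp(l) for l in (l1, l2, l3, l4)]

# An automated search engine would optimize only the three free values;
# the constraint then holds for every candidate it tries.
k = expand_params([math.log(2.0), math.log(3.0), math.log(1.5)])
```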

14. Stellar atmospheric parameter estimation using Gaussian process regression

Science.gov (United States)

Bu, Yude; Pan, Jingchang

2015-02-01

As is well known, it is necessary to derive stellar parameters from massive amounts of spectral data automatically and efficiently. However, in traditional automatic methods such as artificial neural networks (ANNs) and kernel regression (KR), it is often difficult to optimize the algorithm structure and determine the optimal algorithm parameters. Gaussian process regression (GPR) is a recently developed method that has been proven to be capable of overcoming these difficulties. Here we apply GPR to derive stellar atmospheric parameters from spectra. Through evaluating the performance of GPR on Sloan Digital Sky Survey (SDSS) spectra, Medium resolution Isaac Newton Telescope Library of Empirical Spectra (MILES) spectra, ELODIE spectra and the spectra of member stars of galactic globular clusters, we conclude that GPR can derive stellar parameters accurately and precisely, especially when we use data preprocessed with principal component analysis (PCA). We then compare the performance of GPR with that of several widely used regression methods (ANNs, support-vector regression and KR) and find that with GPR it is easier to optimize structures and parameters and more efficient and accurate to extract atmospheric parameters.
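
A minimal sketch of the GPR posterior mean, using an RBF kernel and just two training points so that the 2x2 linear solve can be written out by hand (hyperparameters and data are illustrative; a real pipeline would use a library and fit the hyperparameters):

```python
import math

def rbf(a, b, length=1.0):
    # Squared-exponential (RBF) kernel.
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def gpr_mean(x_train, y_train, x_star, noise=1e-6):
    # GP posterior mean m(x*) = k(x*)^T (K + noise*I)^(-1) y,
    # with the 2x2 system solved explicitly.
    k11 = rbf(x_train[0], x_train[0]) + noise
    k12 = rbf(x_train[0], x_train[1])
    k22 = rbf(x_train[1], x_train[1]) + noise
    det = k11 * k22 - k12 * k12
    a1 = (k22 * y_train[0] - k12 * y_train[1]) / det
    a2 = (k11 * y_train[1] - k12 * y_train[0]) / det
    return rbf(x_star, x_train[0]) * a1 + rbf(x_star, x_train[1]) * a2

# With negligible noise the posterior mean interpolates the training data.
m = gpr_mean([0.0, 1.0], [1.0, 2.0], 0.0)
```

Unlike ANNs or kernel regression, the kernel hyperparameters here can be tuned by maximizing the marginal likelihood, which is the ease-of-optimization advantage the abstract refers to.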

15. Retrospective forecast of ETAS model with daily parameters estimate

Science.gov (United States)

Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

2016-04-01

We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on daily updating of the free parameters during the background, learning and test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecasted events, due to the model parameters being kept fixed during the test. Moreover, the absence in the learning catalog of an event comparable in magnitude to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model where the parameters remain fixed during the test time.

16. Low Complexity Parameter Estimation For Off-the-Grid Targets

KAUST Repository

Jardak, Seifallah; Ahmed, Sajid; Alouini, Mohamed-Slim

2015-01-01

In multiple-input multiple-output radar, to estimate the reflection coefficient, spatial location, and Doppler shift of a target, a derived cost function is usually evaluated and optimized over a grid of points. The performance of such algorithms

17. Development of simple kinetic models and parameter estimation for ...

African Journals Online (AJOL)

PANCHIGA

2016-09-28

Sep 28, 2016 ... estimation for simulation of recombinant human serum albumin ... and recombinant protein production by P. pastoris without requiring complex models. Key words: ..... SDS-PAGE and showed the same molecular size as.

18. The effect of selection on genetic parameter estimates

African Journals Online (AJOL)

Unknown

The South African Journal of Animal Science is available online at ... A simulation study was carried out to investigate the effect of selection on the estimation of genetic ... The model contained a fixed effect, random genetic and random.

19. (Co) variance Components and Genetic Parameter Estimates for Re

African Journals Online (AJOL)

Mapula

The magnitude of heritability estimates obtained in the current study ... traits were recently introduced to supplement progeny testing programmes or for usage as sole source of ..... VCE-5 User's Guide and Reference Manual Version 5.1.

20. Repetitive Identification of Structural Systems Using a Nonlinear Model Parameter Refinement Approach

Directory of Open Access Journals (Sweden)

Jeng-Wen Lin

2009-01-01

Full Text Available This paper proposes a statistical confidence interval based nonlinear model parameter refinement approach for the health monitoring of structural systems subjected to seismic excitations. The developed model refinement approach uses the 95% confidence interval of the estimated structural parameters to determine their statistical significance in a least-squares regression setting. When a parameter's confidence interval covers the zero value, it is statistically justifiable to truncate that parameter. The remaining parameters repetitively undergo this parameter sifting process for model refinement until the statistical significance of all parameters cannot be further improved. This newly developed model refinement approach is implemented for the series models of multivariable polynomial expansions: the linear, the Taylor series, and the power series model, leading to a more accurate identification as well as a more controllable design for system vibration control. Because the statistical regression based model refinement approach is intrinsically used to process a “batch” of data and obtain an ensemble average estimation such as the structural stiffness, the Kalman filter and one of its extended versions are introduced into the refined power series model for structural health monitoring.
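The sifting procedure described above can be sketched for an ordinary least-squares setting (synthetic data; the structural series models and Kalman filtering are beyond this illustration):

```python
import numpy as np

def refine_model(X, y, names, z=1.96):
    """Drop regressors whose 95% confidence interval covers zero, then refit.

    X: (n, p) regressor matrix, y: (n,) response, names: p column labels.
    Returns surviving labels and their least-squares estimates.
    """
    keep = list(range(X.shape[1]))
    while True:
        Xk = X[:, keep]
        theta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        dof = len(y) - len(keep)
        s2 = np.sum((y - Xk @ theta) ** 2) / dof          # residual variance
        half = z * np.sqrt(np.diag(s2 * np.linalg.inv(Xk.T @ Xk)))
        insignificant = [i for i in range(len(keep)) if abs(theta[i]) < half[i]]
        if not insignificant:
            return [names[j] for j in keep], theta
        # truncate the least significant parameter and repeat the sifting
        keep.pop(min(insignificant, key=lambda i: abs(theta[i]) / half[i]))

# synthetic demo: y depends on x1 and x2 but not on x3
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=200)
kept, est = refine_model(X, y, ["x1", "x2", "x3"])
print(kept)   # typically only x1 and x2 survive the sifting
```

The interval half-width uses a normal quantile for brevity; a t-quantile would be the stricter choice at small sample sizes.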

1. The study of influence of relevant physical parameters variations on the estimates of the effective doses of Rn-222

International Nuclear Information System (INIS)

Ridzikova, A.; Fronka, A.; Moucka, L.

2004-01-01

Based on the analysis of 12 weekly continuous measurements and 4 integral measurements performed in different seasons in actual apartment rooms, bedrooms in particular, we attempted to identify the uncertainties involved in the estimation of radiation doses to lung tissue. We found that the parameters of time of residence, concentration, and equilibrium factor can substantially affect the estimate of the overall effective dose. A weekly averaged concentration measured in a single term is not sufficient for a fairly accurate estimate; the equilibrium factor f must also be known and the actual individual time of residence must be estimated if this approach to dose estimation is to be adopted
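At its core, the dose estimate is a product of the three parameters identified above as dominant. A hedged sketch (the dose conversion factor below is an illustrative placeholder, not a value from the record):

```python
def radon_effective_dose(c_rn_bq_m3, f_eq, hours, dcf_nsv=9.0):
    """Illustrative dose estimate; the DCF value is a placeholder assumption.

    c_rn_bq_m3: mean radon concentration [Bq/m^3]
    f_eq:       equilibrium factor between radon gas and its progeny
    hours:      time of residence [h]
    dcf_nsv:    dose conversion factor [nSv per (Bq*h)/m^3 of EEC]
    Returns the effective dose in mSv.
    """
    eec = c_rn_bq_m3 * f_eq               # equilibrium-equivalent concentration
    return eec * hours * dcf_nsv * 1e-6   # nSv -> mSv

# the three dominant parameters: concentration, equilibrium factor, residence time
base = radon_effective_dose(100.0, 0.4, 7000.0)
print(round(base, 2))   # -> 2.52 (mSv)
```

Because the estimate is linear in each factor, an uncertainty in any one of them propagates proportionally into the dose, which is why a single weekly concentration average is insufficient on its own.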

2. Using Mathematical Modeling Methods for Estimating Entrance Flow Heterogeneity Impact on Aviation GTE Parameters and Performances

Directory of Open Access Journals (Sweden)

Yu. A. Ezrokhi

2017-01-01

Full Text Available The paper considers methodological approaches to mathematical models (MM) of various levels used to estimate the impact of entrance flow heterogeneity on the main parameters and performances of an aviation GTE and its units. Using the example calculation of a twin-shaft turbofan engine in cruise mode, it demonstrates the capability of an engineering mathematical model to determine the impact of total pressure field distortion on engine thrust and air flow parameters, as well as on the gas-dynamic stability margins of both compressors. It is shown that the presented first-level mathematical model allows a sufficiently accurate estimate of the impact of entrance total pressure heterogeneity on the engine parameters. The reliability of the calculations is confirmed by comparison with results obtained from well-validated 2D and 3D mathematical models of the engine, which have been repeatedly identified against experimental results. It is shown that the results obtained, including the reduced stability margins of both compressors, can be used for tentative estimates when choosing a desirable stability margin that provides steady operation of the compressors and engine over the entire range of operating modes. Carrying out a definitive verification calculation using a specialized higher-level engine MM will not only confirm the results obtained, but also reduce their expected error with respect to the real values reached in tests.

3. Multi-objective genetic algorithm parameter estimation in a reduced nuclear reactor model

Energy Technology Data Exchange (ETDEWEB)

Marseguerra, M.; Zio, E.; Canetta, R. [Polytechnic of Milan, Dept. of Nuclear Engineering, Milano (Italy)

2005-07-01

The fast increase in computing power has rendered, and will continue to render, more and more feasible the incorporation of dynamics in the safety and reliability models of complex engineering systems. In particular, the Monte Carlo simulation framework offers a natural environment for estimating the reliability of systems with dynamic features. However, the time-integration of the dynamic processes may render the Monte Carlo simulation quite burdensome so that it becomes mandatory to resort to validated, simplified models of process evolution. Such models are typically based on lumped effective parameters whose values need to be suitably estimated so as to best fit to the available plant data. In this paper we propose a multi-objective genetic algorithm approach for the estimation of the effective parameters of a simplified model of nuclear reactor dynamics. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest to the actual evolution profiles. A case study is reported in which the real reactor is simulated by the QUAndry based Reactor Kinetics (Quark) code available from the Nuclear Energy Agency and the simplified model is based on the point kinetics approximation to describe the neutron balance in the core and on thermal equilibrium relations to describe the energy exchange between the different loops. (authors)
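The calibration loop described above can be sketched with a single-objective genetic algorithm fitting two lumped parameters of a hypothetical first-order response profile (the reactor model, the QUARK-simulated data, and the multi-objective ranking are beyond this illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 50)

def model(p):
    """Lumped first-order response with effective gain K and time constant tau."""
    K, tau = p
    return K * (1.0 - np.exp(-t / tau))

# hypothetical "plant" evolution profile the simplified model must reproduce
target = model((2.0, 3.0)) + 0.01 * rng.normal(size=t.size)

def fitness(p):
    return -np.sum((model(p) - target) ** 2)    # higher is better

bounds = np.array([[0.1, 5.0], [0.1, 10.0]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))

for _ in range(60):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[::-1][:8]]   # truncation selection
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = elite[rng.integers(0, len(elite), size=2)]
        w = rng.uniform()
        child = w * a + (1.0 - w) * b           # blend crossover
        child += 0.05 * rng.normal(size=2)      # Gaussian mutation
        children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.vstack([elite] + children)

best = max(pop, key=fitness)
print(best)   # should approach the true (K, tau) = (2, 3)
```

A true multi-objective variant would keep a Pareto front over several response profiles instead of a single summed fitness.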

4. Multi-objective genetic algorithm parameter estimation in a reduced nuclear reactor model

International Nuclear Information System (INIS)

Marseguerra, M.; Zio, E.; Canetta, R.

2005-01-01

The fast increase in computing power has rendered, and will continue to render, more and more feasible the incorporation of dynamics in the safety and reliability models of complex engineering systems. In particular, the Monte Carlo simulation framework offers a natural environment for estimating the reliability of systems with dynamic features. However, the time-integration of the dynamic processes may render the Monte Carlo simulation quite burdensome so that it becomes mandatory to resort to validated, simplified models of process evolution. Such models are typically based on lumped effective parameters whose values need to be suitably estimated so as to best fit to the available plant data. In this paper we propose a multi-objective genetic algorithm approach for the estimation of the effective parameters of a simplified model of nuclear reactor dynamics. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest to the actual evolution profiles. A case study is reported in which the real reactor is simulated by the QUAndry based Reactor Kinetics (Quark) code available from the Nuclear Energy Agency and the simplified model is based on the point kinetics approximation to describe the neutron balance in the core and on thermal equilibrium relations to describe the energy exchange between the different loops. (authors)

5. Empirical estimation of school siting parameter towards improving children's safety

Science.gov (United States)

Aziz, I. S.; Yusoff, Z. M.; Rasam, A. R. A.; Rahman, A. N. N. A.; Omar, D.

2014-02-01

Distance from school to home is a key determinant in ensuring the safety of children. School siting parameters are set to make sure that a particular school is located in a safe environment. The school siting parameters are issued by the Department of Town and Country Planning Malaysia (DTCP), and the latest review was in June 2012. These school siting parameters are crucially important as they can affect safety, school reputation, and the perception of the pupils and parents of the school. There have been many studies reviewing school siting parameters, since these change in conjunction with an ever-changing world. In this study, the focus is the impact of school siting parameters on low-income people living in an urban area, specifically in Johor Bahru, Malaysia. To achieve that, this study uses two methods, on-site and off-site. The on-site method is to give questionnaires to people, and the off-site method is to use a Geographic Information System (GIS) and Statistical Product and Service Solutions (SPSS) to analyse the results obtained from the questionnaire. The output is a map of suitable safe distances from school to home. The results of this study will be useful to low-income families, as their children tend to walk to school rather than use transportation.

6. Site characterization: a spatial estimation approach

International Nuclear Information System (INIS)

Candy, J.V.; Mao, N.

1980-10-01

In this report the application of spatial estimation techniques or kriging to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques to reliably develop contour maps from various data sets is investigated. The estimator is developed theoretically in a simplified fashion using vector-matrix calculus. The practice of spatial estimation is discussed and the estimator is then applied to two groundwater aquifer systems and used also to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly

7. Structural observability analysis and EKF based parameter estimation of building heating models

Directory of Open Access Journals (Sweden)

D.W.U. Perera

2016-07-01

Full Text Available Research into enhanced energy-efficient buildings has received much attention in recent years owing to their high energy consumption. Increasing energy needs can be precisely controlled by practicing advanced controllers for building Heating, Ventilation, and Air-Conditioning (HVAC) systems. Advanced controllers require a mathematical building heating model to operate, and these models need to be accurate and computationally efficient. One main concern associated with such models is the accurate estimation of the unknown model parameters. This paper presents the feasibility of implementing a simplified building heating model and the computation of physical parameters using an off-line approach. Structural observability analysis is conducted using graph-theoretic techniques to analyze the observability of the developed system model. Then an Extended Kalman Filter (EKF) algorithm is utilized for parameter estimation using real measurements of a single-zone building. The simulation-based results confirm that even with a simple model, the EKF follows the state variables accurately. The predicted parameters vary depending on the inputs and disturbances.
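A minimal illustration of EKF-based parameter estimation for a heating model (a hypothetical single-zone lumped model with made-up constants, not the paper's): the unknown heat-loss coefficient is appended to the state vector and estimated from noisy temperature measurements.

```python
import numpy as np

# hypothetical single-zone model: C*dT/dt = u - k*(T - T_out),
# with heat capacity C known and loss coefficient k unknown
dt, C, T_out, u = 60.0, 5e5, 5.0, 800.0
k_true = 60.0
rng = np.random.default_rng(2)

def step(T, k):
    return T + dt * (u - k * (T - T_out)) / C

# simulate noisy indoor-temperature measurements
T, zs = 20.0, []
for _ in range(400):
    T = step(T, k_true)
    zs.append(T + 0.05 * rng.normal())

# EKF on the augmented state x = [T, k]; k is modeled as a slow random walk
x = np.array([20.0, 20.0])            # deliberately poor initial guess for k
P = np.diag([1.0, 2500.0])
Q = np.diag([1e-4, 1e-4])
R = 0.05 ** 2
H = np.array([[1.0, 0.0]])
for z in zs:
    # predict: propagate the state and linearize around the current estimate
    F = np.array([[1.0 - dt * x[1] / C, -dt * (x[0] - T_out) / C],
                  [0.0, 1.0]])
    x = np.array([step(x[0], x[1]), x[1]])
    P = F @ P @ F.T + Q
    # update with the temperature measurement
    S = (H @ P @ H.T).item() + R
    K = (P @ H.T / S).ravel()
    x = x + K * (z - x[0])
    P = (np.eye(2) - np.outer(K, H.ravel())) @ P

print(round(x[1], 1))   # estimate of k; should approach k_true = 60
```

The state augmentation trick is standard: a constant parameter becomes a state with (near-)zero process noise, and the same filter recursion estimates it alongside the temperature.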

8. Inference of reactive transport model parameters using a Bayesian multivariate approach

Science.gov (United States)

Carniato, Luca; Schoups, Gerrit; van de Giesen, Nick

2014-08-01

Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least squares with weight estimation (WLS(we)) where weights are estimated from the data together with the parameters. In this study, we formulate the parameter estimation task as a multivariate Bayesian inference problem. The WLS and WLS(we) methods are special cases in this framework, corresponding to specific prior assumptions about the residual covariance matrix. The Bayesian perspective allows for generalizations to cases where residual correlation is important and for efficient inference by analytically integrating out the variances (weights) and selected covariances from the joint posterior. Specifically, the WLS and WLS(we) methods are compared to a multivariate (MV) approach that accounts for specific residual correlations without the need for explicit estimation of the error parameters. When applied to inference of reactive transport model parameters from column-scale data on dissolved species concentrations, the following results were obtained: (1) accounting for residual correlation between species provides more accurate parameter estimation at high residual correlation levels, whereas its influence on predictive uncertainty is negligible, (2) integrating out the (co)variances leads to an efficient estimation of the full joint posterior with a reduced computational effort compared to the WLS(we) method, and (3) in the presence of model structural errors, none of the methods is able to identify the correct parameter values.
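The WLS(we) idea of estimating weights together with the parameters can be sketched as an alternating scheme (the synthetic data and two-group structure are assumptions for illustration, not the paper's column experiments):

```python
import numpy as np

rng = np.random.default_rng(3)

# two "species" measured with very different noise levels (synthetic data)
x = np.linspace(0.0, 1.0, 60)
X = np.column_stack([np.ones_like(x), x])
theta_true = np.array([1.0, 4.0])
groups = np.repeat([0, 1], 30)
y = X @ theta_true + np.array([0.05, 0.5])[groups] * rng.normal(size=60)

# WLS(we)-style scheme: alternate a weighted fit and per-group weight estimates
w = np.ones(60)
for _ in range(20):
    W = np.diag(w)
    theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    r = y - X @ theta
    for g in (0, 1):
        w[groups == g] = 1.0 / np.mean(r[groups == g] ** 2)  # inverse variance

print(theta)   # close to theta_true = (1, 4)
```

The Bayesian formulation in the record goes further by placing priors on the residual covariance and integrating the weights out analytically rather than iterating on point estimates.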

9. Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly

Science.gov (United States)

Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.

2013-01-01

Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…

10. ESTIMATION OF CONSTANT AND TIME-VARYING DYNAMIC PARAMETERS OF HIV INFECTION IN A NONLINEAR DIFFERENTIAL EQUATION MODEL.

Science.gov (United States)

Liang, Hua; Miao, Hongyu; Wu, Hulin

2010-03-01

Modeling viral dynamics in HIV/AIDS studies has resulted in deep understanding of pathogenesis of HIV infection from which novel antiviral treatment guidance and strategies have been derived. Viral dynamics models based on nonlinear differential equations have been proposed and well developed over the past few decades. However, it is quite challenging to use experimental or clinical data to estimate the unknown parameters (both constant and time-varying parameters) in complex nonlinear differential equation models. Therefore, investigators usually fix some parameter values, from the literature or by experience, to obtain only parameter estimates of interest from clinical or experimental data. However, when such prior information is not available, it is desirable to determine all the parameter estimates from data. In this paper, we intend to combine the newly developed approaches, a multi-stage smoothing-based (MSSB) method and the spline-enhanced nonlinear least squares (SNLS) approach, to estimate all HIV viral dynamic parameters in a nonlinear differential equation model. In particular, to the best of our knowledge, this is the first attempt to propose a comparatively thorough procedure, accounting for both efficiency and accuracy, to rigorously estimate all key kinetic parameters in a nonlinear differential equation model of HIV dynamics from clinical data. These parameters include the proliferation rate and death rate of uninfected HIV-targeted cells, the average number of virions produced by an infected cell, and the infection rate which is related to the antiviral treatment effect and is time-varying. To validate the estimation methods, we verified the identifiability of the HIV viral dynamic model and performed simulation studies. We applied the proposed techniques to estimate the key HIV viral dynamic parameters for two individual AIDS patients treated with antiretroviral therapies. We demonstrate that HIV viral dynamics can be well characterized and
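A minimal sketch of the general workflow (a simplified target-cell-limited model with made-up parameter values, not the paper's MSSB/SNLS machinery): simulate the ODE system forward and minimize the squared error over the unknown infection rate.

```python
import numpy as np

# toy target-cell-limited model (a textbook simplification, not the paper's model):
#   dT/dt = -beta*T*V,  dI/dt = beta*T*V - delta*I,  dV/dt = p*I - c*V
def simulate(beta, delta=1.0, p=100.0, c=3.0, days=10.0, dt=0.01):
    """Euler-integrate the ODEs; return daily log10 viral load samples."""
    T, I, V = 1e4, 0.0, 1.0
    out = []
    for i in range(int(days / dt)):
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
        if i % 100 == 0:              # one sample per simulated day
            out.append(np.log10(max(V, 1e-12)))
    return np.array(out)

beta_true = 2e-5
data = simulate(beta_true)            # noise-free synthetic "viral load" data

# least-squares estimation of the infection rate by a coarse 1-D grid search
grid = np.linspace(0.5e-5, 5e-5, 91)
sse = [np.sum((simulate(b) - data) ** 2) for b in grid]
beta_hat = grid[int(np.argmin(sse))]
print(beta_hat)   # recovers beta_true
```

With real clinical data the search would be gradient-based over all parameters, and the time-varying infection rate would be represented with splines as the abstract describes; the grid search here only conveys the simulate-and-compare structure.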

11. Efficient estimates of cochlear hearing loss parameters in individual listeners

DEFF Research Database (Denmark)

Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten

2013-01-01

It has been suggested that the level corresponding to the knee-point of the basilar membrane (BM) input/output (I/O) function can be used to estimate the amount of inner- and outer hair-cell loss (IHL, OHL) in listeners with a moderate cochlear hearing impairment Plack et al. (2004). According...... to Jepsen and Dau (2011) IHL + OHL = HLT [dB], where HLT stands for total hearing loss. Hence having estimates of the total hearing loss and OHC loss, one can estimate the IHL. In the present study, results from forward masking experiments based on temporal masking curves (TMC; Nelson et al., 2001...... estimates of the knee-point level. Further, it is explored whether it is possible to estimate the compression ratio using only on-frequency TMCs. 10 normal-hearing and 10 hearing-impaired listeners (with mild-to-moderate sensorineural hearing loss) were tested at 1, 2 and 4 kHz. The results showed...

12. Real-Time Aerodynamic Parameter Estimation without Air Flow Angle Measurements

Science.gov (United States)

Morelli, Eugene A.

2010-01-01

A technique for estimating aerodynamic parameters in real time from flight data without air flow angle measurements is described and demonstrated. The method is applied to simulated F-16 data, and to flight data from a subscale jet transport aircraft. Modeling results obtained with the new approach using flight data without air flow angle measurements were compared to modeling results computed conventionally using flight data that included air flow angle measurements. Comparisons demonstrated that the new technique can provide accurate aerodynamic modeling results without air flow angle measurements, which are often difficult and expensive to obtain. Implications for efficient flight testing and flight safety are discussed.

13. Inference of reactive transport model parameters using a Bayesian multivariate approach

NARCIS (Netherlands)

Carniato, L.; Schoups, G.H.W.; Van de Giesen, N.C.

2014-01-01

Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least

14. Estimating Propensity Parameters Using Google PageRank and Genetic Algorithms.

Science.gov (United States)

Murrugarra, David; Miller, Jacob; Mueller, Alex N

2016-01-01

Stochastic Boolean networks, or more generally, stochastic discrete networks, are an important class of computational models for molecular interaction networks. The stochasticity stems from the updating schedule. Standard updating schedules include the synchronous update, where all the nodes are updated at the same time, and the asynchronous update, where a random node is updated at each time step. The former produces deterministic dynamics, while the latter produces stochastic dynamics. A more general stochastic setting considers propensity parameters for updating each node. Stochastic Discrete Dynamical Systems (SDDS) are a modeling framework that considers two propensity parameters for updating each node, using one when the update has a positive impact on the variable, that is, when the update causes the variable to increase its value, and the other when the update has a negative impact, that is, when the update causes it to decrease its value. This framework offers additional features for simulations but also adds complexity to the estimation of the propensity parameters. This paper presents a method for estimating the propensity parameters for SDDS. The method is based on adding noise to the system using the Google PageRank approach to make the system ergodic and thus guarantee the existence of a stationary distribution. Then, with the use of a genetic algorithm, the propensity parameters are estimated. Approximation techniques that make the search algorithms efficient are also presented, and Matlab/Octave code to test the algorithms is available at http://www.ms.uky.edu/~dmu228/GeneticAlg/Code.html.
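The ergodicity trick described above can be sketched directly (the transition matrix below is an illustrative toy, not an SDDS): mixing any stochastic matrix with uniform teleportation makes it strictly positive, so a unique stationary distribution exists.

```python
import numpy as np

# toy 4-state transition matrix with an absorbing state (not ergodic as-is)
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.2, 0.3, 0.5, 0.0],
              [0.0, 0.0, 0.4, 0.6],
              [0.0, 0.0, 0.0, 1.0]])   # state 3 is absorbing

# PageRank-style perturbation: mix with uniform teleportation noise
alpha = 0.15
n = P.shape[0]
G = (1.0 - alpha) * P + alpha / n     # strictly positive => ergodic chain

# stationary distribution by power iteration
pi = np.full(n, 1.0 / n)
for _ in range(500):
    pi = pi @ G
pi /= pi.sum()
print(pi)
```

Once a stationary distribution is guaranteed, a genetic algorithm can score candidate propensity parameters by comparing the resulting stationary distribution to observed data, as the record outlines.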

15. Estimating Propensity Parameters using Google PageRank and Genetic Algorithms

Directory of Open Access Journals (Sweden)

David Murrugarra

2016-11-01

Full Text Available Stochastic Boolean networks, or more generally, stochastic discrete networks, are an important class of computational models for molecular interaction networks. The stochasticity stems from the updating schedule. Standard updating schedules include the synchronous update, where all the nodes are updated at the same time, and the asynchronous update, where a random node is updated at each time step. The former produces deterministic dynamics, while the latter produces stochastic dynamics. A more general stochastic setting considers propensity parameters for updating each node. Stochastic Discrete Dynamical Systems (SDDS) are a modeling framework that considers two propensity parameters for updating each node, using one when the update has a positive impact on the variable, that is, when the update causes the variable to increase its value, and the other when the update has a negative impact, that is, when the update causes it to decrease its value. This framework offers additional features for simulations but also adds complexity to the estimation of the propensity parameters. This paper presents a method for estimating the propensity parameters for SDDS. The method is based on adding noise to the system using the Google PageRank approach to make the system ergodic and thus guarantee the existence of a stationary distribution. Then, with the use of a genetic algorithm, the propensity parameters are estimated. Approximation techniques that make the search algorithms efficient are also presented, and Matlab/Octave code to test the algorithms is available at http://www.ms.uky.edu/~dmu228/GeneticAlg/Code.html.

16. Sugarcane maturity estimation through edaphic-climatic parameters

Directory of Open Access Journals (Sweden)

Scarpari Maximiliano Salles

2004-01-01

Full Text Available Sugarcane (Saccharum officinarum L.) grows under different weather conditions that directly affect crop maturation. Models that predict raw-material quality are important tools in sugarcane crop management; their goal is to provide productivity estimates during harvesting, increasing the efficiency of strategic and administrative decisions. The objective of this work was to develop a model to predict Total Recoverable Sugars (TRS) during harvesting, using data related to production factors such as soil water storage and negative degree-days. The database of a sugar mill for the crop seasons 1999/2000, 2000/2001 and 2001/2002 was analyzed, and statistical models were tested to estimate raw-material quality. The maturity model for one-year-old sugarcane proved to be significant, with a coefficient of determination (R²) of 0.7049*. No differences were detected between measured and estimated data in the simulation (P < 0.05).

17. An Introduction to Goodness of Fit for PMU Parameter Estimation

Energy Technology Data Exchange (ETDEWEB)

Riepnieks, Artis; Kirkham, Harold

2017-10-01

New results of measurements of phasor-like signals are presented, based on our previous work on the topic. In this document an improved estimation method is described. The algorithm (which is realized in MATLAB software) is discussed. We examine the effect of noisy and distorted signals on the Goodness of Fit metric. The estimation method is shown to perform very well with clean data, with a measurement window as short as half a cycle and as few as 5 samples per cycle. The Goodness of Fit decreases predictably with added phase noise, and seems to be acceptable even with visible distortion in the signal. While the exact results we obtain are specific to our method of estimation, the Goodness of Fit method could be implemented in any phasor measurement unit.

18. Response-Based Estimation of Sea State Parameters

DEFF Research Database (Denmark)

Nielsen, Ulrik Dam

2007-01-01

of measured ship responses. It is therefore interesting to investigate how the filtering aspect, introduced by FRF, affects the final outcome of the estimation procedures. The paper contains a study based on numerical generated time series, and the study shows that filtering has an influence...... calculated by a 3-D time domain code and by closed-form (analytical) expressions, respectively. Based on comparisons with wave radar measurements and satellite measurements it is seen that the wave estimations based on closedform expressions exhibit a reasonable energy content, but the distribution of energy...

19. Application of Parameter Estimation for Diffusions and Mixture Models

DEFF Research Database (Denmark)

Nolsøe, Kim

The first part of this thesis proposes a method to determine the preferred number of structures, their proportions and the corresponding geometrical shapes of an m-membered ring molecule. This is obtained by formulating a statistical model for the data and constructing an algorithm which samples...... with the posterior score function. From an application point of view this methodology is easy to apply, since the optimal estimating function G(·; Xt1, …, Xtn) is equal to the classical optimal estimating function, plus a correction term which takes into account the prior information. The methodology is particularly...

20. Estimation of beech pyrolysis kinetic parameters by Shuffled Complex Evolution.

Science.gov (United States)

Ding, Yanming; Wang, Changjian; Chaos, Marcos; Chen, Ruiyu; Lu, Shouxiang

2016-01-01

The pyrolysis kinetics of a typical biomass energy feedstock, beech, was investigated based on thermogravimetric analysis over a wide heating rate range from 5K/min to 80K/min. A three-component (corresponding to hemicellulose, cellulose and lignin) parallel decomposition reaction scheme was applied to describe the experimental data. The resulting kinetic reaction model was coupled to an evolutionary optimization algorithm (Shuffled Complex Evolution, SCE) to obtain model parameters. To the authors' knowledge, this is the first study in which SCE has been used in the context of thermogravimetry. The kinetic parameters were simultaneously optimized against data for 10, 20 and 60K/min heating rates, providing excellent fits to experimental data. Furthermore, it was shown that the optimized parameters were applicable to heating rates (5 and 80K/min) beyond those used to generate them. Finally, the predicted results based on optimized parameters were contrasted with those based on the literature. Copyright © 2015 Elsevier Ltd. All rights reserved.
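The three-component parallel scheme can be sketched as follows (the component fractions and Arrhenius parameters are illustrative placeholders, not the SCE-optimized values from the paper):

```python
import numpy as np

R_GAS = 8.314
# (mass fraction, A [1/s], E [J/mol]) for hemicellulose, cellulose, lignin;
# these numbers are illustrative placeholders, not the paper's fitted values
components = [
    (0.30, 1e10, 1.1e5),
    (0.45, 1e14, 1.9e5),
    (0.25, 1e2,  6.0e4),
]

def tga_curve(rate_K_per_min, T0=300.0, T1=900.0, dT=0.5):
    """Residual mass fraction vs temperature for a linear heating program."""
    beta = rate_K_per_min / 60.0          # heating rate in K/s
    T = np.arange(T0, T1, dT)
    dt = dT / beta                        # time spent per temperature step
    mass = np.ones_like(T)
    alphas = [0.0, 0.0, 0.0]              # conversion of each component
    for i, Ti in enumerate(T):
        for j, (frac, A, E) in enumerate(components):
            k = A * np.exp(-E / (R_GAS * Ti))       # first-order Arrhenius rate
            alphas[j] = min(1.0, alphas[j] + dt * k * (1.0 - alphas[j]))
        mass[i] = 1.0 - sum(f * a for (f, _, _), a in zip(components, alphas))
    return T, mass

T, m5 = tga_curve(5.0)
_, m80 = tga_curve(80.0)
t50_5 = T[np.argmax(m5 < 0.5)]            # temperature at 50% mass loss
t50_80 = T[np.argmax(m80 < 0.5)]
print(t50_5, t50_80)   # faster heating shifts mass loss to higher temperature
```

An optimizer such as SCE would wrap this forward model, adjusting the nine kinetic parameters to minimize the misfit against measured thermogravimetric curves at several heating rates simultaneously.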

1. LIKELIHOOD ESTIMATION OF PARAMETERS USING SIMULTANEOUSLY MONITORED PROCESSES

DEFF Research Database (Denmark)

Friis-Hansen, Peter; Ditlevsen, Ove Dalager

2004-01-01

The topic is maximum likelihood inference from several simultaneously monitored response processes of a structure, to obtain knowledge about the parameters of other important response processes that are not monitored, when the structure is subject to some Gaussian load field in space and time. ... The considered example is a ship sailing with a given speed through a Gaussian wave field.

2. parameter extraction and estimation based on the pv panel outdoor

African Journals Online (AJOL)

userpc

The five parameters in Equation (1) depend on the incident solar irradiance, the cell temperature, and on their reference values. These reference values are generally provided by manufacturers of PV modules for specified operating condition such as STC (Standard Test Conditions) for which the irradiance is 1000 and the.

3. Unconstrained parameter estimation for assessment of dynamic cerebral autoregulation

International Nuclear Information System (INIS)

Chacón, M; Nuñez, N; Henríquez, C; Panerai, R B

2008-01-01

Measurement of dynamic cerebral autoregulation (CA), the transient response of cerebral blood flow (CBF) to changes in arterial blood pressure (ABP), has been performed with an index of autoregulation (ARI), related to the parameters of a second-order differential equation model, namely gain (K), damping factor (D) and time constant (T). Limitations of the ARI were addressed by increasing its numerical resolution and generalizing the parameter space. In 16 healthy subjects, recordings of ABP (Finapres) and CBF velocity (ultrasound Doppler) were performed at rest, before, during and after 5% CO2 breathing, and for six repeated thigh cuff maneuvers. The unconstrained model produced lower predictive error (p < 0.001) than the original model. Unconstrained parameters (K'–D'–T') were significantly different from K–D–T but were still sensitive to different measurement conditions, such as the under-regulation induced by hypercapnia. The intra-subject variability of K' was significantly lower than that of the ARI, and this parameter did not show the unexpected occurrences of zero values observed with the ARI and the classical value of K. These results suggest that K' could be considered a more stable and reliable index of dynamic autoregulation than the ARI. Further studies are needed to validate this new index under different clinical conditions.
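A generic second-order regulator of the kind described (gain K, damping D, time constant T) can be simulated directly. This is a schematic sketch of such a model, not the exact ARI implementation:

```python
import numpy as np

def cbfv_step_response(K, D, T, dt=0.1, seconds=30.0):
    """Normalized CBFV response to a unit step change in ABP (generic sketch).

    Second-order regulator:  T^2 x'' + 2*D*T*x' + x = dP,
    with CBFV change = dP - K*x, so for 0 < K <= 1 the flow deviation
    settles at (1 - K) of the pressure step: K near 1 = strong regulation.
    """
    n = int(seconds / dt)
    x, v, dP = 0.0, 0.0, 1.0
    out = np.empty(n)
    for i in range(n):
        a = (dP - 2.0 * D * T * v - x) / (T * T)
        v += dt * a                   # semi-implicit Euler integration
        x += dt * v
        out[i] = dP - K * x
    return out

good = cbfv_step_response(K=0.9, D=1.5, T=2.0)   # strong regulation
poor = cbfv_step_response(K=0.2, D=1.5, T=2.0)   # weak regulation
print(good[-1], poor[-1])   # settle near 0.1 and 0.8, respectively
```

Fitting K, D and T (or the unconstrained K', D', T') to measured ABP and CBFV recordings then reduces to minimizing the prediction error of such a model, which is the estimation problem the abstract addresses.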

4. Measurement Error Estimation for Capacitive Voltage Transformer by Insulation Parameters

Directory of Open Access Journals (Sweden)

Bin Chen

2017-03-01

Full Text Available Measurement errors of a capacitive voltage transformer (CVT) are related to its equivalent parameters, to which its capacitive divider contributes the most. In daily operation, dielectric aging, moisture, dielectric breakdown, etc., exert combined effects on the capacitive divider's insulation characteristics, leading to fluctuations in the equivalent parameters which result in measurement errors. This paper proposes an equivalent circuit model to represent a CVT which incorporates the insulation characteristics of the capacitive divider. Through software simulation and laboratory experiments, the relationship between measurement errors and insulation parameters is obtained. It indicates that variation of the insulation parameters in a CVT will cause an appreciable measurement error. From field tests and calculation, equivalent capacitance mainly affects the magnitude error, while dielectric loss mainly affects the phase error. As the capacitance changes by 0.2%, the magnitude error can reach −0.2%; as the dielectric loss factor changes by 0.2%, the phase error can reach 5′. An increase of equivalent capacitance and dielectric loss factor in the high-voltage capacitor will cause a positive real-power measurement error, while an increase in the low-voltage capacitor will cause a negative real-power measurement error.

5. Estimation of Aerodynamic Parameters in Conditions of Measurement

Directory of Open Access Journals (Sweden)

Htang Om Moung

2017-01-01

Full Text Available The paper discusses the problem of aircraft parameter identification in the presence of measurement noise. It is assumed that all the signals involved in the identification process are subject to measurement noise, that is, normally distributed random measurement errors. Simulation results are presented which show the relation between the noise standard deviations and the accuracy of identification.

6. A general method of estimating stellar astrophysical parameters from photometry

NARCIS (Netherlands)

Belikov, A. N.; Roeser, S.

2008-01-01

Context. Applying photometric catalogs to the study of the population of the Galaxy is hampered by the impossibility of mapping photometric colors directly into astrophysical parameters. Most all-sky catalogs, like ASCC or 2MASS, are based upon broad-band photometric systems, and the use of broad...

7. Hierarchical Bayesian parameter estimation for cumulative prospect theory

NARCIS (Netherlands)

Nilsson, H.; Rieskamp, J.; Wagenmakers, E.-J.

2011-01-01

Cumulative prospect theory (CPT; Tversky & Kahneman, 1992) has provided one of the most influential accounts of how people make decisions under risk. CPT is a formal model with parameters that quantify psychological processes such as loss aversion, subjective values of gains and losses, and...

8. Parameter estimation in stochastic mammogram model by heuristic optimization techniques.

NARCIS (Netherlands)

Selvan, S.E.; Xavier, C.C.; Karssemeijer, N.; Sequeira, J.; Cherian, R.A.; Dhala, B.Y.

2006-01-01

The appearance of disproportionately large amounts of high-density breast parenchyma in mammograms has been found to be a strong indicator of the risk of developing breast cancer. Hence, the breast density model is popular for risk estimation or for monitoring breast density change in prevention or...

9. EVALUATING SOIL EROSION PARAMETER ESTIMATES FROM DIFFERENT DATA SOURCES

Science.gov (United States)

Topographic factors and soil loss estimates that were derived from three data sources (STATSGO, 30-m DEM, and 3-arc-second DEM) were compared. Slope magnitudes derived from the three data sources were consistently different. Slopes from the DEMs tended to provide a flattened sur...

10. Online Parameter Estimation for a Centrifugal Decanter System

DEFF Research Database (Denmark)

Larsen, Jesper Abildgaard; Alstrøm, Preben

2014-01-01

In many processing plants decanter systems are used for the separation of heterogeneous mixtures, and even though they account for a large fraction of the energy consumption, most decanters simply run at a fixed setpoint. Here, multi-model estimation is applied to a waste water treatment plant, and it...

11. Estimates of selection parameters in protein mutants of spring barley

International Nuclear Information System (INIS)

Gaul, H.; Walther, H.; Seibold, K.H.; Brunner, H.; Mikaelsen, K.

1976-01-01

Detailed studies have been made with induced protein mutants regarding a possible genetic advance in selection, including estimation of the genetic variation and of heritability coefficients. Estimates were obtained for protein content and protein yield. The variation of mutant lines in different environments was found to be many times as large as the variation of the line means. The detection of improved protein mutants therefore seems possible only in trials with more than one environment. The heritability of protein content and protein yield was estimated in different sets of environments and was found to be low; however, higher values were found with an increasing number of environments. At least four environments seem to be necessary to obtain reliable heritability estimates. The genetic component of the variation between lines was significant for protein content in all environmental combinations; for protein yield, only some environmental combinations showed significant differences. The expected genetic advance with one selection step was small for both protein traits. However, genetically significant differences between protein micromutants give a first indication that selection among protein mutants with small differences also seems possible. (author)
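The reported rise of heritability with the number of environments follows from the standard formula for line means, h² = Vg / (Vg + Ve/n) with n environments; the variance components below are hypothetical, chosen only to show the trend:

```python
# Sketch: heritability of line means as a function of the number of
# environments n, for hypothetical genetic and error variance components.
def heritability(v_genetic, v_error, n_env):
    """h2 = Vg / (Vg + Ve/n) for line means averaged over n environments."""
    return v_genetic / (v_genetic + v_error / n_env)

for n in (1, 2, 4, 8):
    print(f"n={n}: h2={heritability(1.0, 10.0, n):.3f}")
```

With a large environmental variance relative to the genetic one, h² stays low for one or two environments and only becomes appreciable around four, in line with the abstract's conclusion.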

12. On Structure, Family and Parameter Estimation of Hierarchical Archimedean Copulas

Czech Academy of Sciences Publication Activity Database

Górecki, J.; Hofert, M.; Holeňa, Martin

2017-01-01

Vol. 87, No. 17 (2017), pp. 3261-3324, ISSN 0094-9655. R&D Projects: GA ČR GA17-01251S. Institutional support: RVO:67985807. Keywords: copula estimation; goodness-of-fit; hierarchical Archimedean copula; structure determination. Subject RIV: IN - Informatics, Computer Science. OECD field: Statistics and probability. Impact factor: 0.757, year: 2016

13. Estimation of reservoir parameter using a hybrid neural network

Energy Technology Data Exchange (ETDEWEB)

Aminzadeh, F. [FACT, Suite 201-225, 1401 S.W. FWY Sugarland, TX (United States); Barhen, J.; Glover, C.W. [Center for Engineering Systems Advanced Research, Oak Ridge National Laboratory, Oak Ridge, TN (United States); Toomarian, N.B. [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA (United States)

1999-11-01

Estimation of an oil field's reservoir properties from seismic data is a crucial issue, and the accuracy of those estimates and their associated uncertainty are important information as well. This paper demonstrates the use of the k-fold cross-validation technique to obtain a confidence bound on an Artificial Neural Network's (ANN) accuracy statistic from a finite sample set. In addition, we show that an ANN's classification accuracy is dramatically improved by transforming the ANN's input feature space into a dimensionally smaller new input space that maximizes the linear separation between classes. The ANN's convergence time and accuracy are thereby improved, because the ANN merely has to find nonlinear perturbations to the starting linear decision boundaries. These techniques for estimating ANN accuracy bounds and for transforming the feature space are demonstrated on the problem of estimating the sand thickness in an oil field reservoir from remotely sensed seismic data alone.
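The k-fold idea for bounding a classifier's accuracy from a finite sample can be sketched as follows; a trivial majority-class classifier on synthetic labels stands in for the paper's ANN, and the normal-approximation interval over fold accuracies is only one of several possible bounds:

```python
# Sketch: k-fold cross validation yielding a mean accuracy and a rough
# confidence half-width from the spread across folds. Data are synthetic.
import math
import random
import statistics

random.seed(1)

def kfold_accuracy(ys, k=5):
    """Per-fold accuracies of a trivial majority-class classifier."""
    idx = list(range(len(ys)))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    accs = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        majority = round(statistics.mean(ys[i] for i in train))
        accs.append(statistics.mean(
            1.0 if ys[i] == majority else 0.0 for i in fold))
    return accs

ys = [1] * 70 + [0] * 30          # synthetic binary labels
accs = kfold_accuracy(ys)
mean = statistics.mean(accs)
half = 1.96 * statistics.stdev(accs) / math.sqrt(len(accs))
print(f"accuracy ~ {mean:.2f} +/- {half:.2f} (95% CI over folds)")
```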

14. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

DEFF Research Database (Denmark)

Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

2010-01-01

Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficul...

15. Parameters estimation for X-ray sources: positions

International Nuclear Information System (INIS)

Avni, Y.

1977-01-01

It is shown that the sizes of the positional error boxes for X-ray sources can be determined using an estimation method that we have previously formulated in general form and applied to spectral analyses. It is explained how this method can be used with scanning X-ray telescopes, with rotating modulation collimators, and with HEAO-A. (author)

16. Parameter estimation of electricity spot models from futures prices

NARCIS (Netherlands)

Aihara, ShinIchi; Bagchi, Arunabha; Imreizeeq, E.S.N.; Walter, E.

We consider a slight perturbation of the Schwartz-Smith model for the electricity futures prices and the resulting modified spot model. Using the martingale property of the modified price under the risk-neutral measure, we derive the arbitrage-free model for the spot and futures prices. We estimate...

17. Estimation of fracture parameters using elastic full-waveform inversion

KAUST Repository

Zhang, Zhendong; Alkhalifah, Tariq Ali; Oh, Juwon; Tsvankin, Ilya

2017-01-01

A regularization term is added to the objective function to improve the estimation of the fracture azimuth, which is otherwise poorly constrained. The cracks are assumed to be penny-shaped to reduce the nonuniqueness in the inverted fracture weaknesses and achieve...

18. Bayesian Estimation of Two-Parameter Weibull Distribution Using Extension of Jeffreys' Prior Information with Three Loss Functions

Directory of Open Access Journals (Sweden)

Chris Bambey Guure

2012-01-01

Full Text Available The Weibull distribution has been observed to be one of the most useful distributions for modelling and analysing lifetime data in engineering, biology, and other fields. Studies have been carried out vigorously in the literature to determine the best method of estimating its parameters. Recently, much attention has been given to the Bayesian approach to parameter estimation, which is in contention with other estimation methods. In this paper, we examine the performance of the maximum likelihood estimator and of Bayesian estimators using an extension of Jeffreys' prior information with three loss functions, namely the linear exponential loss, the general entropy loss, and the squared error loss function, for estimating the two-parameter Weibull failure time distribution. The methods are compared by mean squared error in a simulation study with varying sample sizes. The results show that the Bayesian estimator using the extension of Jeffreys' prior under the linear exponential loss function in most cases gives the smallest mean squared error and absolute bias for both the scale parameter α and the shape parameter β, for the given values of the extension of Jeffreys' prior.
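The simulation design described above, scoring estimators by mean squared error over repeated Weibull samples, can be sketched in simplified form. Here only the scale parameter is estimated, by maximum likelihood with the shape assumed known, an assumption the paper does not make; the true parameter values are hypothetical:

```python
# Sketch: repeated Weibull samples, closed-form MLE of the scale parameter
# (shape known), scored by mean squared error. Values are hypothetical.
import random
import statistics

random.seed(2)
alpha_true, beta_true = 2.0, 1.5   # scale, shape

def mle_scale(sample, beta):
    # With known shape beta, the likelihood in alpha is maximized at
    # alpha_hat = (mean of x_i^beta)^(1/beta).
    return statistics.mean(x ** beta for x in sample) ** (1.0 / beta)

def mse(n_obs, n_rep=500):
    """Mean squared error of the scale MLE over n_rep replications."""
    errs = []
    for _ in range(n_rep):
        sample = [random.weibullvariate(alpha_true, beta_true)
                  for _ in range(n_obs)]
        errs.append((mle_scale(sample, beta_true) - alpha_true) ** 2)
    return statistics.mean(errs)

for n in (10, 50):
    print(f"n={n}: MSE={mse(n):.4f}")
```

A full replication of the paper would add the Bayesian estimators under the three loss functions and compare their MSE curves against this MLE baseline.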

19. Asymptotic analysis of the role of spatial sampling for covariance parameter estimation of Gaussian processes

International Nuclear Information System (INIS)

Bachoc, Francois

2014-01-01

Covariance parameter estimation of Gaussian processes is analyzed in an asymptotic framework. The spatial sampling is a randomly perturbed regular grid and its deviation from the perfect regular grid is controlled by a single scalar regularity parameter. Consistency and asymptotic normality are proved for the Maximum Likelihood and Cross Validation estimators of the covariance parameters. The asymptotic covariance matrices of the covariance parameter estimators are deterministic functions of the regularity parameter. By means of an exhaustive study of the asymptotic covariance matrices, it is shown that the estimation is improved when the regular grid is strongly perturbed. Hence, an asymptotic confirmation is given to the commonly admitted fact that using groups of observation points with small spacing is beneficial to covariance function estimation. Finally, the prediction error, using a consistent estimator of the covariance parameters, is analyzed in detail. (authors)
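Maximum-likelihood estimation of a covariance parameter from observations on a sampling grid, the setting analyzed above, can be sketched in one dimension; the exponential kernel, the unperturbed grid, and the grid search below are illustrative simplifications, not the paper's framework:

```python
# Sketch: ML estimation of the range parameter of an exponential covariance
# for a Gaussian process observed on a regular 1-D grid. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 40)            # sampling grid

def cov(theta):
    """Exponential covariance k(s, t) = exp(-|s - t| / theta)."""
    return np.exp(-np.abs(x[:, None] - x[None, :]) / theta)

theta_true = 0.3
jitter = 1e-8 * np.eye(x.size)           # numerical stabilization
y = rng.multivariate_normal(np.zeros(x.size), cov(theta_true) + jitter)

def neg_log_lik(theta):
    K = cov(theta) + jitter
    sign, logdet = np.linalg.slogdet(K)
    return 0.5 * (logdet + y @ np.linalg.solve(K, y))

# A grid search stands in for a proper optimizer.
grid = np.linspace(0.05, 1.0, 96)
theta_hat = grid[np.argmin([neg_log_lik(t) for t in grid])]
print(f"theta_true={theta_true}, theta_hat={theta_hat:.3f}")
```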

20. A Generalized Estimating Equations Approach to Model Heterogeneity and Time Dependence in Capture-Recapture Studies

Directory of Open Access Journals (Sweden)

Akanda Md. Abdus Salam

2017-03-01

Full Text Available Individual heterogeneity in capture probabilities and time dependence are fundamentally important for estimating closed animal population parameters in capture-recapture studies. A generalized estimating equations (GEE) approach accounts for linear correlation among capture-recapture occasions and for individual heterogeneity in capture probabilities in a closed-population capture-recapture model with individual heterogeneity and time variation. The estimated capture probabilities are then used to estimate the animal population parameters. Two real data sets are used for illustrative purposes, and a simulation study is carried out to assess the performance of the GEE estimator. A Quasi-Likelihood Information Criterion (QIC) is applied to select the best-fitting model. The approach performs well when the estimated population parameters depend on individual heterogeneity and on the nature of the linear correlation among capture-recapture occasions.
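The final step described above, turning estimated capture probabilities into a population size estimate, can be sketched with a Horvitz-Thompson-type estimator; the per-individual capture probabilities below are fixed by hand for illustration rather than fitted by GEE:

```python
# Sketch: Horvitz-Thompson-type closed-population estimate from per-animal
# capture probabilities. Probabilities are hypothetical, not GEE-fitted.
def population_estimate(p_detect_any):
    """N_hat = sum over observed animals of 1 / P(captured at least once)."""
    return sum(1.0 / p for p in p_detect_any)

occasions = 5
# Heterogeneous per-occasion capture probabilities for the observed animals:
per_occ = [0.2, 0.3, 0.25, 0.4]
# Probability of being captured on at least one of the occasions:
p_any = [1.0 - (1.0 - p) ** occasions for p in per_occ]
print(f"N_hat = {population_estimate(p_any):.1f}")
```

Each observed animal stands in for 1/p_any "similar" animals, so heterogeneity in capture probability directly shifts the population estimate, which is why modeling it matters.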