Singularity of Some Software Reliability Models and Parameter Estimation Method
(no author listed)
2000-01-01
According to the principle that "failure data is the basis of software reliability analysis", we built a software reliability expert system (SRES) using artificial intelligence techniques. By reasoning over the fitting results of a software project's failure data, the SRES recommends to users "the most suitable model" as the software reliability measurement model. We believe the SRES can largely overcome the inconsistency seen in applications of software reliability models. We report investigation results on the singularity and parameter estimation methods of the experimental models in the SRES.
Parameter estimation and reliable fault detection of electric motors
Dusan PROGOVAC; Le Yi WANG; George YIN
2014-01-01
Accurate model identification and fault detection are necessary for reliable motor control. Motor-characterizing parameters experience substantial changes due to aging, motor operating conditions, and faults. Consequently, motor parameters must be estimated accurately and reliably during operation. Based on enhanced model structures of electric motors that accommodate both normal and faulty modes, this paper introduces bias-corrected least-squares (LS) estimation algorithms that incorporate functions for correcting estimation bias, forgetting factors for capturing sudden faults, and recursive structures for efficient real-time implementation. Permanent magnet motors are used as a benchmark type for concrete algorithm development and evaluation. Algorithms are presented, their properties are established, and their accuracy and robustness are evaluated by simulation case studies under both normal operations and inter-turn winding faults. Implementation issues from different motor control schemes are also discussed.
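The recursive least-squares scheme with a forgetting factor described above can be sketched in minimal form. This is plain RLS with exponential forgetting (the paper's bias-correction functions are omitted), and the tracked parameter, noise level, and fault time are all hypothetical:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive least-squares step with forgetting factor lam.

    theta : current parameter estimate, shape (n,)
    P     : current covariance matrix, shape (n, n)
    phi   : regressor vector, shape (n,)
    y     : new scalar measurement
    """
    denom = lam + phi @ P @ phi
    K = P @ phi / denom                     # gain vector
    theta = theta + K * (y - phi @ theta)   # correct by prediction error
    P = (P - np.outer(K, phi @ P)) / lam    # discounted covariance update
    return theta, P

# Track a resistance-like parameter that jumps mid-stream (a crude "fault").
rng = np.random.default_rng(0)
true_theta = 2.0
theta, P = np.zeros(1), np.eye(1) * 100.0
for k in range(400):
    if k == 200:
        true_theta = 3.0                    # sudden fault
    phi = np.array([rng.uniform(0.5, 1.5)])
    y = true_theta * phi[0] + rng.normal(scale=0.01)
    theta, P = rls_update(theta, P, phi, y)
print(theta)  # tracks the post-fault value
```

The forgetting factor `lam < 1` discounts old data, which is what lets the estimate re-converge after the sudden parameter jump; with `lam = 1` the estimator would average over both regimes.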
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
Anupam Pathak
2014-11-01
Abstract: Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model with a wide variety of applications in many areas, and its main advantage is its suitability for lifetime events relative to other distributions. The uniformly minimum variance unbiased and maximum likelihood estimation methods are ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased and maximum likelihood estimators of the reliability functions R(t) = P(X > t) and P = P(X > Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique for obtaining these parametric functions is introduced, in which a major role is played by the powers of the parameter(s); the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through a simulation study, a comparison is made of the performance of these estimators with respect to bias, mean square error (MSE), 95% confidence length, and the corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and P for the two-parameter exponentiated Rayleigh distribution were found to be superior to the MLEs of R(t) and P.
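As an illustration of the quantities being estimated above, the sketch below evaluates R(t) = P(X > t) in closed form for an exponentiated Rayleigh distribution with assumed CDF F(x) = (1 − exp(−λx²))^α, checks it by Monte Carlo, and estimates the stress-strength probability P(X > Y). The parameterisation and all parameter values are assumptions, not the paper's:

```python
import numpy as np

# Exponentiated Rayleigh: F(x) = (1 - exp(-lam * x**2))**alpha
# (parameterisation assumed; the paper's convention may differ).

def reliability(t, alpha, lam):
    """R(t) = P(X > t) = 1 - F(t)."""
    return 1.0 - (1.0 - np.exp(-lam * t**2))**alpha

def sample(alpha, lam, size, rng):
    """Inverse-CDF sampling: solve F(x) = u for x."""
    u = rng.uniform(size=size)
    return np.sqrt(-np.log(1.0 - u**(1.0 / alpha)) / lam)

rng = np.random.default_rng(1)
alpha, lam = 2.0, 0.5
x = sample(alpha, lam, 200_000, rng)
t = 1.0
print(reliability(t, alpha, lam))   # exact R(t)
print(np.mean(x > t))               # Monte Carlo check, should agree closely

# Stress-strength P = P(X > Y) for two independent ER variables:
y = sample(1.0, 0.8, 200_000, rng)
print(np.mean(x > y))
```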
J. Arora
2014-09-01
Dental ageing is important in medico-legal cases when teeth are the only material available to the investigating agencies for identification of the deceased. Attrition, the wear of the occlusal surface of a tooth (a physiological change), can be used as a determinant parameter for this purpose. The present study was undertaken to examine the reliability of attrition as a sole parameter for age estimation among North Western adult Indians. 109 (43 males, 66 females) single-rooted, freshly extracted teeth from individuals ranging in age from 18 to 75 years were studied. Teeth were fixed, cleaned and sectioned labiolingually up to a thickness of 1 mm. Sections were then mounted and attrition was graded from 0-3 according to Gustafson's method. Scores were subjected to a regression equation to estimate the age of an individual. The results revealed that this parameter is reliable in individuals aged ≤ 60 years with an error of ±10 years. However, periodontal disease severely affected the accuracy of age estimation from this parameter, as is evident from the results. Statistically, no significant difference was noted in the absolute mean error of age between different age groups, nor between the sexes.
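The regression step above (age predicted from an attrition grade) amounts to ordinary least squares. The (grade, age) pairs below are hypothetical, not the study's measurements:

```python
import numpy as np

# Hypothetical (score, age) pairs; Gustafson-style attrition grades 0-3.
scores = np.array([0, 0, 1, 1, 1, 2, 2, 2, 3, 3], dtype=float)
ages   = np.array([19, 24, 30, 33, 38, 45, 49, 52, 58, 63], dtype=float)

# Fit age = b0 + b1 * score by ordinary least squares.
b1, b0 = np.polyfit(scores, ages, 1)
predicted = b0 + b1 * scores
residual_sd = np.std(ages - predicted, ddof=2)   # rough error band

print(f"age ~ {b0:.1f} + {b1:.1f} * attrition_grade, +/-{residual_sd:.1f} y")
```

The residual standard deviation plays the role of the study's ±10-year error band: a new tooth's grade is plugged into the fitted line and the band gives the expected accuracy.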
Alaa F. Sheta
2016-04-01
In this age of technology, building quality software is essential to competing in the business market. One of the major principles required for any quality, value-fulfilling business software product is reliability. Estimating software reliability early in the software development life cycle saves time and money, as it prevents spending larger sums fixing a defective software product after deployment. A Software Reliability Growth Model (SRGM) can be used to predict the number of failures that may be encountered during the software testing process. In this paper we explore the advantages of the Grey Wolf Optimization (GWO) algorithm in estimating the SRGM's parameters, with the objective of minimizing the difference between the estimated and the actual number of failures of the software system. We evaluated three different software reliability growth models: the Exponential Model (EXPM), the Power Model (POWM) and the Delayed S-Shaped Model (DSSM). In addition, we used three different datasets to conduct an experimental study showing the effectiveness of our approach.
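The fitting task described above can be sketched as minimising the squared error between observed cumulative failures and the Exponential Model m(t) = a(1 − e^(−bt)). SciPy's differential evolution is used here as a stand-in for the paper's Grey Wolf Optimizer (both are population-based global searches over the same objective), and the failure counts are invented:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Cumulative failures observed at test times 1..10 (hypothetical data).
t = np.arange(1, 11, dtype=float)
observed = np.array([12, 21, 28, 34, 39, 42, 45, 47, 48, 49], dtype=float)

def expm(t, a, b):
    """Exponential SRGM: expected cumulative failures m(t) = a(1 - e^{-bt})."""
    return a * (1.0 - np.exp(-b * t))

def sse(params):
    """Objective: sum of squared errors between model and observations."""
    a, b = params
    return np.sum((observed - expm(t, a, b))**2)

# Population-based global search over (a, b); GWO would minimise the same sse.
result = differential_evolution(sse, bounds=[(1, 200), (0.01, 2)], seed=0)
a_hat, b_hat = result.x
print(a_hat, b_hat)
```

Here `a_hat` is the estimated total number of faults and `b_hat` the detection rate; the POWM and DSSM cases differ only in the model function plugged into `sse`.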
Nyman, R. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Hegedus, D.; Tomic, B. [ENCONET Consulting GesmbH, Vienna (Austria); Lydell, B. [RSA Technologies, Vista, CA (United States)
1997-12-01
This report summarizes results and insights from the final phase of an R&D project on piping reliability sponsored by the Swedish Nuclear Power Inspectorate (SKI). The technical scope includes the development of an analysis framework for estimating piping reliability parameters from service data. The R&D has produced a large database on the operating experience with piping systems in commercial nuclear power plants worldwide, covering the period from 1970 to the present. The scope of the work emphasized pipe failures (i.e., flaws/cracks, leaks and ruptures) in light water reactors (LWRs). Pipe failures are rare events. A data reduction format was developed to ensure that homogeneous data sets are prepared from scarce service data. This data reduction format distinguishes between reliability attributes and reliability influence factors. The quantitative results of the analysis of service data are in the form of conditional probabilities of pipe rupture given failures (flaws/cracks, leaks or ruptures) and frequencies of pipe failures. Finally, the R&D by SKI produced an analysis framework in support of practical applications of service data in PSA. This multi-purpose framework, termed 'PFCA' (Pipe Failure Cause and Attribute), defines minimum requirements on piping reliability analysis. The application of service data should reflect the requirements of an application. Together with raw data summaries, this analysis framework enables the development of prior and posterior pipe rupture probability distributions. The framework supports LOCA frequency estimation and steam line break frequency estimation, as well as the development of optimized in-service inspection strategies. 63 refs, 30 tabs, 22 figs.
Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis.
Dumitrascu, Adela-Eliza; Lepadatescu, Badea; Dumitrascu, Dorin-Ion; Nedelcu, Anisor; Ciobanu, Doina Valentina
2015-01-01
Due to their prolonged use, wind turbines must be characterized by high reliability. This can be achieved through rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems leads to identifying the critical components, increasing the operating time, minimizing the failure rate, and minimizing maintenance costs. To estimate the energy produced by the wind turbine, an evaluation approach based on a Monte Carlo simulation model is developed which enables us to estimate the probability of minimum and maximum parameters. In our simulation process we used triangular distributions. The analysis of the simulation results has been focused on the interpretation of the relative frequency histograms and the cumulative distribution curve (ogive diagram), which indicates the probability of obtaining the daily or annual energy output depending on wind speed. The experimental research consists in estimating the reliability and unreliability functions and the hazard rate of the helical vertical-axis wind turbine designed and patented for the climatic conditions of Romanian regions. Also, the variation of power produced at different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of the experimental results indicates that this type of wind turbine is efficient at low wind speeds.
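The Monte Carlo step described above (triangular input distributions, then reading probabilities off the empirical distribution of energy output) can be sketched as follows. The power curve and all distribution parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Triangular(min, mode, max) inputs -- illustrative values, not the paper's.
wind_speed = rng.triangular(2.0, 5.0, 12.0, size=N)      # m/s
hours      = rng.triangular(6.0, 10.0, 16.0, size=N)     # productive h/day

def power_kw(v):
    """Toy power curve: cubic above a 2.5 m/s cut-in, capped at 20 kW."""
    p = np.clip(0.02 * v**3, 0.0, 20.0)
    p[v < 2.5] = 0.0
    return p

energy_kwh = power_kw(wind_speed) * hours

# Percentiles of the empirical distribution (the "ogive") give the
# probability of meeting a given daily energy target.
for q in (10, 50, 90):
    print(q, np.percentile(energy_kwh, q))
```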
Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian
2011-01-01
In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
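The rate-constant estimation described in this chapter summary can be sketched as a least-squares fit of an ODE solution to noisy concentration data, here for a hypothetical first-order reaction A → B with made-up rate and noise level:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

# First-order reaction A -> B: dCA/dt = -k * CA.  Estimate k from noisy CA(t).
t_obs = np.linspace(0, 10, 15)
k_true, ca0 = 0.4, 1.0
rng = np.random.default_rng(3)
ca_obs = ca0 * np.exp(-k_true * t_obs) + rng.normal(scale=0.01, size=t_obs.size)

def model(t, k):
    """Solve the ODE for a candidate k; return CA at the sample times."""
    sol = solve_ivp(lambda _, c: -k * c, (0, t_obs[-1]), [ca0],
                    t_eval=t, rtol=1e-8)
    return sol.y[0]

# Nonlinear least squares: optimiser coupled with dynamic model solution.
k_hat, k_cov = curve_fit(model, t_obs, ca_obs, p0=[0.1])
print(k_hat[0])  # close to the true rate constant
```

Each objective evaluation re-solves the dynamic model, which is exactly the "optimisation coupled with dynamic solution" pattern the chapter describes; maximum likelihood with Gaussian noise reduces to this same least-squares fit.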
2014-01-01
The purpose of this paper is to create an interval estimation of the fuzzy system reliability for the repairable multistate series-parallel system (RMSS). A two-sided fuzzy confidence interval for the fuzzy system reliability is constructed. The performance of the fuzzy confidence interval is considered based on the coverage probability and the expected length. In order to obtain the fuzzy system reliability, fuzzy set theory is applied to the system reliability problem when dealing with uncertainties in the RMSS. A fuzzy number with a triangular membership function is used for constructing the fuzzy failure rate and the fuzzy repair rate in the fuzzy reliability for the RMSS. The results show that a good interval estimator is one whose obtained coverage probability attains the expected confidence coefficient with the narrowest expected length. The model presented herein is an effective estimation method when the sample size is n ≥ 100. In addition, the optimal α-cuts for the narrowest lower expected length and the narrowest upper expected length are considered. PMID:24987728
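For reference, the α-cut of a triangular fuzzy number, the building block used above for the fuzzy failure and repair rates, is a simple interval computation. The rate values below are hypothetical:

```python
def alpha_cut(a, m, b, alpha):
    """Alpha-cut of a triangular fuzzy number (a, m, b).

    Returns the interval of values whose membership degree is >= alpha:
    membership rises linearly from a to the mode m, then falls to b.
    """
    return (a + alpha * (m - a), b - alpha * (b - m))

# Fuzzy failure rate: "about 0.02 per hour", with support [0.01, 0.04].
lo, hi = alpha_cut(0.01, 0.02, 0.04, 0.5)
print(lo, hi)  # (0.015, 0.03)
```

At α = 0 the cut is the full support, and at α = 1 it collapses to the mode; sweeping α generates the nested intervals from which the fuzzy confidence bounds are built.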
A Penalized Likelihood Approach to Parameter Estimation with Integral Reliability Constraints
Barry Smith
2015-06-01
Stress-strength reliability problems arise frequently in applied statistics and related fields. Often they involve two independent and possibly small samples of measurements on strength and breakdown pressures (stress). The goal of the researcher is to use the measurements to obtain inference on reliability, which is the probability that stress will exceed strength. This paper addresses the case where reliability is expressed in terms of an integral which has no closed-form solution and where the number of observed values of stress and strength is small. We find that the Lagrange approach to estimating the constrained likelihood, necessary for inference, often performs poorly. We introduce a penalized likelihood method, which appears to work well throughout. We use third-order likelihood methods to partially offset the issue of small samples. The proposed method is applied to draw inferences on reliability in stress-strength problems with independent exponentiated exponential distributions. Simulation studies are carried out to assess the accuracy of the proposed method and to compare it with some standard asymptotic methods.
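The reliability integral referred to above can be evaluated numerically. The sketch below computes R = P(X > Y) for independent exponentiated exponential variables, with assumed CDF F(x) = (1 − e^(−λx))^α; in the equal-rate case used here the integral has the known closed form α_X / (α_X + α_Y), which serves as a check:

```python
import numpy as np
from scipy.integrate import quad

def F(x, a, lam):
    """Exponentiated exponential CDF: (1 - e^{-lam x})^a."""
    return (1.0 - np.exp(-lam * x))**a

def f(x, a, lam):
    """Density of the exponentiated exponential."""
    return a * lam * np.exp(-lam * x) * (1.0 - np.exp(-lam * x))**(a - 1)

def stress_strength(ax, lx, ay, ly):
    """R = P(X > Y) = integral of f_Y(y) * (1 - F_X(y)) over y > 0."""
    integrand = lambda y: f(y, ay, ly) * (1.0 - F(y, ax, lx))
    r, _ = quad(integrand, 0, np.inf)
    return r

r = stress_strength(2.0, 1.0, 1.0, 1.0)
print(r)  # ~ 2/3 for these equal-rate parameters
```

With unequal rate parameters no closed form is available and quadrature (or the paper's likelihood machinery) is the only route, which is precisely the setting the paper addresses.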
Bustos, Cesar; Sandeen, Ben; Chennakesavalu, Shriram; Littenberg, Tyson; Farr, Ben; Kalogera, Vassiliki
2016-01-01
Gravitational Waves (GWs) were predicted by Einstein's Theory of General Relativity as ripples in space-time that propagate outward from a source. Strong GW sources consist of compact binary systems such as Binary Neutron Stars (BNS) or Binary Black Holes (BBHs) that experience orbital shrinkage (inspiral) and eventual merger. Indirect evidence for the existence of GWs has been obtained through radio pulsar studies in BNS systems. The study of BBHs and other compact objects is limited in the electromagnetic spectrum; therefore direct detections of GWs will open a new window into their nature. The effort targeting direct GW detection is anchored on the development of a detector known as Advanced LIGO (Laser Interferometer Gravitational-Wave Observatory). Although detecting GW sources represents an anticipated breakthrough in physics, making GW astrophysics a reality critically relies on our ability to determine and measure the physical parameters associated with GW sources. We use Markov Chain Monte Carlo (MCMC) simulations on high-performance computing clusters for parameter estimation on high-dimensional spaces (GW sources have 15 parameters). The quality of GW parameter estimation greatly depends on having the best possible knowledge of the expected waveform. Unfortunately, BBH GW production is very complex and our best waveforms are not valid across the full parameter space. With large-scale simulations we examine quantitatively the limitations of these waveforms in terms of extracting the astrophysical properties of BBH GW sources. We find that current waveforms are inadequate for BBHs of unequal masses and demonstrate that improved waveforms are critically needed.
Reliable estimation of adsorption isotherm parameters using adequate pore size distribution
Husseinzadeh, Danial; Shahsavand, Akbar [Ferdowsi University of Mashhad, Mashhad (Iran, Islamic Republic of)
2015-05-15
The equilibrium adsorption isotherm has a crucial effect on various characteristics of the solid adsorbent (e.g., pore volume, bulk density, surface area, pore geometry). A historical paradox exists in the conventional estimation of adsorption isotherm parameters. Traditionally, the total amount of adsorbed material (the total adsorption isotherm) has been considered equivalent to the local adsorption isotherm. This assumption is only valid when the corresponding pore size or energy distribution (PSD or ED) of the porous adsorbent can be successfully represented with a Dirac delta function. In practice, the actual PSD (or ED) is far from this assumption, and the traditional method for prediction of local adsorption isotherm parameters leads to serious errors. Until now, the powerful combination of inverse theory and the linear regularization technique has failed drastically when used for extraction of the PSD from real adsorption data. For this reason, all previous research used synthetic data, being unable to extract a proper PSD from the measured total adsorption isotherm with unrealistic local adsorption isotherm parameters. We propose a novel approach that can successfully provide the correct values of local adsorption isotherm parameters without any a priori, unrealistic assumptions. Two distinct methods are suggested, and several illustrative (synthetic and real experimental) examples are presented to clearly demonstrate the effectiveness of the newly proposed methods in computing the correct values of local adsorption isotherm parameters. The impressive performance of the so-called Iterative and Optima methods in extracting the correct PSD is validated using several experimental data sets.
Estimating Cosmological Parameter Covariance
Taylor, Andy
2014-01-01
We investigate the bias and error in estimates of the cosmological parameter covariance matrix, due to sampling or modelling the data covariance matrix, for likelihood width and peak scatter estimators. We show that these estimators do not coincide unless the data covariance is exactly known. For sampled data covariances, with Gaussian distributed data and parameters, the parameter covariance matrix estimated from the width of the likelihood has a Wishart distribution, from which we derive the mean and covariance. This mean is biased and we propose an unbiased estimator of the parameter covariance matrix. Comparing our analytic results to a numerical Wishart sampler of the data covariance matrix we find excellent agreement. An accurate ansatz for the mean parameter covariance for the peak scatter estimator is found, and we fit its covariance to our numerical analysis. The mean is again biased and we propose an unbiased estimator for the peak parameter covariance. For sampled data covariances the width estimat...
Estimation of Bridge Reliability Distributions
Thoft-Christensen, Palle
In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology. Therefore very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper to ...
Reliability estimation using kriging metamodel
Cho, Tae Min; Ju, Byeong Hyeon; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)
2006-08-15
In this study, a new method for reliability estimation using a kriging metamodel is proposed. The kriging metamodel can be determined by an appropriate sampling range and number of samples because there are no random errors in the Design and Analysis of Computer Experiments (DACE) model. The first kriging metamodel is built from widely ranged sampling points. The Advanced First Order Reliability Method (AFORM) is applied to the first kriging metamodel to estimate the reliability approximately. Then, a second kriging metamodel is constructed using additional sampling points with an updated sampling range. Monte Carlo Simulation (MCS) is applied to the second kriging metamodel to evaluate the reliability. The proposed method is applied to numerical examples, and the results are almost equal to the reference reliability.
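A minimal sketch of the surrogate-plus-MCS idea described above: fit a cheap interpolator to a modest number of limit-state evaluations, then run Monte Carlo on the surrogate instead of the expensive model. A Gaussian-kernel interpolator stands in here for a true kriging/DACE model, and the limit state is a toy linear function with invented numbers:

```python
import numpy as np

def g(x):
    """True limit state (expensive in practice): failure when g(x) < 0."""
    return 3.0 - x[..., 0] - x[..., 1]

rng = np.random.default_rng(7)
X_train = rng.uniform(-4, 4, size=(150, 2))   # widely ranged sampling points
y_train = g(X_train)

def fit_surrogate(X, y, h=1.2, reg=1e-6):
    """Gaussian-kernel interpolator: a kriging-like stand-in."""
    K = np.exp(-((X[:, None, :] - X[None, :, :])**2).sum(-1) / (2 * h**2))
    w = np.linalg.solve(K + reg * np.eye(len(X)), y)
    def predict(Xq):
        Kq = np.exp(-((Xq[:, None, :] - X[None, :, :])**2).sum(-1) / (2 * h**2))
        return Kq @ w
    return predict

surrogate = fit_surrogate(X_train, y_train)

# Monte Carlo on the cheap surrogate: standard-normal inputs.
X_mc = rng.standard_normal((20_000, 2))
pf = np.mean(surrogate(X_mc) < 0.0)
print(pf)  # theoretical value for this toy case: 1 - Phi(3/sqrt(2)) ~ 0.017
```

The two-stage refinement in the paper corresponds to repeating the fit with extra samples concentrated near the surrogate's estimated limit-state boundary.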
Optomechanical parameter estimation
Ang, Shan Zheng; Bowen, Warwick P; Tsang, Mankei
2013-01-01
We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér-Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation-maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér-Rao bound most closely. With its ability to estimate most of the system parameters, the EM algorithm is envisioned to be useful for optomechanical sensing, atomic magnetometry, and classical or quantum system identification applications in general.
Parameter Estimation Through Ignorance
Du, Hailiang
2015-01-01
Dynamical modelling lies at the heart of our understanding of physical systems. Its role in science is deeper than mere operational forecasting, in that it allows us to evaluate the adequacy of the mathematical structure of our models. Despite the importance of model parameters, there is no general method of parameter estimation outside linear systems. A new relatively simple method of parameter estimation for nonlinear systems is presented, based on variations in the accuracy of probability forecasts. It is illustrated on the Logistic Map, the Henon Map and the 12-D Lorenz96 flow, and its ability to outperform linear least squares in these systems is explored at various noise levels and sampling rates. As expected, it is more effective when the forecast error distributions are non-Gaussian. The new method selects parameter values by minimizing a proper, local skill score for continuous probability forecasts as a function of the parameter values. This new approach is easier to implement in practice than alter...
Revisiting Cosmological parameter estimation
Prasad, Jayanti
2014-01-01
Constraining theoretical models by estimating their parameters from cosmic microwave background (CMB) anisotropy data is one of the most active areas in cosmology. WMAP, Planck and other recent experiments have shown that the six-parameter standard $\Lambda$CDM cosmological model still best fits the data. Bayesian methods based on Markov Chain Monte Carlo (MCMC) sampling have played the leading role in parameter estimation from CMB data. In a recent study \cite{2012PhRvD..85l3008P} we showed that particle swarm optimization (PSO), a population-based search procedure, can also be used effectively to find the cosmological parameters that best fit the WMAP seven-year data. In the present work we show that PSO can not only find the best-fit point, it can also sample the parameter space quite effectively, to the extent that we can use the same analysis pipeline to process PSO-sampled points as is used to process points sampled by Markov chains, and get consistent res...
Calculation and Updating of Reliability Parameters in Probabilistic Safety Assessment
Zubair, Muhammad; Zhang, Zhijian; Khan, Salah Ud Din
2011-02-01
The internal events of a nuclear power plant are complex and include equipment maintenance, equipment damage, etc. These events affect the probability of the current risk level of the system as well as the equipment reliability parameter values, so they serve as an important basis for systematic analysis and calculation. This paper presents a method for calculating reliability parameters and updating them. The method is based on the binomial likelihood function and its conjugate beta distribution. Bayes' theorem is used to update the parameters. To implement the proposed method, a computer-based program was designed that helps estimate reliability parameters.
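The binomial-likelihood/conjugate-beta update described above reduces to two additions: a Beta(a, b) prior on the demand failure probability, combined with k failures in n demands, gives a Beta(a + k, b + n − k) posterior. The prior and evidence counts below are generic assumptions, not values from the paper:

```python
# Conjugate beta-binomial updating of a demand failure probability.

def update(a, b, failures, demands):
    """Posterior Beta parameters after observing the new evidence."""
    return a + failures, b + demands - failures

a, b = 1.0, 99.0                 # assumed generic prior, mean 0.01
a, b = update(a, b, 2, 300)      # plant evidence: 2 failures in 300 demands
posterior_mean = a / (a + b)
print(posterior_mean)            # (1 + 2) / (100 + 300) = 0.0075
```

The posterior mean sits between the prior mean (0.01) and the raw observed rate (2/300 ≈ 0.0067), weighted by the effective sample sizes, which is the behaviour one wants when service data are scarce.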
Itagaki, H. [Yokohama National University, Yokohama (Japan). Faculty of Engineering; Asada, H.; Ito, S. [National Aerospace Laboratory, Tokyo (Japan); Shinozuka, M.
1996-12-31
Risk-assessed structural positions in the pressurized fuselage of a transport-type aircraft designed with damage tolerance are taken as the subject of discussion. A small number of data obtained from inspections of these positions was used to discuss a Bayesian reliability analysis that can also estimate a proper non-periodic inspection schedule while estimating proper values for uncertain factors. As a result, the time period of fatigue crack initiation was determined according to the procedure of detailed visual inspections. The analysis method was found capable of estimating values that appear reasonable, and a proper inspection schedule using these values, despite putting the fatigue crack growth expression in a very simple form and treating both factors as uncertain. Thus, the effectiveness of the present analysis method was verified. This study also discussed the structural positions, the modeling of fatigue cracks that initiate and grow at those positions, conditions for failure, damage factors, and the capability of the inspection from different viewpoints. This reliability analysis method is thought to be effective also for other structures such as offshore structures. 18 refs., 8 figs., 1 tab.
Parameter Estimation in Multivariate Gamma Distribution
V S Vaidyanathan
2015-05-01
Multivariate gamma distribution finds abundant applications in stochastic modelling, hydrology and reliability. Parameter estimation in this distribution is challenging, as it involves many parameters to be estimated simultaneously. In this paper, the form of the multivariate gamma distribution proposed by Mathai and Moschopoulos [10] is considered. This form has nice properties in terms of marginal and conditional densities. A new method of estimation based on optimal search is proposed for estimating the parameters using the marginal distributions and the concepts of maximum likelihood, spacings and least squares. The proposed methodology is easy to implement and is free from calculus. It optimizes the objective function by searching over a wide range of values and determines the estimates of the parameters. The consistency of the estimates is demonstrated in terms of mean, standard deviation and mean square error through simulation studies for different choices of parameters.
Reliability Estimates for Power Supplies
Lee C. Cadwallader; Peter I. Petersen
2005-09-01
Failure rates for large power supplies at a fusion facility are critical knowledge needed to estimate availability of the facility or to set priorities for repairs and spare components. A study of the "failure to operate on demand" and "failure to continue to operate" failure rates has been performed for the large power supplies at DIII-D, which provide power to the magnet coils, the neutral beam injectors, the electron cyclotron heating systems, and the fast wave systems. When one of the power supplies fails to operate, the research program has to be either temporarily changed or halted. If one of the power supplies for the toroidal or ohmic heating coils fails, operations have to be suspended or the research is continued at de-rated parameters until a repair is completed. If one of the power supplies used in the auxiliary plasma heating systems fails, the research is often temporarily changed until a repair is completed. The power supplies are operated remotely and repairs are only performed when the power supplies are off line, so failure of a power supply does not pose any risk to personnel. The DIII-D Trouble Report database was used to determine the number of power supply faults (over 1,700 reports), and tokamak annual operations data supplied the number of shots, operating times, and power supply usage for the DIII-D operating campaigns between mid-1987 and 2004. Where possible, these power supply failure rates from DIII-D are compared to similar work performed for Joint European Torus equipment. These independent data sets support validation of the fusion-specific failure rate values.
ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS
muhammad zahid rashid
2011-04-01
The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values for the parameters and different sample sizes.
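Two of the estimators compared above, maximum likelihood and the method of moments for the two-parameter (location-scale) exponential distribution, have simple closed forms and can be checked by simulation. The true parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)
mu, theta = 2.0, 3.0                       # true location and scale
x = mu + rng.exponential(theta, size=5000)

# Maximum likelihood: location = sample minimum, scale = mean excess.
mu_mle = x.min()
theta_mle = x.mean() - mu_mle

# Method of moments: E[X] = mu + theta, Var[X] = theta^2.
theta_mm = x.std(ddof=1)
mu_mm = x.mean() - theta_mm

print(mu_mle, theta_mle)
print(mu_mm, theta_mm)
```

Repeating this over many replications and tabulating the mean square error of each estimator is exactly the comparison protocol the paper applies across its seven methods.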
State and parameter estimation in bio processes
Maher, M.; Roux, G.; Dahhou, B. [Centre National de la Recherche Scientifique (CNRS), 31 - Toulouse (France); Institut National des Sciences Appliquees (INSA), 31 - Toulouse (France)]
1994-12-31
A major difficulty in monitoring and control of bio-processes is the lack of reliable and simple sensors for following the evolution of the main state variables and parameters such as biomass, substrate, product, growth rate, etc. In this article, an adaptive estimation algorithm is proposed to recover the states and parameters in bio-processes. This estimator utilizes the physical process model and the reference model approach. Experiments on the estimation of biomass and product concentrations and specific growth rate, during batch, fed-batch and continuous fermentation processes, are presented. The results show the performance of this adaptive estimation approach. (authors) 12 refs.
Bayesian Missile System Reliability from Point Estimates
2014-10-28
This report uses the Maximum Entropy Principle (MEP) to convert point estimates into probability distributions to be used as priors for Bayesian reliability analysis of missile data, and illustrates the approach by applying the priors to a Bayesian reliability model of a missile system. Subject terms: priors, Bayesian, missile.
Aswath Damodaran
1999-01-01
Over the last three decades, the capital asset pricing model has occupied a central and often controversial place in most corporate finance analysts’ tool chests. The model requires three inputs to compute expected returns – a riskfree rate, a beta for an asset and an expected risk premium for the market portfolio (over and above the riskfree rate). Betas are estimated, by most practitioners, by regressing returns on an asset against a stock index, with the slope of the regression being the b...
PARAMETER ESTIMATION OF EXPONENTIAL DISTRIBUTION
XU Haiyan; FEI Heliang
2005-01-01
Because of the importance of grouped data, many scholars have devoted themselves to the study of this kind of data. However, few documents have been concerned with the threshold parameter. In this paper, we assume that the threshold parameter is smaller than the first observation point. Then, on the basis of the two-parameter exponential distribution, the maximum likelihood estimates of both parameters are given, the sufficient and necessary conditions for their existence and uniqueness are argued, and the asymptotic properties of the estimates are also presented, from which approximate confidence intervals of the parameters are derived. At the same time, the estimation of the parameters is generalized, and some methods are introduced to obtain explicit expressions of these generalized estimates. Also, a special case where the first failure time of the units is observed is considered.
Parameter estimation in food science.
Dolan, Kirk D; Mishra, Dharmendra K
2013-01-01
Modeling includes two distinct parts, the forward problem and the inverse problem. The forward problem-computing y(t) given known parameters-has received much attention, especially with the explosion of commercial simulation software. What is rarely made clear is that the forward results can be no better than the accuracy of the parameters. Therefore, the inverse problem-estimation of parameters given measured y(t)-is at least as important as the forward problem. However, in the food science literature there has been little attention paid to the accuracy of parameters. The purpose of this article is to summarize the state of the art of parameter estimation in food science, to review some of the common food science models used for parameter estimation (for microbial inactivation and growth, thermal properties, and kinetics), and to suggest a generic method to standardize parameter estimation, thereby making research results more useful. Scaled sensitivity coefficients are introduced and shown to be important in parameter identifiability. Sequential estimation and optimal experimental design are also reviewed as powerful parameter estimation methods that are beginning to be used in the food science literature.
Jasbir Arora
2016-06-01
The indestructible nature of teeth against most environmental abuses makes them useful in disaster victim identification (DVI). The present study was undertaken to examine the reliability of Gustafson’s qualitative method and Kedici’s quantitative method of measuring secondary dentine for age estimation among North Western adult Indians. 196 (M = 85; F = 111) single-rooted teeth were collected from the Department of Oral Health Sciences, PGIMER, Chandigarh. Ground sections were prepared and the amount of secondary dentine formed was scored qualitatively according to Gustafson’s (0–3) scoring system (method 1) and quantitatively following Kedici’s micrometric measurement method (method 2). Out of 196 teeth, 180 samples (M = 80; F = 100) were found to be suitable for measuring secondary dentine following Kedici’s method. The absolute mean error of age was calculated by both methodologies. Results clearly showed that in the pooled data, method 1 gave an error of ±10.4 years whereas method 2 exhibited an error of approximately ±13 years. A statistically significant difference was noted in the absolute mean error of age between the two methods of measuring secondary dentine for age estimation. Further, it was also revealed that teeth extracted for periodontal reasons severely decreased the accuracy of Kedici’s method; however, the disease had no effect when estimating age by Gustafson’s method. No significant gender differences were noted in the absolute mean error of age by either method, which suggests that there is no need to separate data on the basis of gender.
Parameters estimation in quantum optics
D'Ariano, G M; Sacchi, M F; Paris, Matteo G. A.; Sacchi, Massimiliano F.
2000-01-01
We address several estimation problems in quantum optics by means of the maximum-likelihood principle. We consider Gaussian state estimation and the determination of the coupling parameters of quadratic Hamiltonians. Moreover, we analyze different schemes of phase-shift estimation. Finally, the absolute estimation of the quantum efficiency of both linear and avalanche photodetectors is studied. In all the considered applications, the Gaussian bound on statistical errors is attained with a few thousand data.
A new simulation estimator of system reliability
Sheldon M. Ross
1994-01-01
A basic identity is proven and applied to obtain new simulation estimators concerning (a) system reliability, (b) a multi-valued system. We show that the variance of this new estimator is often of the order α² when the usual raw estimator has variance of the order α and α is small. We also indicate how this estimator can be combined with the standard variance reduction techniques of antithetic variables, stratified sampling and importance sampling.
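A minimal sketch of a raw simulation estimator of system reliability combined with antithetic variables, for a hypothetical 2-out-of-3 system (this illustrates the variance-reduction idea only, not the identity-based estimator of the paper):

```python
import random

def system_up(states):
    """Structure function of a 2-out-of-3 system (hypothetical example)."""
    return sum(states) >= 2

def raw_estimate(ps, n, rng):
    """Crude Monte Carlo estimate of system reliability."""
    hits = 0
    for _ in range(n):
        u = [rng.random() for _ in ps]
        hits += system_up([ui < pi for ui, pi in zip(u, ps)])
    return hits / n

def antithetic_estimate(ps, n, rng):
    """Same estimate using antithetic pairs (u, 1 - u) to reduce variance."""
    hits = 0
    for _ in range(n // 2):
        u = [rng.random() for _ in ps]
        hits += system_up([ui < pi for ui, pi in zip(u, ps)])
        hits += system_up([1 - ui < pi for ui, pi in zip(u, ps)])
    return hits / (2 * (n // 2))
```

For identical component reliabilities p, the 2-out-of-3 reliability is 3p²(1-p) + p³, which gives an exact value to check the estimators against.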
Mission Reliability Estimation for Repairable Robot Teams
Stephen B. Stancliff
2008-11-01
Many of the most promising applications for mobile robots require very high reliability. The current generation of mobile robots is, for the most part, highly unreliable. The few mobile robots that currently demonstrate high reliability achieve this reliability at a high financial cost. In order for mobile robots to be more widely used, it will be necessary to find ways to provide high mission reliability at lower cost. Comparing alternative design paradigms in a principled way requires methods for comparing the reliability of different robot and robot team configurations. In this paper, we present the first principled quantitative method for performing mission reliability estimation for mobile robot teams. We also apply this method to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Using conservative estimates of the cost-reliability relationship, our results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares.
Weibull Parameters Estimation Based on Physics of Failure Model
Kostandyan, Erik; Sørensen, John Dalsgaard
2012-01-01
Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... distribution. Methods from structural reliability analysis are used to model the uncertainties and to assess the reliability for fatigue failure. Maximum Likelihood and Least Square estimation techniques are used to estimate fatigue life distribution parameters....
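Weibull parameter estimation by Maximum Likelihood, as mentioned above, reduces to a one-dimensional root-finding problem for the shape parameter. The following is a generic sketch of that standard technique, not the authors' fatigue model:

```python
import math
import random

def weibull_mle(xs, k_lo=0.01, k_hi=50.0, tol=1e-9):
    """Maximum likelihood estimates (shape k, scale lam) for Weibull data.
    The profile likelihood equation in k is solved by bisection; the scale
    then follows in closed form."""
    logs = [math.log(x) for x in xs]
    mean_log = sum(logs) / len(xs)

    def g(k):
        # Profile score equation; increasing in k, so bisection works.
        s = sum(x ** k for x in xs)
        sl = sum((x ** k) * math.log(x) for x in xs)
        return sl / s - 1.0 / k - mean_log

    while k_hi - k_lo > tol:
        mid = 0.5 * (k_lo + k_hi)
        if g(mid) < 0:
            k_lo = mid
        else:
            k_hi = mid
    k = 0.5 * (k_lo + k_hi)
    lam = (sum(x ** k for x in xs) / len(xs)) ** (1.0 / k)
    return k, lam
```

A quick self-check is to fit simulated Weibull lifetimes and verify that the estimates recover the generating parameters.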
The reliability of DSM impact estimates
Vine, E.L. [Lawrence Berkeley Lab., CA (United States); Kushler, M.G. [Michigan Public Service Commission, Lansing, MI (United States)
1995-05-01
Demand-side management (DSM) critics continue to question the reliability of DSM program savings, and therefore, the need for funding such programs. In this paper, the authors examine the issues underlying the discussion of reliability of DSM program savings (e.g., bias and precision) and compare the levels of precision of DSM impact estimates for three utilities. Overall, the precision results from all three companies appear quite similar and, for the most part, demonstrate reasonably good precision levels around DSM savings estimates. They conclude by recommending activities for program managers and evaluators for increasing the understanding of the factors leading to DSM uncertainty and for reducing the level of DSM uncertainty.
Data Handling and Parameter Estimation
Sin, Gürkan; Gernaey, Krist
2016-01-01
literature that are mostly based on the Activated Sludge Model (ASM) framework and their appropriate extensions (Henze et al., 2000). The chapter presents an overview of the most commonly used methods in the estimation of parameters from experimental batch data, namely: (i) data handling and validation, (ii...
Adaptive Response Surface Techniques in Reliability Estimation
Enevoldsen, I.; Faber, M. H.; Sørensen, John Dalsgaard
1993-01-01
Problems in connection with estimation of the reliability of a component modelled by a limit state function including noise or first order discontinuities are considered. A gradient-free adaptive response surface algorithm is developed. The algorithm applies second order polynomial surfaces deter...
Parameter estimation and inverse problems
Aster, Richard C; Thurber, Clifford H
2005-01-01
Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web for facilitating use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...
Parameter Estimation Using VLA Data
Venter, Willem C.
The main objective of this dissertation is to extract parameters from multiple-wavelength images, on a pixel-by-pixel basis, when the images are corrupted with noise and a point spread function. The data used are from the field of radio astronomy. The Very Large Array (VLA) at Socorro in New Mexico was used to observe the planetary nebula NGC 7027 at three different wavelengths: 2 cm, 6 cm and 20 cm. A temperature model, describing the temperature variation in the nebula as a function of optical depth, is postulated. Mathematical expressions for the brightness distribution (flux density) of the nebula, at the three observed wavelengths, are obtained. Using these three equations and the three data values available, one from the observed flux density map at each wavelength, it is possible to solve for two temperature parameters and one optical depth parameter at each pixel location. Because the number of unknowns equals the number of equations available, estimation theory cannot be used to smooth any noise present in the data. It was found that a direct solution of the three highly nonlinear flux density equations is very sensitive to noise in the data. Results obtained from solving for the three unknown parameters directly, as discussed above, were not physically realizable. This was partly due to the effect of incomplete sampling at the time when the data were gathered and to noise in the system. The application of rigorous digital parameter estimation techniques results in estimated parameters that are also not physically realizable. The estimated values for the temperature parameters are, for example, either too high or negative, which is not physically possible. Simulation studies have shown that a "double smoothing" technique improves the results by a large margin. This technique consists of two parts: in the first part the original observed data are smoothed using a running window, and in the second part a similar smoothing of the estimated parameters is performed.
Sequential Bayesian technique: An alternative approach for software reliability estimation
S Chatterjee; S S Alam; R B Misra
2009-04-01
This paper proposes a sequential Bayesian approach, similar to the Kalman filter, for estimating reliability growth or decay of software. The main advantage of the proposed method is that it shows the variation of the parameter over time, as new failure data become available. The usefulness of the method is demonstrated with some real-life data.
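The idea of a recursive estimate that tracks parameter variation over time as new failure data arrive can be illustrated with a conjugate Gamma-Poisson filter with a forgetting factor. This is a hypothetical stand-in for the paper's Kalman-filter-like scheme, not its actual equations:

```python
def recursive_gamma_poisson(counts, a0=1.0, b0=1.0, forget=0.95):
    """Track a (possibly drifting) failure intensity from per-interval
    failure counts.  Before each update the Gamma(a, b) posterior is
    flattened by the forgetting factor, so old evidence is gradually
    discounted; each posterior mean a / b estimates the current rate."""
    a, b = a0, b0
    means = []
    for c in counts:
        a, b = forget * a, forget * b   # discount old evidence
        a, b = a + c, b + 1.0           # conjugate Poisson update
        means.append(a / b)
    return means
```

Feeding in counts whose underlying rate drops partway through shows the estimate decaying toward the new rate, which is the "variation of the parameter over time" behavior described in the abstract.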
Load Estimation from Modal Parameters
Aenlle, Manuel López; Brincker, Rune; Fernández, Pelayo Fernández;
2007-01-01
In Natural Input Modal Analysis the modal parameters are estimated just from the responses while the loading is not recorded. However, engineers are sometimes interested in knowing some features of the loading acting on a structure. In this paper, a procedure to determine the loading from a FRF...... matrix assembled from modal parameters and the experimental responses recorded using standard sensors, is presented. The method implies the inversion of the FRF which, in general, is not a full-rank matrix due to the truncation of the modal space. Furthermore, some recommendations are included to improve...
Multi-Parameter Estimation for Orthorhombic Media
Masmoudi, Nabil
2015-08-19
Building reliable anisotropy models is crucial in seismic modeling, imaging and full waveform inversion. However, estimating anisotropy parameters is often hampered by the trade-off between inhomogeneity and anisotropy. For instance, one way to estimate the anisotropy parameters is to relate them analytically to traveltimes, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2 and a parameter Δγ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. This approach has two main advantages: on the one hand, it provides a computationally efficient tool to solve the orthorhombic eikonal equation; on the other hand, it provides a mechanism to scan for the best-fitting anisotropy parameters without the need for repetitive modeling of traveltimes, because the coefficients of the traveltime expansion are independent of the perturbed parameters. Furthermore, the coefficients of the traveltime expansion provide insights into the sensitivity of the traveltime with respect to the perturbed parameters. We show the accuracy of the traveltime approximations as well as an approach for multi-parameter scanning in orthorhombic media.
Mode choice model parameters estimation
Strnad, Irena
2010-01-01
The present work focuses on parameter estimation for two mode choice models, multinomial logit and the EVA 2 model, where four different modes and five different trip purposes are taken into account. A mode choice model describes the behavioral aspect of mode choice making and enables its application in a traffic model. A mode choice model includes the trip factors that affect the choice of each mode and their relative importance to the choice made. When trip factor values are known, it...
Inflation and cosmological parameter estimation
Hamann, J.
2007-05-15
In this work, we focus on two aspects of cosmological data analysis: inference of parameter values and the search for new effects in the inflationary sector. Constraints on cosmological parameters are commonly derived under the assumption of a minimal model. We point out that this procedure systematically underestimates errors and possibly biases estimates, due to overly restrictive assumptions. In a more conservative approach, we analyse cosmological data using a more general eleven-parameter model. We find that regions of the parameter space that were previously thought ruled out are still compatible with the data; the bounds on individual parameters are relaxed by up to a factor of two, compared to the results for the minimal six-parameter model. Moreover, we analyse a class of inflation models, in which the slow roll conditions are briefly violated, due to a step in the potential. We show that the presence of a step generically leads to an oscillating spectrum and perform a fit to CMB and galaxy clustering data. We do not find conclusive evidence for a step in the potential and derive strong bounds on quantities that parameterise the step. (orig.)
Multiple Parameter Estimation With Quantized Channel Output
Mezghani, Amine; Nossek, Josef A
2010-01-01
We present a general problem formulation for optimal parameter estimation based on quantized observations, with application to antenna array communication and processing (channel estimation, time-of-arrival (TOA) and direction-of-arrival (DOA) estimation). The work is of interest in the case when low-resolution A/D converters (ADCs) have to be used to enable a higher sampling rate and to simplify the hardware. An Expectation-Maximization (EM) based algorithm is proposed for solving this problem in a general setting. In addition, we derive the Cramer-Rao Bound (CRB) and discuss the effects of quantization and the optimal choice of the ADC characteristic. Numerical and analytical analysis reveals that reliable estimation may still be possible even when the quantization is very coarse.
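For the simplest case of 1-bit quantization of a Gaussian with known noise level, the ML estimate from quantized observations even has a closed form, which conveys why reliable estimation can survive coarse quantization. A minimal sketch of that special case (not the paper's EM algorithm):

```python
import random
import statistics

def mu_from_one_bit(bits, threshold, sigma):
    """Closed-form ML estimate of a Gaussian mean from 1-bit observations
    y = 1[x > threshold], assuming the noise level sigma is known."""
    p = sum(bits) / len(bits)
    p = min(max(p, 1e-6), 1 - 1e-6)   # guard against all-0 / all-1 samples
    # P(x > t) = 1 - Phi((t - mu) / sigma)  =>  mu = t - sigma * Phi^-1(1 - p)
    return threshold - sigma * statistics.NormalDist().inv_cdf(1.0 - p)
```

Even though each observation carries only one bit, the fraction of threshold crossings pins down the mean with accuracy improving as 1/sqrt(n).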
Reliability estimates for flawed mortar projectile bodies
Cordes, J.A. [US Army ARDEC, AMSRD-AAR-MEF-E, Analysis and Evaluation Division, Fuze and Precision Armaments Technology Directorate, US Army Armament Research Development and Engineering Center, Picatinny Arsenal, NJ 07806-5000 (United States)], E-mail: jennifer.cordes@us.army.mil; Thomas, J.; Wong, R.S.; Carlucci, D. [US Army ARDEC, AMSRD-AAR-MEF-E, Analysis and Evaluation Division, Fuze and Precision Armaments Technology Directorate, US Army Armament Research Development and Engineering Center, Picatinny Arsenal, NJ 07806-5000 (United States)
2009-12-15
The Army routinely screens mortar projectiles for defects in safety-critical parts. In 2003, several lots of mortar projectiles had a relatively high defect rate, 0.24%. Before releasing the projectiles, the Army reevaluated the chance of a safety-critical failure. Limit state functions and Monte Carlo simulations were used to estimate reliability. Measured distributions of wall thickness, defect rate, material strength, and applied loads were used with calculated stresses to estimate the probability of failure. The results predicted less than one failure in one million firings. As of 2008, the mortar projectiles have been used without any safety-critical incident.
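The limit-state/Monte Carlo procedure described above can be sketched generically: draw strength and load from their distributions and count realizations where the limit state g = R - S goes negative. The distribution parameters below are hypothetical illustrations, not the Army's measured data:

```python
import random

def prob_failure(n, rng, mu_r=500.0, sd_r=25.0, mu_s=350.0, sd_s=40.0):
    """Monte Carlo estimate of P(g < 0) for the limit state g = R - S,
    with normal strength R and load S (hypothetical numbers)."""
    fails = sum(rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s) < 0
                for _ in range(n))
    return fails / n
```

For normal R and S the answer is also available in closed form (g is normal with mean mu_r - mu_s and variance sd_r² + sd_s²), which is a useful cross-check on the simulation.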
Estimation of Physical Parameters in Linear and Nonlinear Dynamic Systems
Knudsen, Morten
and estimation of physical parameters in particular. 2. To apply the new methods for modelling of specific objects, such as loudspeakers, AC and DC motors, wind turbines and heat exchangers. A reliable quality measure of an obtained parameter estimate is a prerequisite for any reasonable use of the result...
SA BASED SOFTWARE DEPLOYMENT RELIABILITY ESTIMATION CONSIDERING COMPONENT DEPENDENCE
Su Xihong; Liu Hongwei; Wu Zhibo; Yang Xiaozong; Zuo Decheng
2011-01-01
Reliability is one of the most critical properties of a software system. System deployment architecture is the allocation of system software components to host nodes. Software Architecture (SA)-based software deployment models help to analyze the reliability of different deployments. Though many approaches for architecture-based reliability estimation exist, little work has incorporated the influence of system deployment and hardware resources into reliability estimation. There are many factors influencing system deployment. By translating the multi-dimensional factors into a degree matrix of component dependence, we provide the definition of component dependence and propose a method of calculating the system reliability of deployments. Additionally, the parameters that influence the optimal deployment may change during system execution. The existing software deployment architecture may be ill-suited for the given environment, and the system needs to be redeployed to improve reliability. An approximate algorithm, A*_D, to increase system reliability is presented. When the number of components and host nodes is relatively large, experimental results show that this algorithm can obtain better deployments than stochastic and greedy algorithms.
Trends in Control Area of PLC Reliability and Safety Parameters
Juraj Zdansky
2008-01-01
Extension of the PLC application possibilities is closely related to increasing reliability and safety parameters. If the reliability and safety parameters are suitable, the PLC can be implemented in specific applications such as safety-related process control. The goal of this article is to show how producers approach increasing a PLC's reliability and safety parameters. The second goal is to analyze these parameters for the range of products currently available and to describe how the reliability and safety parameters can be affected.
Applied parameter estimation for chemical engineers
Englezos, Peter
2000-01-01
Formulation of the parameter estimation problem; computation of parameters in linear models (linear regression); Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynamic models.
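The Gauss-Newton method for algebraic models, listed among the chapters above, can be illustrated for a simple two-parameter model y = a(1 - exp(-b t)). This is a generic sketch of the method, not an excerpt from the book:

```python
import math

def gauss_newton(ts, ys, a, b, iters=50):
    """Gauss-Newton iterations for the algebraic model y = a*(1 - exp(-b*t)).
    Each step solves the 2x2 normal equations J^T J * delta = J^T r."""
    for _ in range(iters):
        r = [y - a * (1 - math.exp(-b * t)) for t, y in zip(ts, ys)]
        ja = [1 - math.exp(-b * t) for t in ts]          # d(model)/da
        jb = [a * t * math.exp(-b * t) for t in ts]      # d(model)/db
        s_aa = sum(x * x for x in ja)
        s_ab = sum(x * y for x, y in zip(ja, jb))
        s_bb = sum(x * x for x in jb)
        g_a = sum(x * y for x, y in zip(ja, r))
        g_b = sum(x * y for x, y in zip(jb, r))
        det = s_aa * s_bb - s_ab * s_ab
        da = (s_bb * g_a - s_ab * g_b) / det
        db = (s_aa * g_b - s_ab * g_a) / det
        a, b = a + da, b + db
    return a, b
```

On noise-free data generated from known parameters, the iteration recovers them to high accuracy from a reasonable starting guess.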
Estimating a municipal water supply reliability
O.G. Okeola
2015-12-01
The availability and adequacy of water in a river basin determine the design of water resources projects such as water supply. There is a further need to regularly appraise the availability of such a resource for a municipality at a distant future date, to help in articulating a contingency plan to handle its vulnerability. This paper attempts to empirically determine the reliability of the water resource for a municipal water supply. An approach was first developed to estimate the water demand of a municipality that lacks socio-econometric data, using a purpose-specific model. A hydrological assessment of the river Oyun basin was carried out using a Markov model and sequent peak analysis to determine the extent of reliability for the future demand. The two models were then applied to the Offa municipality in Kwara State, Nigeria. The findings revealed the reliability and adequacy of the resource up to the year 2020. The need to start exploring a well-coordinated conjunctive use of resources is recommended. The study can serve as an organized baseline for future work that will consider the physiographic characteristics of the basin and climatic dynamics. The findings can be a vital input into the demand management process for long-term sustainable water supply of the town and, by extension, of urban townships with similar characteristics.
DISTRIBUTED MONITORING SYSTEM RELIABILITY ESTIMATION WITH CONSIDERATION OF STATISTICAL UNCERTAINTY
Yi Pengxing; Yang Shuzi; Du Runsheng; Wu Bo; Liu Shiyuan
2005-01-01
Taking into account the whole system structure and the component reliability estimation uncertainty, a system reliability estimation method based on probability and statistical theory for distributed monitoring systems is presented. The variance and confidence intervals of the system reliability estimation are obtained by expressing system reliability as a linear sum of products of higher order moments of component reliability estimates when the number of component or system survivals obeys binomial distribution. The eigenfunction of binomial distribution is used to determine the moments of component reliability estimates, and a symbolic matrix which can facilitate the search of explicit system reliability estimates is proposed. Furthermore, a case of application is used to illustrate the procedure, and with the help of this example, various issues such as the applicability of this estimation model, and measures to improve system reliability of monitoring systems are discussed.
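The idea of propagating component-level binomial estimates and their moments into a system reliability estimate with a variance can be sketched for the simplest case of a series system. This plug-in illustration is far simpler than the paper's eigenfunction-based derivation:

```python
def series_reliability(successes, trials):
    """Plug-in estimate of a series-system reliability and its variance,
    from independent binomial test data per component.  Uses the product
    rule Var(prod X_i) = prod E[X_i^2] - (prod E[X_i])^2 for independent
    component estimates."""
    r_hat, m2 = 1.0, 1.0
    for s, n in zip(successes, trials):
        p = s / n
        r_hat *= p
        m2 *= p * p + p * (1 - p) / n   # plug-in second moment of p-hat
    return r_hat, m2 - r_hat * r_hat
```

With the variance in hand, an approximate confidence interval for the system reliability follows from a normal approximation, mirroring the statistical-uncertainty theme of the abstract.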
Application of chaotic theory to parameter estimation
无
2002-01-01
High-precision parameter estimation is very important for control system design and compensation. This paper utilizes the properties of chaotic systems for parameter estimation. Theoretical analysis and experimental results indicate that this method has extremely high sensitivity and resolving power. The most important contribution of this paper is that it departs from the traditional engineering viewpoint and realizes parameter estimation based on unstable chaotic systems.
Lower bounds to the reliabilities of factor score estimators
Hessen, D.J.
2017-01-01
Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone’s factor score estimators, Bartlett’s factor score
Parameter Estimation in Continuous Time Domain
Gabriela M. ATANASIU
2016-12-01
This paper presents the application of a continuous-time parameter estimation method for estimating the structural parameters of a real bridge structure. To illustrate this method, two case studies of a bridge pile located in a highly seismic risk area are considered, for which the structural parameters for mass, damping and stiffness are estimated. The estimation process is followed by validation of the analytical results and their comparison with the measurement data. Further benefits and applications of the continuous-time parameter estimation method in civil engineering are presented in the final part of this paper.
PARAMETER ESTIMATION OF ENGINEERING TURBULENCE MODEL
钱炜祺; 蔡金狮
2001-01-01
A parameter estimation algorithm is introduced and used to determine the parameters in the standard k-ε two equation turbulence model (SKE). It can be found from the estimation results that although the parameter estimation method is an effective method to determine model parameters, it is difficult to obtain a set of parameters for SKE to suit all kinds of separated flow and a modification of the turbulence model structure should be considered. So, a new nonlinear k-ε two-equation model (NNKE) is put forward in this paper and the corresponding parameter estimation technique is applied to determine the model parameters. By implementing the NNKE to solve some engineering turbulent flows, it is shown that NNKE is more accurate and versatile than SKE. Thus, the success of NNKE implies that the parameter estimation technique may have a bright prospect in engineering turbulence model research.
JUPITER PROJECT - JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY
The JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) project builds on the technology of two widely used codes for sensitivity analysis, data assessment, calibration, and uncertainty analysis of environmental models: PEST and UCODE.
Estimation of the Reliability of Distributed Applications
Marian Pompiliu CRISTESCU; Laurentiu CIOVICA
2010-01-01
In this paper the reliability is presented as an important feature for use in mission-critical distributed applications. Certain aspects of distributed systems make the requested level of reliability more difficult. An obvious benefit of distributed systems is that they serve the global business and social environment in which we live and work. Another benefit is that they can improve the quality of services, in terms of reliability, availability and performance, for the complex systems. The ...
NEW DOCTORAL DEGREE Parameter estimation problem in the Weibull model
Marković, Darija
2009-01-01
In this dissertation we consider the problem of the existence of best parameters in the Weibull model, one of the most widely used statistical models in reliability theory and life data theory. Particular attention is given to a 3-parameter Weibull model. We have listed some of the many applications of this model. We have described some of the classical methods for estimating parameters of the Weibull model, two graphical methods (Weibull probability plot and hazard plot), and two analyt...
Parameter Estimation and Experimental Design in Groundwater Modeling
SUN Ne-zheng
2004-01-01
This paper reviews the latest developments on parameter estimation and experimental design in the field of groundwater modeling. Special considerations are given when the structure of the identified parameter is complex and unknown. A new methodology for constructing useful groundwater models is described, which is based on the quantitative relationships among the complexity of model structure, the identifiability of parameter, the sufficiency of data, and the reliability of model application.
A Note on Structural Equation Modeling Estimates of Reliability
Yang, Yanyun; Green, Samuel B.
2010-01-01
Reliability can be estimated using structural equation modeling (SEM). Two potential problems with this approach are that estimates may be unstable with small sample sizes and biased with misspecified models. A Monte Carlo study was conducted to investigate the quality of SEM estimates of reliability by themselves and relative to coefficient…
Earth Rotation Parameter Estimation by GPS Observations
YAO Yibin
2006-01-01
The methods of Earth rotation parameter (ERP) estimation based on the IGS SINEX files of GPS solutions are discussed in detail. There are two different ways to estimate ERP: one is the parameter transformation method, and the other is the direct adjustment method with restrictive conditions. By comparing the results estimated with an independently developed program against IERS results, residual systematic errors can be found in the ERP estimated from GPS observations.
Hardware and software reliability estimation using simulations
Swern, Frederic L.
1994-01-01
The simulation technique is used to explore the validation of both hardware and software. It was concluded that simulation is a viable means for validating both hardware and software and associating a reliability number with each. This is useful in determining the overall probability of system failure of an embedded processor unit, and improving both the code and the hardware where necessary to meet reliability requirements. The methodologies were proved using some simple programs, and simple hardware models.
Estimation of physical parameters in induction motors
Børsting, H.; Knudsen, Morten; Rasmussen, Henrik
1994-01-01
Parameter estimation in induction motors is a field of great interest, because accurate models are needed for robust dynamic control of induction motors...
On parameter estimation in deformable models
Fisker, Rune; Carstensen, Jens Michael
1998-01-01
Deformable templates have been intensively studied in image analysis through the last decade, but despite their significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian approach... The method is based on a modified version of the EM algorithm. Experimental results for a deformable template used for textile inspection are presented...
Simulator for Software Project Reliability Estimation
Sanjana,
2011-07-01
Several models exist for software development processes, each describing approaches to a variety of tasks or activities that take place during the process. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management appears to be lacking. IEEE defines reliability as "the ability of a system to perform its required function under stated conditions for a specified period of time." To most software project managers, reliability is equated to correctness, that is, the number of bugs found and fixed. The purpose is to develop a simulator for estimating the reliability of the software project using the PERT approach, keeping in view the criticality index of each task.
MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.
Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne
2014-01-01
When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it is important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it is important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. A physician leader who is interested in catalyzing performance improvement...
Parameter Estimation, Model Reduction and Quantum Filtering
Chase, Bradley A
2009-01-01
This dissertation explores the topics of parameter estimation and model reduction in the context of quantum filtering. Chapters 2 and 3 provide a review of classical and quantum probability theory, stochastic calculus and filtering. Chapter 4 studies the problem of quantum parameter estimation and introduces the quantum particle filter as a practical computational method for parameter estimation via continuous measurement. Chapter 5 applies these techniques in magnetometry and studies the estimator's uncertainty scalings in a double-pass atomic magnetometer. Chapter 6 presents an efficient feedback controller for continuous-time quantum error correction. Chapter 7 presents an exact model of symmetric processes of collective qubit systems.
Reliability estimation in a multilevel confirmatory factor analysis framework.
Geldhof, G John; Preacher, Kristopher J; Zyphur, Michael J
2014-03-01
Scales with varying degrees of measurement reliability are often used in the context of multistage sampling, where variance exists at multiple levels of analysis (e.g., individual and group). Because methodological guidance on assessing and reporting reliability at multiple levels of analysis is currently lacking, we discuss the importance of examining level-specific reliability. We present a simulation study and an applied example showing different methods for estimating multilevel reliability using multilevel confirmatory factor analysis and provide supporting Mplus program code. We conclude that (a) single-level estimates will not reflect a scale's actual reliability unless reliability is identical at each level of analysis, (b) 2-level alpha and composite reliability (omega) perform relatively well in most settings, (c) estimates of maximal reliability (H) were more biased when estimated using multilevel data than either alpha or omega, and (d) small cluster size can lead to overestimates of reliability at the between level of analysis. We also show that Monte Carlo confidence intervals and Bayesian credible intervals closely reflect the sampling distribution of reliability estimates under most conditions. We discuss the estimation of credible intervals using Mplus and provide R code for computing Monte Carlo confidence intervals.
Estimating parameters for systems with complicated dynamics
Goodwin, J; Goodwin, Justin; Brown, Reggie
1998-01-01
Changes in the parameters of a physical device can eventually lead to catastrophic failure. In this paper we present a method for estimating the parameters of a device from time series data. We also examine the robustness of this method to noise in the data. For our examples, the parameter estimates are good to about two decimal places even at a 0 dB signal-to-noise ratio.
Cosmological parameter estimation using Particle Swarm Optimization
Prasad, J.; Souradeep, T.
2014-03-01
Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models, which demand greater number of cosmological parameters than the standard model of cosmology uses, and make the problem of parameter estimation challenging. It is a common practice to employ Bayesian formalism for parameter estimation for which, in general, likelihood surface is probed. For the standard cosmological model with six parameters, likelihood surface is quite smooth and does not have local maxima, and sampling based methods like Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we have demonstrated application of another method inspired from artificial intelligence, called Particle Swarm Optimization (PSO) for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
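The swarm update at the heart of PSO can be sketched in a few lines. The following is a minimal, generic implementation run on a toy two-parameter quadratic surface; the bounds, inertia/acceleration coefficients, and test function are illustrative assumptions, not the WMAP CMB likelihood used in the paper.

```python
import random

def pso_minimize(f, bounds, n_particles=20, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box via basic Particle Swarm Optimization."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "likelihood surface": a chi-square bowl with optimum at (0.3, 0.7)
chi2 = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
best, val = pso_minimize(chi2, [(0.0, 1.0), (0.0, 1.0)])
```

Because PSO only needs function evaluations, not gradients, the same loop applies unchanged when `chi2` is replaced by an expensive likelihood evaluation.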
Reliabilities of genomic estimated breeding values in Danish Jersey
Thomasen, Jørn Rind; Guldbrandtsen, Bernt; Su, Guosheng;
2012-01-01
In order to optimize the use of genomic selection in breeding plans, it is essential to have reliable estimates of the genomic breeding values. This study investigated reliabilities of direct genomic values (DGVs) in the Jersey population estimated by three different methods. The validation methods... of DGV. The data set consisted of 1003 Danish Jersey bulls with conventional estimated breeding values (EBVs) for 14 different traits included in the Nordic selection index. The bulls were genotyped for single-nucleotide polymorphism (SNP) markers using the Illumina 54K chip. A Bayesian method was used... index pre-selection only. Averaged across traits, the estimates of reliability of DGVs ranged from 0.20, for validation on the most recent 3 years of bulls, up to 0.42 for expected reliabilities. Reliabilities from the cross-validation were on average 0.24. For the individual traits, the reliability...
Reliability Estimation for Double Containment Piping
L. Cadwallader; T. Pinna
2012-08-01
Double walled or double containment piping is considered for use in the ITER international project and other next-generation fusion device designs to provide an extra barrier for tritium gas and other radioactive materials. The extra barrier improves confinement of these materials and enhances safety of the facility. This paper describes some of the design challenges in designing double containment piping systems. There is also a brief review of a few operating experiences of double walled piping used with hazardous chemicals in different industries. This paper recommends approaches for the reliability analyst to use to quantify leakage from a double containment piping system in conceptual and more advanced designs. The paper also cites quantitative data that can be used to support such reliability analyses.
Application of spreadsheet to estimate infiltration parameters
Mohammad Zakwan
2016-09-01
Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for estimating effective rainfall and groundwater recharge and for designing irrigation systems. Numerous infiltration models are in use for estimation of infiltration rates. The conventional graphical approach for estimating infiltration parameters often fails to estimate them precisely. The generalised reduced gradient (GRG) solver is reported to be a powerful tool for estimating parameters of nonlinear equations and has, therefore, been implemented to estimate the infiltration parameters in the present paper. Field data of infiltration rates available in the literature for sandy loam soils of Umuahia, Nigeria were used to evaluate the performance of the GRG solver. A comparative study of the graphical method and the GRG solver shows that the performance of the GRG solver is better than that of the conventional graphical method for estimation of infiltration rates. Further, the performance of the Kostiakov model has been found to be better than that of the Horton and Philip models in most of the cases, based on both approaches to parameter estimation.
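The GRG solver is spreadsheet-specific, but the underlying fit is easy to reproduce. The sketch below fits the Kostiakov infiltration-rate model f(t) = a·t^(−b) by log-linearized least squares; the data points are synthetic, generated from assumed values of a and b for illustration, not the Umuahia field measurements.

```python
import math

def fit_kostiakov(t, f):
    """Fit the Kostiakov infiltration-rate model f(t) = a * t**(-b)
    by ordinary least squares on log f = log a - b * log t."""
    n = len(t)
    x = [math.log(ti) for ti in t]
    y = [math.log(fi) for fi in f]
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    a = math.exp(ybar - slope * xbar)
    b = -slope
    return a, b

# Illustrative data generated from a = 12, b = 0.4 (not the paper's field data)
t = [5, 10, 20, 40, 60, 90, 120]       # elapsed time, minutes
f = [12 * ti ** -0.4 for ti in t]      # infiltration rate
a, b = fit_kostiakov(t, f)
```

A general-purpose nonlinear optimizer (like GRG) fits the untransformed model directly; the log-linear form shown here is the classical shortcut that works whenever all rates are positive.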
On Carleman estimates with two large parameters
Le Rousseau, Jerome, E-mail: jlr@univ-orleans.fr [Jerome Le Rousseau. Universite d' Orleans, Laboratoire Mathematiques et Applications, Physique Mathematique d' Orleans, CNRS UMR 6628, Federation Denis-Poisson, FR CNRS 2964, B.P. 6759, 45067 Orleans cedex 2 (France)
2011-04-01
We provide a general framework for the analysis and the derivation of Carleman estimates with two large parameters. For an appropriate form of weight functions strong pseudo-convexity conditions are shown to be necessary and sufficient.
Estimation of Modal Parameters and their Uncertainties
Andersen, P.; Brincker, Rune
1999-01-01
In this paper it is shown how to estimate the modal parameters as well as their uncertainties using the prediction error method of a dynamic system on the basis of output measurements only. The estimation scheme is assessed by means of a simulation study. As a part of the introduction, an example...
MODFLOW-style parameters in underdetermined parameter estimation
D'Oria, Marco D.; Fienen, Michael N.
2012-01-01
In this article, we discuss the use of MODFLOW-Style parameters in the numerical codes MODFLOW_2005 and MODFLOW_2005-Adjoint for the definition of variables in the Layer Property Flow package. Parameters are a useful tool to represent aquifer properties in both codes and are the only option available in the adjoint version. Moreover, for overdetermined parameter estimation problems, the parameter approach for model input can make data input easier. We found that if each estimable parameter is defined by one parameter, the codes require a large computational effort and substantial gains in efficiency are achieved by removing logical comparison of character strings that represent the names and types of the parameters. An alternative formulation already available in the current implementation of the code can also alleviate the efficiency degradation due to character comparisons in the special case of distributed parameters defined through multiplication matrices. The authors also hope that lessons learned in analyzing the performance of the MODFLOW family codes will be enlightening to developers of other Fortran implementations of numerical codes.
PARAMETER ESTIMATION IN BREAD BAKING MODEL
Hadiyanto Hadiyanto; AJB van Boxtel
2012-01-01
Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure, i.e. first the heat and mass transfer related parameters, then the parameters related to product transformations, and finally pro...
Parameter Estimation of Partial Differential Equation Models
Xun, Xiaolei
2013-09-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
Parameter estimation methods for chaotic intercellular networks.
Mariño, Inés P; Ullner, Ekkehard; Zaikin, Alexey
2013-01-01
We have investigated simulation-based techniques for parameter estimation in chaotic intercellular networks. The proposed methodology combines a synchronization-based framework for parameter estimation in coupled chaotic systems with some state-of-the-art computational inference methods borrowed from the field of computational statistics. The first method is a stochastic optimization algorithm, known as the accelerated random search method, and the other two techniques are based on approximate Bayesian computation. The latter is a general methodology for non-parametric inference that can be applied to practically any system of interest. The first method based on approximate Bayesian computation is a Markov Chain Monte Carlo scheme that generates a series of random parameter realizations for which a low synchronization error is guaranteed. We show that accurate parameter estimates can be obtained by averaging over these realizations. The second ABC-based technique is a Sequential Monte Carlo scheme. The algorithm generates a sequence of "populations", i.e., sets of randomly generated parameter values, where the members of a certain population attain a synchronization error that is smaller than the error attained by members of the previous population. Again, we show that accurate estimates can be obtained by averaging over the parameter values in the last population of the sequence. We have analysed how effective these methods are from a computational perspective. For the numerical simulations we have considered a network that consists of two modified repressilators with identical parameters, coupled by the fast diffusion of the autoinducer across the cell membranes.
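The accept/reject core common to all ABC schemes can be sketched as follows. This is a minimal rejection sampler on a toy Gaussian-mean problem, not the synchronization-error distance or the repressilator network of the paper; the prior, tolerance, and summary statistic are illustrative assumptions.

```python
import random
import statistics

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_accept, rng):
    """Minimal ABC rejection sampler: keep parameter draws whose
    simulated (summary) data lie within eps of the observed value."""
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) < eps:
            accepted.append(theta)
    return accepted

rng = random.Random(1)
true_mu = 2.0
data = [rng.gauss(true_mu, 1.0) for _ in range(100)]
obs_mean = statistics.fmean(data)          # summary statistic of the data

post = abc_rejection(
    observed=obs_mean,
    simulate=lambda mu, r: statistics.fmean(r.gauss(mu, 1.0) for _ in range(100)),
    prior_sample=lambda r: r.uniform(-5.0, 5.0),   # flat prior (assumed)
    distance=lambda sim, obs: abs(sim - obs),
    eps=0.1,                                       # tolerance (assumed)
    n_accept=200,
    rng=rng,
)
estimate = statistics.fmean(post)          # posterior-mean point estimate
```

In the paper's setting, `simulate` would run the driven (response) system and `distance` would return the synchronization error, but the accept/reject loop is unchanged.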
Estimation on the Reliability of Farm Vehicle Based on Artificial Neural Network
WANG Jinwu
2008-01-01
As a peculiar product in China today, farm vehicles play an important role in the economic construction and development of the countryside, but their work reliability remains low. In this paper, truncated tracking tests were used to address the low reliability of farm vehicles. Relevant reliability data were obtained by tracking a certain model of vehicle and conducting reliability experiments. Data analysis revealed that the weakest part of the vehicle system was the engine assembly. The theory of artificial neural networks was employed to estimate a parameter of the reliability model based on a self-adaptive linear neural network, and the reliability function derived from the estimation can provide important theoretical references for the reliability reassignment, manufacture, and management of farm transport vehicles.
A Latent Class Approach to Estimating Test-Score Reliability
van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas
2011-01-01
This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…
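For reference, the two classical single-administration coefficients named above can be computed directly from the item covariance matrix. A minimal sketch with a small synthetic data set (the item scores are invented for illustration):

```python
import math

def cov(x, y):
    """Sample covariance (n - 1 denominator)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = len(items)
    var_total = sum(cov(items[i], items[j]) for i in range(k) for j in range(k))
    var_items = sum(cov(col, col) for col in items)
    return k / (k - 1) * (1 - var_items / var_total)

def guttman_lambda2(items):
    """lambda-2 = (sum of off-diagonal covariances
                   + sqrt(k/(k-1) * sum of squared off-diagonal covariances))
                  / variance of total score."""
    k = len(items)
    var_total = sum(cov(items[i], items[j]) for i in range(k) for j in range(k))
    off = [cov(items[i], items[j]) for i in range(k) for j in range(k) if i != j]
    return (sum(off) + math.sqrt(k / (k - 1) * sum(c * c for c in off))) / var_total

# Invented item scores: 4 items (rows) x 6 respondents (columns)
items = [
    [3, 4, 2, 5, 4, 3],
    [2, 4, 2, 5, 3, 3],
    [3, 5, 1, 5, 4, 2],
    [3, 4, 2, 4, 4, 3],
]
alpha = cronbach_alpha(items)
lam2 = guttman_lambda2(items)
```

Guttman's lambda-2 is never smaller than alpha for the same data, which the tiny example above also exhibits; the latent class approach of the paper replaces these covariance-based formulas with reliability computed under an unrestricted latent class model.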
IRT-Estimated Reliability for Tests Containing Mixed Item Formats
Shu, Lianghua; Schwarz, Richard D.
2014-01-01
As a global measure of precision, item response theory (IRT) estimated reliability is derived for four coefficients (Cronbach's α, Feldt-Raju, stratified α, and marginal reliability). Models with different underlying assumptions concerning test-part similarity are discussed. A detailed computational example is presented for the targeted…
Statistics of Parameter Estimates: A Concrete Example
Aguilar, Oscar
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise levels, models, or prior knowledge. But what can we say about the validity of such estimates, and the influence of these assumptions? This paper is concerned with methods to address these questions, and for didactic purposes it is written in the context of a concrete nonlinear parameter estimation problem. We will use the results of a physical experiment conducted by Allmaras et al. at Texas A&M University [M. Allmaras et al., SIAM Rev., 55 (2013), pp. 149-167] to illustrate the importance of validation procedures for statistical parameter estimation. We describe statistical methods and data analysis tools to check the choices of likelihood and prior distributions, and provide examples of how to compare Bayesian results with those obtained by non-Bayesian methods based on different types of assumptions. We explain how different statistical methods can be used in complementary ways to improve the understanding of parameter estimates and their uncertainties.
Uncertainty Analysis in the Noise Parameters Estimation
Pawlik P.
2012-07-01
The new approach to uncertainty estimation in modelling acoustic hazards by means of interval arithmetic is presented in the paper. In the case of noise parameter estimation, the selection of parameters specifying acoustic wave propagation in an open space, as well as parameters that are required in the form of average values, often constitutes a difficult problem. In such cases it is necessary to determine the variance and then, strictly related to it, the uncertainty of the model parameters. The interval arithmetic formalism makes it possible to estimate the input data uncertainties without determining their probability distributions, which other methods of uncertainty assessment require. A further problem in acoustic hazard estimation is the lack of exact knowledge of the input parameters. In connection with the above, the modelling uncertainty was analysed as a function of the inaccuracy of the model parameters. To achieve this aim, the interval arithmetic formalism, which represents a value and its uncertainty as an interval, was applied. The proposed approach is illustrated by the example of applying the Dutch RMR SRM method, recommended by European Union Directive 2002/49/EC, to railway noise modelling.
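The interval formalism itself is compact: each uncertain input is a closed interval and each operation returns the tightest enclosing interval. A minimal sketch, with assumed illustrative noise levels rather than the RMR/SRM model itself:

```python
class Interval:
    """Closed interval [lo, hi] with the arithmetic needed to propagate
    input uncertainty through a model without assuming a distribution."""

    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Smallest possible minus largest possible, and vice versa
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def width(self):
        return self.hi - self.lo

# Toy propagation: received level = source level - attenuation,
# with both terms uncertain (values assumed for illustration).
source = Interval(95.0, 97.0)        # source sound power level, dB
attenuation = Interval(20.0, 23.0)   # propagation losses, dB
received = source - attenuation      # [72.0, 77.0] dB
```

The output interval's width (5 dB here) is the model uncertainty induced by the input uncertainty, obtained with no distributional assumptions at all.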
Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)
Baran, Ismet; Tutum, Cem C.; Hattel, Jesper H.
2013-01-01
In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first-order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles with corresponding…
Interval Estimation of Seismic Hazard Parameters
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2016-11-01
The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate the uncertainties of the estimates of the mean activity rate and the magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, relative to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real-dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of the uncertainty of estimates that are parameters of a multiparameter function onto this function.
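Under the Poisson occurrence model, the two point-estimate hazard functions named above reduce to short formulas; the interval machinery of the paper wraps uncertainty around them. A sketch with illustrative numbers, not values from the paper:

```python
import math

def exceedance_probability(rate, t):
    """Poisson occurrence: probability of at least one event above the
    target magnitude within time t, given the mean rate of such events."""
    return 1.0 - math.exp(-rate * t)

def mean_return_period(activity_rate, p_exceed_mag):
    """Mean return period of events above magnitude M:
    1 / (activity rate * P(magnitude > M))."""
    return 1.0 / (activity_rate * p_exceed_mag)

# Illustrative (assumed) numbers: 4 events/yr above the catalogue
# threshold, 5% of which exceed the target magnitude M.
lam, p_m = 4.0, 0.05
rate_above_M = lam * p_m                        # 0.2 events/yr above M
p50 = exceedance_probability(rate_above_M, 50)  # ~1: rate * t = 10 > 5.0
rp = mean_return_period(lam, p_m)               # 5.0 years
```

Note that `rate * t = 10` here puts this example in the regime the abstract identifies (product greater than 5.0), where the magnitude-distribution uncertainty dominates the activity-rate uncertainty.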
The CLICopti RF structure parameter estimator
Sjobak, Kyrre Ness
2014-01-01
This document describes the CLICopti RF structure parameter estimator. This is a C++ library which makes it possible to quickly estimate the parameters of an RF structure from its length, apertures, tapering, and basic cell type. Typical estimated parameters are the input power required to reach a certain voltage with a given beam current, the maximum safe pulse length for a given input power and the minimum bunch spacing in RF cycles allowed by a given long-range wake limit. The document describes the implemented physics, usage of the library through its Application Programming Interface (API) and the relation between the different parts of the library. Also discussed is how the library is checked for correctness, and the example programs included with the sources are described.
Parameter Estimation for Thurstone Choice Models
Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-04-24
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so called top-1 lists). This model accommodates the well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
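For the pair-comparison special case (the Bradley-Terry model), the maximum likelihood strengths can be computed with the classical MM iteration. A minimal sketch on a small synthetic win matrix (the counts are invented for illustration):

```python
def bradley_terry_mle(wins, n_iters=200):
    """MM (minorization-maximization) iteration for Bradley-Terry strengths.
    wins[i][j] = number of times item i beat item j."""
    k = len(wins)
    p = [1.0] * k
    for _ in range(n_iters):
        new_p = []
        for i in range(k):
            total_wins_i = sum(wins[i])
            # Each pairing (i, j) contributes n_ij / (p_i + p_j)
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(k) if j != i)
            new_p.append(total_wins_i / denom)
        s = sum(new_p)
        p = [v * k / s for v in new_p]   # normalize to mean strength 1
    return p

# Synthetic pair-comparison counts: item 0 strongest, item 2 weakest
wins = [
    [0, 8, 9],
    [2, 0, 7],
    [1, 3, 0],
]
p = bradley_terry_mle(wins)
```

The fitted strengths give the model's win probabilities directly: item i beats item j with probability p[i] / (p[i] + p[j]). The rank-breaking estimators discussed in the abstract reduce top-1 lists over larger comparison sets to exactly this kind of pairwise likelihood.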
Parameter Estimation of Turbo Code Encoder
Mehdi Teimouri
2014-01-01
Full Text Available The problem of reconstruction of a channel code consists of finding out its design parameters solely based on its output. This paper investigates the problem of reconstruction of parallel turbo codes. Reconstruction of a turbo code has been addressed in the literature assuming that some of the parameters of the turbo encoder, such as the number of input and output bits of the constituent encoders and puncturing pattern, are known. However in practical noncooperative situations, these parameters are unknown and should be estimated before applying reconstruction process. Considering such practical situations, this paper proposes a novel method to estimate the above-mentioned code parameters. The proposed algorithm increases the efficiency of the reconstruction process significantly by judiciously reducing the size of search space based on an analysis of the observed channel code output. Moreover, simulation results show that the proposed algorithm is highly robust against channel errors when it is fed with noisy observations.
LISA parameter estimation using numerical merger waveforms
Thorpe, J I; McWilliams, S T; Kelly, B J; Fahey, R P; Arnaud, K; Baker, J G, E-mail: James.I.Thorpe@nasa.go [NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771 (United States)
2009-05-07
Recent advances in numerical relativity provide a detailed description of the waveforms of coalescing massive black hole binaries (MBHBs), expected to be the strongest detectable LISA sources. We present a preliminary study of LISA's sensitivity to MBHB parameters using a hybrid numerical/analytic waveform for equal-mass, non-spinning holes. The Synthetic LISA software package is used to simulate the instrument response, and the Fisher information matrix method is used to estimate errors in the parameters. Initial results indicate that inclusion of the merger signal can significantly improve the precision of some parameter estimates. For example, the median parameter errors for an ensemble of systems with total redshifted mass of 10^6 M_⊙ at a redshift of z ≈ 1 were found to decrease by a factor of slightly more than two for signals with merger as compared to signals truncated at the Schwarzschild ISCO.
Parameter inference with estimated covariance matrices
Sellentin, Elena
2015-01-01
When inferring parameters from a Gaussian-distributed data set by computing a likelihood, a covariance matrix is needed that describes the data errors and their correlations. If the covariance matrix is not known a priori, it may be estimated and thereby becomes a random object with some intrinsic uncertainty itself. We show how to infer parameters in the presence of such an estimated covariance matrix, by marginalising over the true covariance matrix, conditioned on its estimated value. This leads to a likelihood function that is no longer Gaussian, but rather an adapted version of a multivariate $t$-distribution, which has the same numerical complexity as the multivariate Gaussian. As expected, marginalisation over the true covariance matrix improves inference when compared with Hartlap et al.'s method, which uses an unbiased estimate of the inverse covariance matrix but still assumes that the likelihood is Gaussian.
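The likelihood modification the abstract describes can be sketched directly: marginalising over the true covariance replaces the Gaussian log-likelihood with a multivariate-t-like form controlled by the number n_s of realisations used to estimate the covariance. The sketch below drops normalisation constants, and the n_s and χ² values are assumptions for illustration.

```python
import numpy as np

# Gaussian log-likelihood vs. the t-like form obtained after marginalising
# over an estimated covariance matrix (normalisation constants dropped).
# n_s is the assumed number of realisations used to estimate the covariance.

def log_like_gaussian(chi2):
    return -0.5 * chi2

def log_like_estimated_cov(chi2, n_s):
    # heavier-tailed than the Gaussian; reduces to it as n_s -> infinity
    return -0.5 * n_s * np.log1p(chi2 / (n_s - 1.0))
```

With few realisations the t-like likelihood penalises large χ² less harshly than the Gaussian, and for very large n_s the two coincide, matching the limiting behaviour stated in the abstract.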
Parameter Estimation of Noise Corrupted Sinusoids
O'Brien, Francis J; Johnnie, Nathan
2011-01-01
Existing algorithms for fitting the parameters of a sinusoid to noisy discrete-time observations are not always successful, due to initial-value sensitivity and other issues. This paper demonstrates that FIR filtering, the fast Fourier transform, and nonlinear least-squares minimization are useful for estimating the amplitude, frequency, and phase of a low-frequency time-delayed sinusoid describing simple harmonic motion. Alternative means are described for estimating frequency and phase angle. An autocorrelation function for harmonic motion is also derived.
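The FFT-then-least-squares pipeline can be sketched as follows; to keep the example dependency-free, the nonlinear refinement is replaced by a fine frequency scan with linear least squares for the quadrature amplitudes (a simplified variant, not the paper's exact algorithm; the signal parameters and noise level are assumptions).

```python
import numpy as np

# Sketch: FFT peak for an initial frequency guess, then a fine frequency scan
# where amplitude and phase enter linearly via y ~ c1*sin(2*pi*f*t) + c2*cos(2*pi*f*t).

rng = np.random.default_rng(0)
fs, n = 100.0, 1000
t = np.arange(n) / fs
a_true, f_true, ph_true = 2.0, 3.7, 0.5          # assumed toy signal
y = a_true * np.sin(2 * np.pi * f_true * t + ph_true) + 0.1 * rng.standard_normal(n)

# 1) FFT-based initial guess for the frequency
spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
f0 = freqs[np.argmax(spec[1:]) + 1]              # skip the DC bin

# 2) least-squares refinement over a fine grid around the FFT peak
best = None
for f in np.linspace(f0 - fs / n, f0 + fs / n, 201):
    A = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
    c, res, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((A @ c - y) ** 2)
    if best is None or rss < best[0]:
        best = (rss, f, c)
_, f_est, (c1, c2) = best
a_est = np.hypot(c1, c2)                         # amplitude
ph_est = np.arctan2(c2, c1)                      # phase
```

The linear substep exploits y = a·sin(ωt + φ) = (a cos φ)·sin ωt + (a sin φ)·cos ωt, so only the frequency needs a search.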
Hurst Parameter Estimation Using Artificial Neural Networks
S. Ledesma-Orozco
2011-08-01
The Hurst parameter captures the amount of long-range dependence (LRD) in a time series. There are several methods to estimate the Hurst parameter, the most popular being the variance-time plot, the R/S plot, the periodogram, and Whittle's estimator. The first three are graphical methods, and their estimation accuracy depends on how the plot is interpreted and calculated. In contrast, Whittle's estimator is based on a maximum likelihood technique and does not depend on a graph reading; however, it is computationally expensive. A new method to estimate the Hurst parameter is proposed. This new method is based on an artificial neural network. Experimental results show that this method outperforms traditional approaches and can be used in applications where a fast and accurate estimate of the Hurst parameter is required, e.g., computer network traffic control. Additionally, the Hurst parameter was computed on series of different lengths using several methods. The simulation results show that the proposed method is at least ten times faster than traditional methods.
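For reference, the classical R/S (rescaled-range) estimator mentioned in the abstract fits in a few lines; the window sizes and the white-noise test series (for which H ≈ 0.5, up to well-known small-sample bias) are assumptions for illustration.

```python
import numpy as np

# Minimal rescaled-range (R/S) Hurst estimate: average R/S over non-overlapping
# windows of several sizes, then read H off the slope of log(R/S) vs. log(n).

def rs_hurst(x, window_sizes):
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):   # non-overlapping windows
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())             # cumulative deviations
            r = z.max() - z.min()                   # range
            s = w.std()                             # scale
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)         # slope ~ H
    return slope

rng = np.random.default_rng(1)
h = rs_hurst(rng.standard_normal(20000), [16, 32, 64, 128, 256, 512, 1024])
```

This also illustrates the abstract's point about graphical methods: the result depends on the chosen window sizes and on how the log-log fit is performed.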
Unal, Resit; Morris, W. Douglas; White, Nancy H.; Lepsch, Roger A.
2004-01-01
This paper describes the development of a methodology for estimating reliability and maintainability distribution parameters for a reusable launch vehicle. A disciplinary analysis code and experimental designs are used to construct approximation models for performance characteristics. These models are then used in a simulation study to estimate performance characteristic distributions efficiently. The effectiveness and limitations of the developed methodology for launch vehicle operations simulations are also discussed.
Robust estimation of hydrological model parameters
A. Bárdossy
2008-11-01
The estimation of hydrological model parameters is a challenging task. With increasing computational power, several complex optimization algorithms have emerged, but none of them yields a unique, definitively best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured, as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best-performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of a set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used in this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
Parameter estimation methods for chaotic intercellular networks.
Inés P Mariño
We have investigated simulation-based techniques for parameter estimation in chaotic intercellular networks. The proposed methodology combines a synchronization-based framework for parameter estimation in coupled chaotic systems with some state-of-the-art computational inference methods borrowed from the field of computational statistics. The first method is a stochastic optimization algorithm, known as the accelerated random search method, and the other two techniques are based on approximate Bayesian computation (ABC). The latter is a general methodology for non-parametric inference that can be applied to practically any system of interest. The first ABC-based method is a Markov chain Monte Carlo scheme that generates a series of random parameter realizations for which a low synchronization error is guaranteed. We show that accurate parameter estimates can be obtained by averaging over these realizations. The second ABC-based technique is a sequential Monte Carlo scheme. The algorithm generates a sequence of "populations", i.e., sets of randomly generated parameter values, where the members of a certain population attain a synchronization error that is smaller than the error attained by members of the previous population. Again, we show that accurate estimates can be obtained by averaging over the parameter values in the last population of the sequence. We have also analysed how effective these methods are from a computational perspective. For the numerical simulations we have considered a network that consists of two modified repressilators with identical parameters, coupled by the fast diffusion of the autoinducer across the cell membranes.
Parameter estimation for an expanding universe
Jieci Wang
2015-03-01
We study parameter estimation for excitations of Dirac fields in the expanding Robertson–Walker universe. We employ quantum metrology techniques to demonstrate the possibility of high-precision estimation of the volume rate of the expanding universe. We show that the optimal precision of the estimation depends sensitively on the dimensionless mass m̃ and dimensionless momentum k̃ of the Dirac particles. The optimal precision for the rate estimation peaks at some finite dimensionless mass m̃ and momentum k̃. We find that the precision of the estimation can be improved by choosing the probe state as an eigenvector of the Hamiltonian. This occurs because the largest quantum Fisher information is obtained by performing projective measurements implemented by the projectors onto the eigenvectors of specific probe states.
Parameter estimation of harmonic polluting industrial loads
Maza-Ortega, J.M.; Gomez-Exposito, A.; Trigo-Garcia, J.L.; Burgos-Payan, M. [University of Sevilla, Sevilla (Spain). Department of Electrical Engineering
2005-12-01
This paper develops a methodology for the estimation of relevant parameters characterizing harmonic-polluting industrial loads through a set of measurements acquired at the point of common coupling. The proposed method is capable of obtaining an accurate load model in the absence of detailed information about its internal structure and composition.
Using Digital Filtration for Hurst Parameter Estimation
J. Prochaska
2009-06-01
We present a new method to estimate the Hurst parameter. The method exploits the form of the autocorrelation function for second-order self-similar processes and is based on one-pass digital filtration. We compare the performance and properties of the new method with those of the most common methods.
Discriminative parameter estimation for random walks segmentation.
Baudin, Pierre-Yves; Goodman, Danny; Kumrnar, Puneet; Azzabou, Noura; Carlier, Pierre G; Paragios, Nikos; Kumar, M Pawan
2013-01-01
The Random Walks (RW) algorithm is one of the most efficient and easy-to-use probabilistic segmentation methods. By combining contrast terms with prior terms, it provides accurate segmentations of medical images in a fully automated manner. However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned. We propose a novel discriminative learning framework that estimates the parameters using a training dataset. The main challenge we face is that the training samples are not fully supervised. Specifically, they provide a hard segmentation of the images, instead of a probabilistic segmentation. We overcome this challenge by treating the optimal probabilistic segmentation that is compatible with the given hard segmentation as a latent variable. This allows us to employ the latent support vector machine formulation for parameter estimation. We show that our approach significantly outperforms the baseline methods on a challenging dataset consisting of real clinical 3D MRI volumes of skeletal muscles.
Parameter estimation in stochastic differential equations
Bishwal, Jaya P N
2008-01-01
Parameter estimation in stochastic differential equations and stochastic partial differential equations is the science, art and technology of modelling complex phenomena and making beautiful decisions. The subject has attracted researchers from several areas of mathematics and other related fields like economics and finance. This volume presents the estimation of the unknown parameters in the corresponding continuous models based on continuous and discrete observations and examines extensively maximum likelihood, minimum contrast and Bayesian methods. Useful because of the current availability of high frequency data is the study of refined asymptotic properties of several estimators when the observation time length is large and the observation time interval is small. Also space time white noise driven models, useful for spatial data, and more sophisticated non-Markovian and non-semimartingale models like fractional diffusions that model the long memory phenomena are examined in this volume.
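One of the simplest cases in this subject is drift estimation for an Ornstein-Uhlenbeck diffusion dX = -θX dt + σ dW from discrete observations: under exact discretisation the process is AR(1), so θ follows from the lag-one regression coefficient. The parameter values below are assumptions for a toy illustration, not an example from the book.

```python
import numpy as np

# Drift estimation for an Ornstein-Uhlenbeck process observed at step dt.
# Exact discretisation: X_{k+1} = a X_k + noise, with a = exp(-theta*dt).

rng = np.random.default_rng(7)
theta_true, sigma, dt, n = 1.5, 0.3, 0.01, 200000   # assumed toy values

a = np.exp(-theta_true * dt)
noise_sd = sigma * np.sqrt((1 - a**2) / (2 * theta_true))
x = np.empty(n)
x[0] = 0.0
for k in range(n - 1):
    x[k + 1] = a * x[k] + noise_sd * rng.standard_normal()

# Estimate: least-squares AR(1) coefficient, then invert the discretisation.
a_hat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
theta_hat = -np.log(a_hat) / dt
```

This is the discrete-observation, large-sample regime the volume's asymptotic results address; the AR(1) trick only works because the OU transition density is Gaussian and known in closed form.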
Jones, Mark Nicholas; Frutiger, Jerome; Abildskov, Jens
We present a new software tool called SAFEPROPS which is able to estimate major safety-related and environmental properties for organic compounds. SAFEPROPS provides accurate, reliable and fast predictions using the Marrero-Gani group contribution (MG-GC) method. It is implemented using Python … as the main programming language, while the necessary parameters together with their correlation matrix are obtained from a SQLite database which has been populated using off-line parameter and error estimation routines (Eq. 3-8).
Parameter estimation and forecasting for multiplicative log-normal cascades.
Leövey, Andrés E; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting of volatility for a sample of financial data from stock and foreign exchange markets.
Sensor Placement for Modal Parameter Subset Estimation
Ulriksen, Martin Dalgaard; Bernal, Dionisio; Damkilde, Lars
2016-01-01
The present paper proposes an approach for deciding on sensor placements in the context of modal parameter estimation from vibration measurements. The approach is based on placing sensors, of which the amount is determined a priori, such that the minimum Fisher information that the frequency responses carry on the selected modal parameter subset is, in some sense, maximized. The approach is validated in the context of a simple 10-DOF mass-spring-damper system by computing the variance of a set of identified modal parameters in a Monte Carlo setting for a set of sensor configurations. It is shown that the widely used Effective Independence (EI) method, which uses the modal amplitudes as surrogates for the parameters of interest, provides sensor configurations yielding theoretical lower-bound variances whose maxima are up to 30 % larger than those obtained by use of the max-min approach.
Nonparametric estimation of location and scale parameters
Potgieter, C.J.
2012-12-01
Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal assumptions regarding the form of the distribution functions of X and Y. We discuss an approach to the estimation problem that is based on asymptotic likelihood considerations. Our results enable us to provide a methodology that can be implemented easily and which yields estimators that are often near optimal when compared to fully parametric methods. We evaluate the performance of the estimators in a series of Monte Carlo simulations. © 2012 Elsevier B.V. All rights reserved.
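A crude distribution-free estimator in the same spirit can be sketched by quantile matching: if Y ~ μ + σX, then quantiles satisfy Q_Y(p) = μ + σ Q_X(p), so σ can be read off from interquartile ranges and μ from medians. This is an illustrative stand-in under that assumption, not the likelihood-based estimator the paper develops.

```python
import numpy as np

# Quantile-matching location-scale estimator: sigma from the IQR ratio,
# mu from the medians.  Distribution-free, but cruder than the paper's method.

def location_scale_fit(x, y):
    iqr_x = np.subtract(*np.percentile(x, [75, 25]))
    iqr_y = np.subtract(*np.percentile(y, [75, 25]))
    sigma = iqr_y / iqr_x
    mu = np.median(y) - sigma * np.median(x)
    return mu, sigma

rng = np.random.default_rng(2)
x = rng.standard_normal(50000)
y = 2.0 + 3.0 * rng.standard_normal(50000)   # same family: mu = 2, sigma = 3
mu_hat, sigma_hat = location_scale_fit(x, y)
```

Quantile matching is robust to heavy tails but sacrifices efficiency, which is exactly the trade-off the paper's near-optimal estimators are designed to avoid.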
Parameter estimation in channel network flow simulation
Han Longxi
2008-01-01
Simulations of water flow in channel networks require estimated values of roughness for all the individual channel segments that make up a network. When the number of individual channel segments is large, the parameter calibration workload is substantial and a high level of uncertainty in estimated roughness cannot be avoided. In this study, all the individual channel segments are graded according to the factors determining the value of roughness. It is assumed that channel segments with the same grade have the same value of roughness. Based on observed hydrological data, an optimal model for roughness estimation is built. The procedure of solving the optimal problem using the optimal model is described. In a test of its efficacy, this estimation method was applied successfully in the simulation of tidal water flow in a large complicated channel network in the lower reach of the Yangtze River in China.
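A stripped-down version of such roughness calibration, with a single rectangular channel governed by Manning's formula standing in for the channel network (geometry, stage data, and noise below are all assumptions), looks like this:

```python
import numpy as np

# Toy roughness calibration: pick Manning's n so simulated discharge best
# matches observed discharge in a least-squares sense.  The single rectangular
# channel is an assumed stand-in for the paper's graded channel network.

def manning_discharge(n, depth, width=50.0, slope=2e-4):
    area = width * depth
    radius = area / (width + 2 * depth)                  # hydraulic radius
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * np.sqrt(slope)

rng = np.random.default_rng(3)
depth_obs = rng.uniform(1.0, 4.0, 40)                    # assumed stage observations
q_obs = manning_discharge(0.03, depth_obs) * (1 + 0.02 * rng.standard_normal(40))

# one-dimensional least-squares search over candidate roughness values
candidates = np.linspace(0.01, 0.06, 501)
errors = [np.sum((manning_discharge(n, depth_obs) - q_obs) ** 2) for n in candidates]
n_hat = candidates[int(np.argmin(errors))]
```

With one channel the optimum is a simple 1-D search; the paper's grading of segments is what keeps the search dimension manageable when a network has many segments.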
Power Network Parameter Estimation Method Based on Data Mining Technology
ZHANG Qi-ping; WANG Cheng-min; HOU Zhi-fian
2008-01-01
The parameter values, which actually change with circumstances, weather, load level, etc., have a great effect on the result of state estimation. A new parameter estimation method based on data mining technology was proposed. The clustering method was used to classify the historical data in the supervisory control and data acquisition (SCADA) database into several types. Data processing technology was applied to treat isolated points, missing data and noisy data in the samples for the classified groups. The measurement data belonging to each classification were introduced into the linear regression equation in order to obtain the regression coefficients and actual parameters by the least squares method. A practical system demonstrates the high correctness, reliability and strong practicability of the proposed method.
On closure parameter estimation in chaotic systems
J. Hakkarainen
2012-02-01
Many dynamical models, such as numerical weather prediction and climate models, contain so-called closure parameters. These parameters usually appear in physical parameterizations of sub-grid-scale processes, and they act as "tuning handles" of the models. Currently, the values of these parameters are specified mostly manually, but the increasing complexity of the models calls for more algorithmic ways to perform the tuning. Traditionally, parameters of dynamical systems are estimated by directly comparing the model simulations to observed data using, for instance, a least-squares approach. However, if the models are chaotic, the classical approach can be ineffective, since small errors in the initial conditions can lead to large, unpredictable deviations from the observations. In this paper, we study numerical methods available for estimating closure parameters in chaotic models. We discuss three techniques: off-line likelihood calculations using filtering methods, the state augmentation method, and an approach that utilizes summary statistics from long model simulations. The properties of the methods are studied using a modified version of the Lorenz 95 system, where the effect of fast variables is described using a simple parameterization.
Quantum Estimation of Parameters of Classical Spacetimes
Downes, T G; Knill, E; Milburn, G J; Caves, C M
2016-01-01
We describe a quantum limit to measurement of classical spacetimes. Specifically, we formulate a quantum Cramér-Rao lower bound for estimating the single parameter in any one-parameter family of spacetime metrics. We employ the locally covariant formulation of quantum field theory in curved spacetime, which allows for a manifestly background-independent derivation. The result is an uncertainty relation that applies to all globally hyperbolic spacetimes. Among other examples, we apply our method to detection of gravitational waves using the electromagnetic field as a probe, as in laser-interferometric gravitational-wave detectors. Other applications are discussed, from terrestrial gravimetry to cosmology.
Parameter estimation using B-Trees
Schmidt, Albrecht; Bøhlen, Michael H.
2004-01-01
This paper presents a method for accelerating algorithms for computing common statistical operations like parameter estimation or sampling on B-Tree indexed data; the work was carried out in the context of visualisation of large scientific data sets. The underlying idea is the following: the shape … We look at opportunities and limitations of this approach for visualisation of large data sets. The advantages of the method are manifold. Not only does it enable advanced algorithms through a performance boost for basic operations like density estimation, but it also builds on functionality that is already present …
Renal parameter estimates in unrestrained dogs
Rader, R. D.; Stevens, C. M.
1974-01-01
A mathematical formulation has been developed to describe the hemodynamic parameters of a conceptualized kidney model. The model was developed by considering regional pressure drops and regional storage capacities within the renal vasculature. Estimation of renal artery compliance, pre- and postglomerular resistance, and glomerular filtration pressure is feasible by considering mean levels and time derivatives of abdominal aortic pressure and renal artery flow. Changes in the smooth muscle tone of the renal vessels induced by exogenous angiotensin amide, acetylcholine, and by the anaesthetic agent halothane were estimated by use of the model. By employing totally implanted telemetry, the technique was applied on unrestrained dogs to measure renal resistive and compliant parameters while the dogs were being subjected to obedience training, to avoidance reaction, and to unrestrained caging.
Bayesian parameter estimation for effective field theories
Wesolowski, S; Furnstahl, R J; Phillips, D R; Thapaliya, A
2015-01-01
We present procedures based on Bayesian statistics for effective field theory (EFT) parameter estimation from data. The extraction of low-energy constants (LECs) is guided by theoretical expectations that supplement such information in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools are developed that analyze the fit and ensure that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems and the extraction of LECs for the nucleon mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.
Rapid Compact Binary Coalescence Parameter Estimation
Pankow, Chris; Brady, Patrick; O'Shaughnessy, Richard; Ochsner, Evan; Qi, Hong
2016-03-01
The first observation run with second-generation gravitational-wave observatories will conclude at the beginning of 2016. Given their unprecedented and growing sensitivity, prompt and accurate estimation of the orientation and physical parameters of binary coalescences is clearly beneficial for coordination with electromagnetic astrophysics and observations. Popular Bayesian schemes to measure properties of compact object binaries use Markovian sampling to compute the posterior. While very successful, in some cases convergence is delayed until well after the electromagnetic fluence has subsided, thus diminishing the potential science return. With this in mind, we have developed a scheme which is also Bayesian and simply parallelizable across all available computing resources, drastically decreasing convergence time to a few tens of minutes. In this talk, I will emphasize the complementary use of results from low-latency gravitational-wave searches to improve computational efficiency and demonstrate the capabilities of our parameter estimation framework with a simulated set of binary compact object coalescences.
CosmoSIS: modular cosmological parameter estimation
Zuntz, Joe; Jennings, Elise; Rudd, Douglas; Manzotti, Alessandro; Dodelson, Scott; Bridle, Sarah; Sehrish, Saba; Kowalkowski, James
2014-01-01
Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including CAMB, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis
Bayesian parameter estimation for effective field theories
Wesolowski, S.; Klco, N.; Furnstahl, R. J.; Phillips, D. R.; Thapaliya, A.
2016-07-01
We present procedures based on Bayesian statistics for estimating, from data, the parameters of effective field theories (EFTs). The extraction of low-energy constants (LECs) is guided by theoretical expectations in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools is developed that analyzes the fit and ensures that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems, including the extraction of LECs for the nucleon-mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.
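The role of the naturalness prior can be illustrated with a toy linear model: with Gaussian noise and a prior a_k ~ N(0, 1) on the expansion coefficients, the posterior mean has a closed ridge-regression form. The polynomial model and synthetic data below are assumptions standing in for the EFT expansion and lattice data.

```python
import numpy as np

# Gaussian "naturalness" prior on expansion coefficients: for a linear model
# with noise sigma, the posterior mean is (X^T X / s^2 + I)^-1 X^T y / s^2,
# i.e. a ridge-regularised least-squares fit.  Toy data, not lattice results.

rng = np.random.default_rng(4)
a_true = np.array([0.5, -1.0, 0.25])        # natural-sized coefficients (assumed)
x = rng.uniform(-1.0, 1.0, 200)
sigma = 0.05
X = np.vander(x, 3, increasing=True)        # columns: 1, x, x^2
y = X @ a_true + sigma * rng.standard_normal(len(x))

A = X.T @ X / sigma**2 + np.eye(3)          # prior a ~ N(0, I) adds the identity
a_post = np.linalg.solve(A, X.T @ y / sigma**2)
```

When the data are informative the prior term is negligible and the posterior mean approaches the ordinary least-squares fit; with sparse or noisy data the prior pulls the coefficients toward natural size, which is the overfitting protection the abstract describes.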
Parameter estimation and investigation of a bolted joint model
Shiryayev, O. V.; Page, S. M.; Pettit, C. L.; Slater, J. C.
2007-11-01
Mechanical joints are a primary source of variability in the dynamics of built-up structures. Physical phenomena in the joint are quite complex and therefore too impractical to model at the micro-scale. This motivates the development of lumped parameter joint models with discrete interfaces so that they can be easily implemented in finite element codes. Among the most important considerations in choosing a model for dynamically excited systems is its ability to model energy dissipation. This translates into the need for accurate and reliable methods to measure model parameters and estimate their inherent variability from experiments. The adjusted Iwan model was identified as a promising candidate for representing joint dynamics. Recent research focused on this model has exclusively employed impulse excitation in conjunction with neural networks to identify the model parameters. This paper presents an investigation of an alternative parameter estimation approach for the adjusted Iwan model, which employs data from oscillatory forcing. This approach is shown to produce parameter estimates with precision similar to the impulse excitation method for a range of model parameters.
Optimal design criteria - prediction vs. parameter estimation
Waldl, Helmut
2014-05-01
G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, the computation of the kriging variance, and even more so of the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that in practice we cannot really find the G-optimal design with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield fundamentally different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
Errors on errors - Estimating cosmological parameter covariance
Joachimi, Benjamin
2014-01-01
Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.
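The finite-realisation bias in the inverse covariance can be demonstrated in a few lines of Monte Carlo, together with the standard Hartlap debiasing factor (n - p - 2)/(n - 1) for the inverse of a Wishart-distributed covariance estimate; the dimensions below are assumptions for illustration.

```python
import numpy as np

# Monte Carlo illustration: the inverse of a covariance matrix estimated from
# n_sims realisations is biased high; the Hartlap factor corrects the bias.

rng = np.random.default_rng(5)
p, n_sims, trials = 5, 20, 2000
hartlap = (n_sims - p - 2) / (n_sims - 1)    # debiasing factor for inv(cov_hat)

raw = 0.0
for _ in range(trials):
    sims = rng.standard_normal((n_sims, p))  # true covariance = identity
    cov_hat = np.cov(sims, rowvar=False)
    raw += np.trace(np.linalg.inv(cov_hat)) / p
raw /= trials                                # biased: ~ (n-1)/(n-p-2), about 1.46 here
corrected = hartlap * raw                    # ~ 1, matching the true inverse
```

This is precisely the bias whose downstream effect on parameter error bars the review discusses; note the correction fixes the mean of the inverse covariance but not the extra scatter it induces.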
Online Dynamic Parameter Estimation of Synchronous Machines
West, Michael R.
Traditionally, synchronous machine parameters are determined through an offline characterization procedure. The IEEE 115 standard suggests a variety of mechanical and electrical tests to capture the fundamental characteristics and behaviors of a given machine. These characteristics and behaviors can be used to develop and understand machine models that accurately reflect the machine's performance. To perform such tests, the machine is required to be removed from service. Characterizing a machine offline can result in economic losses due to down time, labor expenses, etc. Such losses may be mitigated by implementing online characterization procedures. Historically, different approaches have been taken to develop methods of calculating a machine's electrical characteristics, without removing the machine from service. Using a machine's input and response data combined with a numerical algorithm, a machine's characteristics can be determined. This thesis explores such characterization methods and strives to compare the IEEE 115 standard for offline characterization with the least squares approximation iterative approach implemented on a 20 h.p. synchronous machine. This least squares estimation method of online parameter estimation shows encouraging results for steady-state parameters, in comparison with steady-state parameters obtained through the IEEE 115 standard.
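A minimal sketch of online least-squares identification is the recursive least-squares loop with a forgetting factor below; the two-parameter linear model and the artificial mid-run parameter step (imitating a sudden change) are assumptions for illustration, not the thesis' machine data.

```python
import numpy as np

# Recursive least squares (RLS) with exponential forgetting, tracking a
# parameter step change halfway through the run.  Toy model, assumed values.

rng = np.random.default_rng(6)
lam = 0.95                                   # forgetting factor
theta = np.zeros(2)                          # parameter estimate
P = 1000.0 * np.eye(2)                       # inverse-correlation matrix

estimates = []
for k in range(600):
    true_theta = np.array([1.0, -0.5]) if k < 300 else np.array([2.0, -0.5])
    phi = rng.standard_normal(2)             # regressor vector
    y = true_theta @ phi + 0.01 * rng.standard_normal()
    # standard RLS update
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi) @ P) / lam
    estimates.append(theta.copy())

final = estimates[-1]
```

The forgetting factor trades steady-state variance for tracking speed: λ close to 1 averages over a long window, while smaller λ lets the estimate follow sudden changes, which is the motivation for forgetting factors in the fault-detection setting above.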
Parameter estimation, model reduction and quantum filtering
Chase, Bradley A.
This thesis explores the topics of parameter estimation and model reduction in the context of quantum filtering. The latter is a mathematically rigorous formulation of continuous quantum measurement, in which a stream of auxiliary quantum systems is used to infer the state of a target quantum system. Fundamental quantum uncertainties appear as noise which corrupts the probe observations and therefore must be filtered in order to extract information about the target system. This is analogous to the classical filtering problem, in which techniques of inference are used to process noisy observations of a system in order to estimate its state. Given the clear similarities between the two filtering problems, I devote the beginning of this thesis to a review of classical and quantum probability theory, stochastic calculus and filtering. This allows for a mathematically rigorous and technically adroit presentation of the quantum filtering problem and solution. Given this foundation, I next consider the related problem of quantum parameter estimation, in which one seeks to infer the strength of a parameter that drives the evolution of a probe quantum system. By embedding this problem in the state estimation problem solved by the quantum filter, I present the optimal Bayesian estimator for a parameter when given continuous measurements of the probe system to which it couples. For cases when the probe takes on a finite number of values, I review a set of sufficient conditions for asymptotic convergence of the estimator. For a continuous-valued parameter, I present a computational method called quantum particle filtering for practical estimation of the parameter. Using these methods, I then study the particular problem of atomic magnetometry and review an experimental method for potentially reducing the uncertainty in the estimate of the magnetic field beyond the standard quantum limit. The technique involves double-passing a probe laser field through the atomic system, giving
Parameter estimation and model selection in computational biology.
Gabriele Lillacci
2010-03-01
A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time-course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and are taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess if it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
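As a rough sketch of the recursive-estimator idea, the following joint state/parameter extended Kalman filter estimates an unknown decay rate from noisy observations. The scalar model, noise levels, and all numeric values are invented for illustration; they are not the paper's biological models.

```python
import numpy as np

# Joint state/parameter EKF on dx/dt = -k*x with unknown k.
# Augmented state s = [x, k]; k is treated as a constant to be estimated.
rng = np.random.default_rng(1)
dt, k_true, n_steps = 0.05, 0.8, 200

z = np.empty(n_steps)                    # noisy observations of x
xs = 5.0
for t in range(n_steps):
    xs = xs - k_true * xs * dt           # simulate the "true" system
    z[t] = xs + rng.normal(0, 0.02)

s = np.array([4.0, 0.3])                 # initial guess for [x, k]
P = np.diag([1.0, 1.0])                  # state covariance
Q = np.diag([1e-6, 1e-6])                # small process noise
R = 0.02 ** 2                            # measurement noise variance
H = np.array([[1.0, 0.0]])               # we observe x only

for t in range(n_steps):
    # Predict: f([x, k]) = [x - k*x*dt, k]; F is its Jacobian.
    F = np.array([[1.0 - s[1] * dt, -s[0] * dt],
                  [0.0, 1.0]])
    s = np.array([s[0] - s[1] * s[0] * dt, s[1]])
    P = F @ P @ F.T + Q
    # Update with the new observation.
    y = z[t] - s[0]                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T / S                      # Kalman gain
    s = s + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P

k_est = s[1]                             # filtered estimate of the rate k
```

The augmentation trick (appending unknown parameters to the state vector) is the standard way an EKF turns parameter estimation into state estimation.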
Maximized Reliability Estimates for Some Research Scales of the MMPI.
Wagner, Edwin E.; And Others
1990-01-01
This study, using data for 200 psychiatric/chemical dependency patients, attempted to justify subscales of the Minnesota Multiphasic Personality Inventory (MMPI). Distributions of all possible split-half correlations for certain research scales of the MMPI revealed negative skewness resulting in spuriously lowered reliability estimates. The scales…
Estimating the Reliability of a Test Containing Multiple Item Formats.
Qualls, Audrey L.
1995-01-01
Classically parallel, tau-equivalently parallel, and congenerically parallel models representing various degrees of part-test parallelism and their appropriateness for tests composed of multiple item formats are discussed. An appropriate reliability estimate for a test with multiple item formats is presented and illustrated. (SLD)
PARAMETER ESTIMATION IN BREAD BAKING MODEL
Hadiyanto Hadiyanto
2012-05-01
Bread product quality is highly dependent on the baking process. A model for the development of product quality, obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C, both alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first the heat and mass transfer parameters, then the parameters related to product transformations, and finally the product quality parameters. There was fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted the behavior under dynamic convective operation and under combined convective and microwave operation well. It is expected that the agreement between the model and the baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels.
Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms
Berhausen, Sebastian; Paszek, Stefan
2016-01-01
In recent years, system failures have occurred in many power systems all over the world, leaving large numbers of consumers without power supply. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require an up-to-date base of parameters for the models of generating units, including the models of synchronous generators. This paper presents a method for parameter estimation of a synchronous generator nonlinear model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. Calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
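The waveform-matching objective described above can be illustrated on a toy problem. Here a damped oscillation stands in for the generator transients, and all parameter values are invented; the actual estimation uses a hybrid minimization algorithm rather than a local least-squares solver.

```python
import numpy as np
from scipy.optimize import least_squares

# Fit a damped oscillation's parameters (a, w) by minimizing the residual
# between a "measured" waveform and the simulated one, i.e., a mean-square
# error objective. All values here are illustrative.
rng = np.random.default_rng(2)
t = np.linspace(0, 2, 400)
a_true, w_true = 1.5, 6.0
measured = np.exp(-a_true * t) * np.cos(w_true * t) + rng.normal(0, 0.01, t.size)

def model(p):
    a, w = p
    return np.exp(-a * t) * np.cos(w * t)

# Start near the truth; waveform-matching objectives have local minima.
res = least_squares(lambda p: model(p) - measured, x0=[1.2, 5.5])
a_est, w_est = res.x
```

Minimizing the sum of squared residuals over the sampled waveform is exactly the mean-square-error criterion (up to a constant factor 1/N).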
Parameter estimation in tree graph metabolic networks
Laura Astola
2016-09-01
We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis-Menten kinetics. In reality, the catalytic rates, which are affected by kinetic constants and enzyme concentrations among other factors, change in time, and the approaches just mentioned cannot describe this phenomenon. Another problem is that these kinetic coefficients are in general not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim to reduce this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: first, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Second, it is relatively fast compared to commonly applied methods that estimate the model derivatives together with the network parameters. Third, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time-series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.
Uncertainty relation based on unbiased parameter estimations
Sun, Liang-Liang; Song, Yong-Shun; Qiao, Cong-Feng; Yu, Sixia; Chen, Zeng-Bing
2017-02-01
Heisenberg's uncertainty relation has been extensively studied in the spirit of its well-known original form, in which the inaccuracy measures used exhibit some controversial properties and do not conform with quantum metrology, where the measurement precision is well defined in terms of estimation theory. In this paper, we treat the joint measurement of incompatible observables as a parameter estimation problem, i.e., estimating the parameters characterizing the statistics of the incompatible observables. Our crucial observation is that, in a sequential measurement scenario, the bias induced by the first unbiased measurement in the subsequent measurement can be eradicated by the information acquired, allowing one to extract unbiased information of the second measurement of an incompatible observable. In terms of Fisher information we propose a kind of information comparison measure and explore various types of trade-offs between the information gains and measurement precisions, which interpret the uncertainty relation as a surplus-variance trade-off over individual perfect measurements instead of a constraint on extracting complete information about incompatible observables.
Toward unbiased estimations of the statefinder parameters
Aviles, Alejandro; Klapp, Jaime; Luongo, Orlando
2017-09-01
With the use of simulated supernova catalogs, we show that the statefinder parameters are estimated poorly, and with bias, by standard cosmography. To this end, we compute their standard deviations and several bias statistics on cosmologies near the concordance model, demonstrating that these are very large and making standard cosmography unsuitable for future and wider compilations of data. To overcome this issue, we propose a new method that consists in introducing the series of the Hubble function into the luminosity distance, instead of considering the usual direct Taylor expansions of the luminosity distance. Moreover, in order to speed up the numerical computations, we estimate the coefficients of our expansions in a hierarchical manner, in which the order of the expansion depends on the redshift of every single piece of data. In addition, we propose two hybrid methods that incorporate standard cosmography at low redshifts. The methods presented here perform better than the standard approach of cosmography in both the errors and the bias of the estimated statefinders. We further propose a one-parameter diagnostic to reject non-viable methods in cosmography.
eGFR is a reliable preoperative renal function parameter in patients with gastric cancer
Takayuki Kosuge; Tokihiko Sawada; Yoshimi Iwasaki; Junji Kita; Mitsugi Shimoda; Nobumi Tagaya; Keiichi Kubota
2010-01-01
AIM: To evaluate the validity of the estimated glomerular filtration rate (eGFR) as a preoperative renal function parameter in patients with gastric cancer. METHODS: A retrospective study was conducted in 147 patients with gastric cancer. Preoperative creatinine clearance (Ccr), eGFR, and pre- and postoperative serum creatinine (sCr) data were examined. Preoperative Ccr and eGFR were then compared for their reliability in predicting postoperative renal dysfunction. RESULTS: Among 110 patients with normal preo...
Parameter estimation for lithium ion batteries
Santhanagopalan, Shriram
With an increase in the demand for lithium based batteries at the rate of about 7% per year, the amount of effort put into improving the performance of these batteries from both experimental and theoretical perspectives is increasing. There exist a number of mathematical models ranging from simple empirical models to complicated physics-based models to describe the processes leading to failure of these cells. The literature is also rife with experimental studies that characterize the various properties of the system in an attempt to improve the performance of lithium ion cells. However, very little has been done to quantify the experimental observations and relate these results to the existing mathematical models. In fact, the best of the physics based models in the literature show as much as 20% discrepancy when compared to experimental data. The reasons for such a big difference include, but are not limited to, numerical complexities involved in extracting parameters from experimental data and inconsistencies in interpreting directly measured values for the parameters. In this work, an attempt has been made to implement simplified models to extract parameter values that accurately characterize the performance of lithium ion cells. The validity of these models under a variety of experimental conditions is verified using a model discrimination procedure. Transport and kinetic properties are estimated using a non-linear estimation procedure. The initial state of charge inside each electrode is also maintained as an unknown parameter, since this value plays a significant role in accurately matching experimental charge/discharge curves with model predictions and is not readily known from experimental data. The second part of the dissertation focuses on parameters that change rapidly with time. For example, in the case of lithium ion batteries used in Hybrid Electric Vehicle (HEV) applications, the prediction of the State of Charge (SOC) of the cell under a variety of
Reliability of SEMG spike parameters during concentric contractions.
Gabriel, D A
2000-01-01
This study examined the reliability of four surface electromyographic (SEMG) spike parameters during concentric (isotonic) contractions: mean spike amplitude, mean spike frequency, mean spike slope, and the mean number of peaks per spike. Eighteen subjects performed rapid elbow flexion on a horizontal angular displacement device that was used to measure joint torque. The SEMG activity of the biceps brachii was monitored with Beckman Ag/AgCl electrodes. The testing schedule consisted of four hundred trials distributed equally over four sessions. The stability of the means across sessions and the consistency of scores within subjects were determined for the first five (1-5) and last five (96-100) trials of each session to examine the possible influence of a "warm up" effect. All measures exhibited a significant (p contractions enhanced the ability to recruit more fast-twitch motor units across test days.
Li, Yunlong; Wang, Xiaojun; Wang, Lei; Fan, Weichao; Qiu, Zhiping
2017-01-01
A systematic non-probabilistic reliability analysis procedure for structural vibration active control systems with unknown-but-bounded parameters is proposed. The state-space representation of an active vibration control system with uncertain parameters is presented. Compared with robust control theory, which is always over-conservative, the reliability-based analysis method is more suitable for dealing with uncertain problems. Stability is the core of closed-loop feedback control system design, so a stability criterion is adopted as the limit state function for reliability analysis. The uncertain parameters without enough samples are modeled as interval variables. An interval perturbation method is employed to estimate the interval bounds of the eigenvalues, which can be used to characterize the stability of the closed-loop active control system. A formulation defining the reliability of the active control system based on stability is discussed. A novel non-probabilistic reliability measurement index is introduced and used to determine the probability of stability based on the area ratio. The feasibility and efficiency of the proposed method are demonstrated by two numerical examples.
Reliability of radiographic parameters in adults with hip dysplasia
Terjesen, Terje [Oslo University Hospital, Rikshospitalet, Department of Orthopaedics, Oslo (Norway); Gunderson, Ragnhild B. [Oslo University Hospital, Rikshospitalet, Department of Radiology, Oslo (Norway)
2012-07-15
To assess the reliability of radiographic measurements in adults previously treated for developmental dysplasia of the hip (DDH) and to clarify whether these parameters differ according to the position of the patient (supine versus standing). Fifty-one patients (41 females and 10 males) with 63 affected hips were included in the study. The mean follow-up period was 45 (44-49) years in the patients who had not undergone total hip replacement (THR). Anteroposterior radiographs of the pelvis were taken with the patient in the supine and in the standing position. The measurements used for residual hip dysplasia were the center-edge (CE) angle and the migration percentage (MP). The joint space width (JSW) was measured at three or four locations of the upper, weight-bearing part of the joint, and the shortest distance was termed the minimum joint space width (minJSW). One radiologist and one orthopaedic surgeon, each with more than 30 years of experience, independently measured the radiographic parameters. The limits of agreement (LOA) of the CE angle (mean interobserver difference ± 2 SD) were within the range -8° to 7°. The LOA of the MP were in the range -8% to 8%, and of the minJSW -0.6 to 1.1 mm. The mean differences in CE angle between supine and standing radiographs (supine - standing) ranged from -1.1° to 0.0°, and the mean differences in MP between supine and standing positions were below 1%. The mean positional differences in minJSW were below 0.1 mm and were not statistically significant. The interobserver variations with regard to CE angle, MP, and minJSW were moderate, indicating that these are reliable measurements in clinical practice. Femoral head coverage and JSW did not differ significantly between supine and weight-bearing positions.
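The limits-of-agreement statistic used above (mean interobserver difference ± 2 SD) is straightforward to compute; the sketch below uses invented observer measurements, not the study's data.

```python
import numpy as np

# Two observers measure the same quantity (e.g., CE angle, in degrees) on
# the same set of hips. Values are invented for illustration.
obs1 = np.array([25.0, 30.0, 18.0, 40.0, 33.0, 27.0, 22.0, 35.0])
obs2 = np.array([27.0, 29.0, 20.0, 38.0, 34.0, 25.0, 24.0, 36.0])

diff = obs1 - obs2
mean_diff = diff.mean()                     # systematic interobserver bias
loa_low = mean_diff - 2 * diff.std(ddof=1)  # lower limit of agreement
loa_high = mean_diff + 2 * diff.std(ddof=1) # upper limit of agreement
```

Narrow limits of agreement mean two readers would rarely disagree by a clinically relevant amount, which is the sense in which the paper calls its measurements reliable.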
Improving gravitational-wave parameter estimation using Gaussian process regression
Moore, Christopher J; Chua, Alvin J K; Gair, Jonathan R
2015-01-01
Folding uncertainty in theoretical models into Bayesian parameter estimation is necessary in order to make reliable inferences. A general means of achieving this is by marginalising over model uncertainty using a prior distribution constructed using Gaussian process regression (GPR). Here, we apply this technique to (simulated) gravitational-wave signals from binary black holes that could be observed using advanced-era gravitational-wave detectors. Unless properly accounted for, uncertainty in the gravitational-wave templates could be the dominant source of error in studies of these systems. We explain our approach in detail and provide proofs of various features of the method, including the limiting behaviour for high signal-to-noise, where systematic model uncertainties dominate over noise errors. We find that the marginalised likelihood constructed via GPR offers a significant improvement in parameter estimation over the standard, uncorrected likelihood. We also examine the dependence of the method on the ...
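A minimal Gaussian process regression sketch may help make the GPR ingredient concrete. The kernel choice, length scale, and training data here are invented; the paper applies GPR to gravitational-wave template uncertainties, not to this toy function.

```python
import numpy as np

# Bare-bones GP regression with a squared-exponential (RBF) kernel.
def rbf(a, b, length=0.5, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

x_train = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y_train = np.sin(x_train)                 # toy "accurate model" samples
jitter = 1e-6                             # numerical regularization

x_test = np.array([0.75])
K = rbf(x_train, x_train) + jitter * np.eye(x_train.size)
k_star = rbf(x_test, x_train)
mean = k_star @ np.linalg.solve(K, y_train)   # GP posterior mean at x_test
```

The same machinery, with a kernel over waveform differences, is what lets one build a prior over model error and marginalise it out of the likelihood.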
PARAMETER ESTIMATION OF THE HYBRID CENSORED LOMAX DISTRIBUTION
Samir Kamel Ashour
2010-12-01
Survival analysis is used in various fields for analyzing data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time-to-event analysis. One of the difficulties that arise in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be measured exactly but partial information is available. Different circumstances can produce different types of censoring. The two most common censoring schemes used in life testing experiments are the Type-I and Type-II censoring schemes. The hybrid censoring scheme is a mixture of the Type-I and Type-II schemes. In this paper we consider the estimation of the parameters of the Lomax distribution based on hybrid censored data. The parameters are estimated by the maximum likelihood and Bayesian methods. The Fisher information matrix has been obtained and can be used for constructing asymptotic confidence intervals.
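A simplified version of censored maximum-likelihood estimation for the Lomax distribution can be sketched as follows, assuming Type-I censoring at a fixed time and a known scale parameter for brevity (the paper treats the full hybrid scheme and both parameters):

```python
import numpy as np
from scipy.optimize import minimize

# Lomax: f(x) = (a/lam) * (1 + x/lam)^-(a+1),  S(x) = (1 + x/lam)^-a.
# Type-I censoring at time c: censored units contribute log S(c).
rng = np.random.default_rng(3)
alpha_true, lam = 3.0, 2.0            # shape; scale assumed known here
n, c = 2000, 4.0
x = lam * (rng.uniform(size=n) ** (-1.0 / alpha_true) - 1.0)  # Lomax draws
observed = np.minimum(x, c)
censored = x > c

def neg_loglik(p):
    a = p[0]
    if a <= 0:
        return np.inf
    logpdf = np.log(a / lam) - (a + 1) * np.log1p(observed / lam)
    logsurv = -a * np.log1p(c / lam)  # log survival at the censoring time
    return -(logpdf[~censored].sum() + censored.sum() * logsurv)

alpha_hat = minimize(neg_loglik, x0=[1.0], method="Nelder-Mead").x[0]
```

The hybrid scheme changes only the stopping rule (and hence which units are censored); the likelihood is assembled from the same density and survival terms.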
Composite likelihood estimation of demographic parameters
Garrigan Daniel
2009-11-01
Background: Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequencing data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results: The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable
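The core composite-likelihood idea, taking the product of per-region marginal likelihoods, can be sketched on toy allele-count data (all values invented; the paper's model and MCMCMC machinery are far richer):

```python
import numpy as np

# Composite likelihood: treat loci as independent and sum per-locus binomial
# log-likelihoods for a derived-allele frequency p. Toy data only.
rng = np.random.default_rng(5)
p_true, n_loci, n_chrom = 0.3, 300, 40
derived = rng.binomial(n_chrom, p_true, n_loci)   # derived counts per locus

def composite_loglik(p):
    # Sum of marginal log-likelihoods == log of the product of marginals.
    return np.sum(derived * np.log(p) + (n_chrom - derived) * np.log(1 - p))

grid = np.linspace(0.01, 0.99, 981)
p_hat = grid[np.argmax([composite_loglik(p) for p in grid])]
```

The product over loci ignores linkage between nearby sites, which is exactly the approximation that makes whole-genome inference tractable; the cost is that the composite likelihood's curvature no longer gives valid standard errors directly.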
An Algorithm for Motion Parameter Direct Estimate
Roberto Caldelli
2004-06-01
Motion estimation in image sequences is undoubtedly one of the most studied research fields, given that motion estimation is a basic tool for disparate applications, ranging from video coding to pattern recognition. This paper presents a new methodology that, by minimizing a specific potential function, directly determines for each image pixel the motion parameters of the object to which the pixel belongs. The approach is based on Markov random field modelling, acting on a first-order neighborhood of each point and on a simple motion model that accounts for rotations and translations. Experimental results on both synthetic (noiseless and noisy) and real-world sequences have been carried out, and they demonstrate the good performance of the adopted technique. Furthermore, a quantitative and qualitative comparison with other well-known approaches has confirmed the quality of the proposed methodology.
Estimation of Model Parameters for Steerable Needles
Park, Wooram; Reed, Kyle B.; Okamura, Allison M.; Chirikjian, Gregory S.
2010-01-01
Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%. PMID:21643451
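The covariance-matching idea can be sketched as follows: repeated insertions give a cloud of tip positions whose sample covariance is then compared with the model's closed-form covariance to fit the noise parameters. The data here are invented Gaussian samples, not the experimental poses.

```python
import numpy as np

# 50 simulated "repeated insertions": tip positions (mm) scattered around a
# nominal target. Scales and the target are invented for illustration.
rng = np.random.default_rng(6)
tips = rng.normal(loc=[0.0, 0.0, 100.0],
                  scale=[1.0, 1.5, 0.5], size=(50, 3))

mean_tip = tips.mean(axis=0)                 # average tip position
cov_tip = np.cov(tips, rowvar=False)         # 3x3 sample covariance
```

In the paper, the analogous empirical covariance (over the full 6-DOF pose, not just position) is matched against a closed-form expression in the model's noise parameters, turning parameter estimation into a covariance-fitting problem.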
Parameter Estimation in Active Plate Structures
Araujo, A. L.; Lopes, H. M. R.; Vaz, M. A. P.
2006-01-01
In this paper two non-destructive methods for elastic and piezoelectric parameter estimation in active plate structures with surface-bonded piezoelectric patches are presented. These methods rely on experimental undamped natural frequencies of free vibration. The first solves the inverse problem through gradient-based optimization techniques, while the second is based on a metamodel of the inverse problem, using artificial neural networks. A numerical higher-order finite element laminated plate model is used in both methods, and results are compared and discussed through a simulated...
Dhatt, Sharmistha
2016-01-01
The reliability of kinetic parameters is crucial in understanding enzyme kinetics within cellular systems. The present study suggests a few cautions that need introspection when estimating parameters like K_M, V_max and K_I using Lineweaver-Burk plots. The quality of IC_50 values also needs a thorough reinvestigation because of their direct link with the K_I and K_M values. Inhibition kinetics under both steady-state and non-steady-state conditions are studied, and errors in the estimated parameters are compared against actual values to settle the question of their adequacy.
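The Lineweaver-Burk estimation the study scrutinizes amounts to a straight-line fit in double-reciprocal coordinates: 1/v = (K_M/V_max)(1/[S]) + 1/V_max. A noise-free sketch with invented values recovers the parameters exactly; with realistic measurement noise, the reciprocal transform inflates errors at low substrate concentrations, which is the kind of caution the study raises.

```python
import numpy as np

# Noise-free Michaelis-Menten rates at a few substrate concentrations.
# Km_true, Vmax_true, and S are invented for illustration.
Km_true, Vmax_true = 2.0, 10.0
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
v = Vmax_true * S / (Km_true + S)

# Lineweaver-Burk: fit a line to (1/S, 1/v), then invert the coefficients.
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax_est = 1.0 / intercept
Km_est = slope * Vmax_est
```

With noisy rates, the points at smallest [S] (largest 1/v) dominate the fit, biasing K_M and V_max; nonlinear fitting of the untransformed Michaelis-Menten equation avoids this distortion.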
Reliable Estimation of Prediction Uncertainty for Physicochemical Property Models.
Proppe, Jonny; Reiher, Markus
2017-07-11
One of the major challenges in computational science is to determine the uncertainty of a virtual measurement, that is, the prediction of an observable based on calculations. As highly accurate first-principles calculations are in general infeasible for most physical systems, one usually resorts to parametric property models of observables, which require calibration by incorporating reference data. The resulting predictions and their uncertainties are sensitive to systematic errors such as inconsistent reference data, parametric model assumptions, or inadequate computational methods. Here, we discuss the calibration of property models in the light of bootstrapping, a sampling method that can be employed for identifying systematic errors and for reliable estimation of the prediction uncertainty. We apply bootstrapping to assess a linear property model linking the 57Fe Mössbauer isomer shift to the contact electron density at the iron nucleus for a diverse set of 44 molecular iron compounds. The contact electron density is calculated with 12 density functionals across Jacob's ladder (PWLDA, BP86, BLYP, PW91, PBE, M06-L, TPSS, B3LYP, B3PW91, PBE0, M06, TPSSh). We provide systematic-error diagnostics and reliable, locally resolved uncertainties for isomer-shift predictions. Pure and hybrid density functionals yield average prediction uncertainties of 0.06-0.08 mm s^-1 and 0.04-0.05 mm s^-1, respectively, the latter being close to the average experimental uncertainty of 0.02 mm s^-1. Furthermore, we show that both the model parameters and the prediction uncertainty depend significantly on the composition and number of reference data points. Accordingly, we suggest that rankings of density functionals based on performance measures (e.g., the squared coefficient of correlation, r^2, or the root-mean-square error, RMSE) should not be inferred from a single data set. This study presents the first statistically rigorous calibration analysis for theoretical M
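The bootstrapping idea is simple to sketch: refit the model on resampled reference sets and read the spread of the fitted parameters as uncertainty. The linear model and data below are invented stand-ins for the isomer-shift calibration.

```python
import numpy as np

# Bootstrap the slope of a linear property model y = m*x + b fitted to 44
# reference points (the count matches the paper; the data do not).
rng = np.random.default_rng(4)
x = np.linspace(0, 1, 44)
y = 0.3 * x + 0.1 + rng.normal(0, 0.02, x.size)

slopes = []
for _ in range(500):
    idx = rng.integers(0, x.size, x.size)     # resample with replacement
    m, b = np.polyfit(x[idx], y[idx], 1)
    slopes.append(m)

slope_mean = np.mean(slopes)
slope_sd = np.std(slopes)                     # bootstrap uncertainty of slope
```

Because each bootstrap replicate changes the composition of the reference set, the spread of `slopes` directly exposes how sensitive the calibration is to which reference compounds were included, which is the paper's central point.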
Objectivity, Reliability, and Validity of Search Engine Count Estimates
Dietmar Janetzko
2008-01-01
Count estimates ("hits") provided by Web search engines have received much attention as a yardstick to measure a variety of phenomena of interest as diverse as, e.g., language statistics, popularity of authors, or similarity between words. Common to these activities is the intention to use Web search engines not only for search but for ad hoc measurement. Using search engine count estimates (SECEs) in this way means that a phenomenon of interest, e.g., the popularity of an author, is conceived of as a measurand, and SECEs are taken to be its quantitative measures. However, the data quality of SECEs has not yet been studied systematically, and concerns have been raised against the use of this kind of data. This article examines the data quality of SECEs focusing on classical goodness criteria, i.e., objectivity, reliability, and validity. The results of a series of studies indicate that, with the exception of Boolean queries that use disjunction or negation, objectivity as well as test-retest reliability and parallel-test reliability of SECEs is good for most types of browsers and search engines examined. Estimation of validity required model development (all-subsets regression), revealing satisfying results when using an explorative approach to feature selection. The findings are discussed in the light of previous objections, and perspectives for using Web search count estimates are delineated.
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
Estimating Infiltration Parameters from Basic Soil Properties
van de Genachte, G.; Mallants, D.; Ramos, J.; Deckers, J. A.; Feyen, J.
1996-05-01
Infiltration data were collected on two rectangular grids with 25 sampling points each. Both experimental grids were located in tropical rain forest (Guyana), the first in an Arenosol area and the second in a Ferralsol field. Four different infiltration models were evaluated based on their performance in describing the infiltration data. The model parameters were estimated using non-linear optimization techniques. The infiltration behaviour in the Ferralsol was equally well described by the equations of Philip, Green-Ampt, Kostiakov and Horton. For the Arenosol, the equations of Philip, Green-Ampt and Horton were significantly better than the Kostiakov model. Basic soil properties such as textural composition (percentage sand, silt and clay), organic carbon content, dry bulk density, porosity, initial soil water content and root content were also determined for each sampling point of the two grids. The fitted infiltration parameters were then estimated based on other soil properties using multiple regression. Prior to the regression analysis, all predictor variables were transformed to normality. The regression analysis was performed using two information levels. The first information level contained only three texture fractions for the Ferralsol (sand, silt and clay) and four fractions for the Arenosol (coarse, medium and fine sand, and silt and clay). At the first information level the regression models explained up to 60% of the variability of some of the infiltration parameters for the Ferralsol field plot. At the second information level the complete textural analysis was used (nine fractions for the Ferralsol and six for the Arenosol). At the second information level a principal components analysis (PCA) was performed prior to the regression analysis to overcome the problem of multicollinearity among the predictor variables. Regression analysis was then carried out using the orthogonally transformed soil properties as the independent variables. Results for
Fast cosmological parameter estimation using neural networks
Auld, T; Hobson, M P; Gull, S F
2006-01-01
We present a method for accelerating the calculation of CMB power spectra, matter power spectra and likelihood functions for use in cosmological parameter estimation. The algorithm, called CosmoNet, is based on training a multilayer perceptron neural network and shares all the advantages of the recently released Pico algorithm of Fendt & Wandelt, but has several additional benefits in terms of simplicity, computational speed, memory requirements and ease of training. We demonstrate the capabilities of CosmoNet by computing CMB power spectra over a box in the parameter space of flat ΛCDM models containing the 3σ WMAP1 confidence region. We also use CosmoNet to compute the WMAP3 likelihood for flat ΛCDM models and show that marginalised posteriors on the derived parameters are very similar to those obtained using CAMB and the WMAP3 code. We find that the average error in the power spectra is typically 2-3% of cosmic variance, and that CosmoNet is ~7 × 10⁴ times faster than CAMB (for flat ...
Cosmological parameter estimation: impact of CMB aberration
Catena, Riccardo
2012-01-01
The peculiar motion of an observer with respect to the CMB rest frame induces an apparent deflection of the observed CMB photons, i.e. aberration, and a shift in their frequency, i.e. Doppler effect. Both effects distort the temperature multipoles a_lm's via a mixing matrix at any l. The common lore when performing a CMB based cosmological parameter estimation is to consider that Doppler affects only the l=1 multipole, and neglect any other corrections. In this paper we reconsider the validity of this assumption, showing that it is actually not robust when sky cuts are included to model CMB foreground contaminations. Assuming a simple fiducial cosmological model with five parameters, we simulated CMB temperature maps of the sky in a WMAP-like and in a Planck-like experiment and added aberration and Doppler effects to the maps. We then analyzed with a MCMC in a Bayesian framework the maps with and without aberration and Doppler effects in order to assess the ability of reconstructing the parameters of the fidu...
Iterative procedure for camera parameters estimation using extrinsic matrix decomposition
Goshin, Yegor V.; Fursov, Vladimir A.
2016-03-01
This paper addresses the problem of 3D scene reconstruction in cases when the extrinsic parameters (rotation and translation) of the camera are unknown. This problem is both important and urgent because the accuracy of the camera parameters significantly influences the resulting 3D model. A common approach is to determine the fundamental matrix from corresponding points on two views of a scene and then to use singular value decomposition for camera projection matrix estimation. However, this common approach is very sensitive to fundamental matrix errors. In this paper we propose a novel approach in which camera parameters are determined directly from the equations of the projective transformation by using corresponding points on the views. The proposed decomposition allows us to use an iterative procedure for determining the parameters of the camera. This procedure is implemented in two steps: the translation determination and the rotation determination. The experimental results of the camera parameters estimation and 3D scene reconstruction demonstrate the reliability of the proposed approach.
Noncoherent sampling technique for communications parameter estimations
Su, Y. T.; Choi, H. J.
1985-01-01
This paper presents a method of noncoherent demodulation of the PSK signal for signal distortion analysis at the RF interface. The received RF signal is downconverted and noncoherently sampled for further off-line processing. Any mismatch in phase and frequency is then compensated for by the software using the estimation techniques to extract the baseband waveform, which is needed in measuring various signal parameters. In this way, various kinds of modulated signals can be treated uniformly, independent of modulation format, and additional distortions introduced by the receiver or the hardware measurement instruments can thus be eliminated. Quantization errors incurred by digital sampling and ensuing software manipulations are analyzed and related numerical results are presented also.
Parameter estimation in LISA Pathfinder operational exercises
Nofrarias, Miquel; Congedo, Giuseppe; Hueller, Mauro; Armano, M; Diaz-Aguilo, M; Grynagier, A; Hewitson, M
2011-01-01
The LISA Pathfinder data analysis team has been developing in the last years the infrastructure and methods required to run the mission during flight operations. These are gathered in the LTPDA toolbox, an object oriented MATLAB toolbox that allows all the data analysis functionalities for the mission, while storing the history of all operations performed to the data, thus easing traceability and reproducibility of the analysis. The parameter estimation methods in the toolbox have been applied recently to data sets generated with the OSE (Off-line Simulations Environment), a detailed LISA Pathfinder non-linear simulator that will serve as a reference simulator during mission operations. These operational exercises aim at testing the on-orbit experiments in a realistic environment in terms of software and time constraints. These simulations, so called operational exercises, are the last verification step before translating these experiments into tele-command sequences for the spacecraft, producing therefore ve...
Experimental design for parameter estimation of gene regulatory networks.
Bernhard Steiert
Systems biology aims for building quantitative models to address unresolved issues in molecular biology. In order to describe the behavior of biological cells adequately, gene regulatory networks (GRNs) are intensively investigated. As the validity of models built for GRNs depends crucially on the kinetic rates, various methods have been developed to estimate these parameters from experimental data. For this purpose, it is favorable to choose the experimental conditions yielding maximal information. However, existing experimental design principles often rely on unfulfilled mathematical assumptions or become computationally demanding with growing model complexity. To solve this problem, we combined advanced methods for parameter and uncertainty estimation with experimental design considerations. As a showcase, we optimized three simulated GRNs in one of the challenges from the Dialogue for Reverse Engineering Assessment and Methods (DREAM). This article presents our approach, which was awarded the best performing procedure at the DREAM6 Estimation of Model Parameters challenge. For fast and reliable parameter estimation, local deterministic optimization of the likelihood was applied. We analyzed identifiability and precision of the estimates by calculating the profile likelihood. Furthermore, the profiles provided a way to uncover a selection of most informative experiments, from which the optimal one was chosen using additional criteria at every step of the design process. In conclusion, we provide a strategy for optimal experimental design and show its successful application on three highly nonlinear dynamic models. Although presented in the context of the GRNs to be inferred for the DREAM6 challenge, the approach is generic and applicable to most types of quantitative models in systems biology and other disciplines.
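The profile-likelihood idea used above can be sketched on a deliberately simple stand-in model (a normal likelihood, not the DREAM6 ODE models): fix the parameter of interest on a grid and maximize the likelihood over the remaining parameters at each grid point. Here the nuisance parameter sigma can even be profiled out analytically; the data values are made up.

```python
import math

# Toy profile likelihood: for a normal model with unknown (mu, sigma),
# the MLE of sigma^2 at each fixed mu is the mean squared deviation,
# which gives the profile log-likelihood of mu in closed form.
def profile_loglik_mu(data, mu):
    n = len(data)
    s2 = sum((x - mu) ** 2 for x in data) / n   # sigma^2 profiled out at this mu
    return -0.5 * n * (math.log(2 * math.pi * s2) + 1)

data = [4.1, 5.0, 5.3, 4.7, 5.9, 4.4]           # hypothetical observations
grid = [3.0 + 0.01 * i for i in range(401)]     # candidate mu values in [3, 7]
prof = [profile_loglik_mu(data, m) for m in grid]
mu_hat = grid[prof.index(max(prof))]            # peaks at the sample mean, 4.9
```

Scanning where the profile drops below its maximum by a chi-square cutoff is what turns this curve into identifiability diagnostics and confidence intervals.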
Energy parameter estimation in solar powered wireless sensor networks
Mousa, Mustafa
2014-02-24
The operation of solar powered wireless sensor networks is associated with numerous challenges. One of the main challenges is the high variability of solar power input and battery capacity, due to factors such as weather, humidity, dust and temperature. In this article, we propose a set of tools that can be implemented onboard high power wireless sensor networks to estimate the battery condition and capacity as well as solar power availability. These parameters are very important to optimize sensing and communications operations and maximize the reliability of the complete system. Experimental results show that the performance of typical Lithium Ion batteries severely degrades outdoors in a matter of weeks or months, and that the availability of solar energy in an urban solar powered wireless sensor network is highly variable, which underlines the need for such power and energy estimation algorithms.
Reliable estimation of orbit errors in spaceborne SAR interferometry
Bähr, H.; Hanssen, R.F.
2012-01-01
An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of
Probabilistic confidence for decisions based on uncertain reliability estimates
Reid, Stuart G.
2013-05-01
Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
Numerical Model based Reliability Estimation of Selective Laser Melting Process
Mohanty, Sankhya; Hattel, Jesper Henri
2014-01-01
Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
Multiple nonlinear parameter estimation using PI feedback control
Lith, van P. F.; Witteveen, H.; Betlem, B.H.L.; Roffel, B.
2001-01-01
Nonlinear parameters often need to be estimated during the building of chemical process models. To accomplish this, many techniques are available. This paper discusses an alternative view to parameter estimation, where the concept of PI feedback control is used to estimate model parameters. The appr
Availability and Reliability of FSO Links Estimated from Visibility
M. Tatarko
2012-06-01
This paper focuses on estimating the availability and reliability of FSO systems. The abbreviation FSO stands for Free Space Optics, a system which allows optical transmission between two fixed points; it is a last-mile communication system. It is an optical communication system, but the propagation medium is air. This last-mile solution does not require expensive optical fiber, and establishing a connection is very simple. But there are some drawbacks which have a bad influence on the quality of service and the availability of the link. A number of phenomena in the atmosphere, such as scattering, absorption and turbulence, cause large variations in received optical power and laser beam attenuation. The influence of absorption and turbulence can be significantly reduced by an appropriate design of the FSO link, but visibility has the main influence on the quality of the optical transmission channel. Thus, in a typical continental area where rain, snow or fog occurs, it is important to know their values. This article describes a device for measuring weather conditions and provides information about the estimation of availability and reliability of FSO links in Slovakia.
Estimation of high altitude Martian dust parameters
Pabari, Jayesh; Bhalodi, Pinali
2016-07-01
Dust devils are known to occur near the Martian surface, mostly during the middle of the Southern hemisphere summer, and they play a vital role in deciding the background dust opacity in the atmosphere. A second source of high altitude Martian dust could be secondary ejecta caused by impacts on the Martian moons, Phobos and Deimos. Also, the surfaces of the moons are charged positively by ultraviolet rays from the Sun and negatively by space plasma currents. Such surface charging may cause fine grains to be levitated, which can easily escape the moons. It is expected that the escaping dust forms dust rings within the orbits of the moons and therefore also around Mars. One more possible source of high altitude Martian dust is interplanetary in nature. Due to the continuous supply of dust from various sources, and also due to a kind of feedback mechanism existing between the ring or tori and the sources, the dust rings or tori can be sustained over a period of time. Recently, very high altitude dust at about 1000 km has been found by the MAVEN mission, and it is expected that the dust may be concentrated at about 150 to 500 km. However, it is a mystery how dust has reached such high altitudes. Estimation of dust parameters beforehand is necessary to design an instrument for the detection of high altitude Martian dust from a future orbiter. In this work, we have studied the dust supply rate responsible primarily for the formation of a dust ring or torus, the lifetime of dust particles around Mars, and the dust number density, as well as the effect of solar radiation pressure and Martian oblateness on dust dynamics. The results presented in this paper may be useful to space scientists for understanding the scenario and designing an orbiter-based instrument to measure the dust surrounding Mars. Further work is underway.
Bayesian approach to decompression sickness model parameter estimation.
Howle, L E; Weber, P W; Nichols, J M
2017-03-01
We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
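The contrast drawn above can be illustrated with a toy Bernoulli stand-in for a probabilistic DCS model (the event counts, the uniform prior, and the grid posterior are hypothetical, not the paper's decompression models): the MLE yields one point value, while the Bayesian route yields a distribution over the parameter, from which a credible interval follows directly.

```python
# Hypothetical data: k DCS events observed in n exposures.
def mle(k, n):
    return k / n                                    # point estimate only

def posterior_grid(k, n, m=2001):
    grid = [i / (m - 1) for i in range(m)]
    # Unnormalized binomial likelihood times a uniform prior on p
    w = [p**k * (1 - p)**(n - k) for p in grid]
    z = sum(w)
    return grid, [wi / z for wi in w]

def credible_interval(grid, post, level=0.95):
    lo_t, hi_t = (1 - level) / 2, 1 - (1 - level) / 2
    c, lo, hi = 0.0, grid[0], grid[-1]
    for p, w in zip(grid, post):
        c += w
        if c < lo_t:
            lo = p
        if c < hi_t:
            hi = p
    return lo, hi

k, n = 3, 100                                       # made-up event counts
grid, post = posterior_grid(k, n)
lo, hi = credible_interval(grid, post)              # 95% credible interval for p
```

The credible interval makes a direct probability statement about the parameter, which is the interpretability advantage the abstract attributes to the Bayesian approach for multi-peaked likelihoods.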
Fatigue reliability based on residual strength model with hybrid uncertain parameters
Jun Wang; Zhi-Ping Qiu
2012-01-01
The aim of this paper is to evaluate the fatigue reliability with hybrid uncertain parameters based on a residual strength model. By solving the non-probabilistic set-based reliability problem and analyzing the reliability with randomness, the fatigue reliability with hybrid parameters can be obtained. The presented hybrid model can adequately consider all uncertainties affecting the fatigue reliability with hybrid uncertain parameters. A comparison among the presented hybrid model, the non-probabilistic set-theoretic model and the conventional random model is made through two typical numerical examples. The results show that the presented hybrid model, which can ensure structural security, is effective and practical.
Helsel, D.R.; Gilliom, R.J.
1986-01-01
Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters.
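A minimal sketch of the log-probability-regression idea (often called regression on order statistics), assuming a single detection limit below the smallest detected value and a lognormal population; the concentration values and censoring count are made up. The detected values are regressed on normal quantiles of their plotting positions, and the fitted line's intercept and slope estimate the log-scale mean and standard deviation.

```python
import math
from statistics import NormalDist, fmean

def ros_lognormal(detects, n_censored):
    n = len(detects) + n_censored
    xs = sorted(detects)
    # Plotting positions for the detected observations only; the censored
    # observations occupy the lowest ranks (single detection limit assumed).
    pps = [(n_censored + i + 1 - 0.375) / (n + 0.25) for i in range(len(xs))]
    q = [NormalDist().inv_cdf(p) for p in pps]      # normal quantiles
    y = [math.log(v) for v in xs]
    qm, ym = fmean(q), fmean(y)
    slope = (sum((a - qm) * (b - ym) for a, b in zip(q, y))
             / sum((a - qm) ** 2 for a in q))
    intercept = ym - slope * qm
    return intercept, slope                         # (log-mean, log-sd) estimates

mu, sigma = ros_lognormal([0.8, 1.1, 1.9, 2.4, 3.6, 5.2], n_censored=4)
```

Moments or percentiles of the full (censored plus detected) distribution can then be computed from the fitted lognormal rather than from the incomplete sample.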
Reliability of fish size estimates obtained from multibeam imaging sonar
Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.
2013-01-01
Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error (sonar estimate – total length)/total length × 100 varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-square mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄ = −8.34, SE = 2.39) and white perch (x̄ = 14.48, SE = 3.99) but not striped bass (x̄ = 3.71, SE = 2.58) or channel catfish (x̄ = 3.97, SE = 5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of
Makram KRIT
2016-01-01
This paper presents several iterative methods based on the Stochastic Expectation-Maximization (EM) methodology for estimating parametric reliability models from random lifetime data. The methodology is related to Maximum Likelihood Estimation (MLE) in the case of missing data. A bathtub-shaped failure intensity formulation of repairable system reliability is presented, and the estimation of its parameters is considered through the EM algorithm. Field failure data from an industrial site are used to fit the model. Finally, the large-sample interval estimation from the literature is discussed, and the actual coverage probabilities of these confidence intervals are examined using the Monte Carlo simulation method.
Ways to increase the reliability of earthquake loss estimations in emergency mode
Frolova, Nina; Bonnin, Jean; Larionov, Valeri; Ugarov, Aleksander
2016-04-01
The lessons of earthquake disasters in Nepal, China, Indonesia, India, Haiti, Turkey and many others show that authorities in charge of emergency response most often lack prompt and reliable information on the disaster itself and its secondary effects. Timely and adequate action just after a strong earthquake can yield significant benefits in saving lives, especially in densely populated areas with a high level of industrialization. The reliability of rough and rapid information provided by "global systems" (i.e., systems operated without regard to where the earthquake has occurred) in emergency mode depends strongly on many factors related to the input data and simulation models used in such systems. The paper analyses the contributions of different factors to the total "error" of fatality estimation in emergency mode. Examples of four strong events in Nepal, Italy, China, Italy allowed us to conclude that the reliability of loss estimations is first of all influenced by the uncertainties in the determination of event parameters (coordinates, magnitude, source depth); this factors' group has the highest rating, as its degree of influence on the reliability of loss estimations is about 50%. The second place is taken by the factors' group responsible for macroseismic field simulation; the degree of influence of this group's errors is about 30%. The last place is taken by the group of factors which describes the built environment distribution and regional vulnerability functions; this group contributes about 20% to the error of loss estimation. Ways to minimize the influence of different factors on the reliability of loss assessment in near real time are proposed. The first is to rate seismological surveys for different zones in an attempt to decrease uncertainties in the input determination of earthquake parameters in emergency mode. The second is to "calibrate" the "global systems" drawing advantage
Orlov A. I.
2015-05-01
According to the new paradigm of applied mathematical statistics, one should prefer non-parametric methods and models. However, in applied statistics we currently use a variety of parametric models. The term "parametric" means that the probabilistic-statistical model is fully described by a finite-dimensional vector of fixed dimension, and this dimension does not depend on the size of the sample. In parametric statistics the estimation problem is to estimate the value of a parameter, unknown to the statistician, by means of the best (in some sense) method. In the statistical problems of standardization and quality control we use a three-parameter family of gamma distributions; in this article, it is considered as an example of a parametric distribution family. We compare the methods for estimating the parameters. The method of moments is universal. However, the estimates obtained with the help of the method of moments have optimal properties only in rare cases. Maximum likelihood estimation (MLE) belongs to the class of the best asymptotically normal estimates. In most cases, analytical solutions do not exist; therefore, to find the MLE it is necessary to apply numerical methods. However, the use of numerical methods creates numerous problems. Convergence of iterative algorithms requires justification. In a number of examples of the analysis of real data, the likelihood function has many local maxima, and because of that natural iterative procedures do not converge. We suggest the use of one-step estimates (OS-estimates). They have asymptotic properties as good as the maximum likelihood estimates, under the same regularity conditions as MLE. One-step estimates are written in the form of explicit formulas. In this article it is proved that the one-step estimates are the best asymptotically normal estimates (under natural conditions). We have found OS-estimates for the gamma distribution and given the results of calculations using data on operating time
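The method-of-moments route mentioned above is easy to make concrete for the two-parameter gamma family (the article's three-parameter family adds a location shift, which this sketch fixes at zero): equate the sample mean and variance to their theoretical counterparts and solve, giving shape = mean²/var and scale = var/mean.

```python
import random
from statistics import fmean, pvariance

# Method-of-moments estimates for a two-parameter gamma distribution.
# For Gamma(shape, scale): mean = shape*scale, var = shape*scale^2, so
# shape = mean^2/var and scale = var/mean.
def gamma_mom(sample):
    m, v = fmean(sample), pvariance(sample)
    return m * m / v, v / m                   # (shape, scale)

random.seed(1)
data = [random.gammavariate(3.0, 2.0) for _ in range(20000)]
shape, scale = gamma_mom(data)                # should land near (3.0, 2.0)
```

Such closed-form moment estimates are also natural starting points for a single Newton step toward the MLE, which is the role the one-step estimates play.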
Linear parameter estimation of rational biokinetic functions
Doeswijk, T.G.; Keesman, K.J.
2009-01-01
For rational biokinetic functions such as the Michaelis-Menten equation, in general, a nonlinear least-squares method is a good estimator. However, a major drawback of a nonlinear least-squares estimator is that it can end up in a local minimum. Rearranging and linearizing rational biokinetic
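One classical instance of such a rearrangement, offered here as an assumed stand-in for what the abstract alludes to, is the Lineweaver-Burk transform of the Michaelis-Menten equation v = Vmax·s/(Km + s): taking reciprocals gives 1/v = (Km/Vmax)·(1/s) + 1/Vmax, which is linear in 1/s and so solvable by ordinary least squares.

```python
from statistics import fmean

# Estimate (Vmax, Km) by linear regression in Lineweaver-Burk coordinates.
def michaelis_menten_linear(s, v):
    x = [1.0 / si for si in s]                # 1/s
    y = [1.0 / vi for vi in v]                # 1/v
    xm, ym = fmean(x), fmean(y)
    slope = (sum((a - xm) * (b - ym) for a, b in zip(x, y))
             / sum((a - xm) ** 2 for a in x))
    intercept = ym - slope * xm
    vmax = 1.0 / intercept                    # intercept = 1/Vmax
    km = slope * vmax                         # slope = Km/Vmax
    return vmax, km

# Noise-free data generated from Vmax = 10, Km = 2 (illustrative values)
s = [0.5, 1.0, 2.0, 4.0, 8.0]
v = [10.0 * si / (2.0 + si) for si in s]
vmax, km = michaelis_menten_linear(s, v)      # recovers (10.0, 2.0)
```

With noisy measurements the reciprocal transform distorts the error structure (small v values dominate), which is why nonlinear least squares is generally the better estimator, local minima aside.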
Estimation of Small s-t Reliabilities in Acyclic Networks
Laumanns, Marco
2007-01-01
In the classical s-t network reliability problem a fixed network G is given including two designated vertices s and t (called terminals). The edges are subject to independent random failure, and the task is to compute the probability that s and t are connected in the resulting network, which is known to be #P-complete. In this paper we are interested in approximating the s-t reliability in case of a directed acyclic original network G. We introduce and analyze a specialized version of the Monte-Carlo algorithm given by Karp and Luby. For the case of uniform edge failure probabilities, we give a worst-case bound on the number of samples that have to be drawn to obtain an epsilon-delta approximation, being sharper than the original upper bound. We also derive a variance reduction of the estimator which reduces the expected number of iterations to perform to achieve the desired accuracy when applied in conjunction with different stopping rules. Initial computational results on two types of random networks (direc...
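A crude direct-sampling estimator (not the specialized Karp-Luby scheme the paper analyzes) illustrates the basic Monte Carlo approach to s-t reliability: sample each edge's survival independently and count how often t remains reachable from s. The two-edge network below is a toy with a known exact answer.

```python
import random
from collections import defaultdict

def st_reliability(edges, p_fail, s, t, trials=20000, seed=7):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        adj = defaultdict(list)
        for u, v in edges:
            if rng.random() >= p_fail:        # edge survives this trial
                adj[u].append(v)
        stack, seen = [s], {s}                # DFS reachability check
        while stack:
            u = stack.pop()
            if u == t:
                hits += 1
                break
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    return hits / trials

# Two parallel s->t edges, each failing with probability 0.5:
# exact reliability = 1 - 0.5^2 = 0.75
est = st_reliability([("s", "t"), ("s", "t")], 0.5, "s", "t")
```

Direct sampling degrades when the connection probability is tiny (most samples are uninformative), which is the inefficiency the Karp-Luby construction and the paper's variance reduction address.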
Predicting reliability parameters of a two-cascade thermoelectric cooling device in the Q0max mode
Khramova L. F.
2009-10-01
Relations for estimating the reliability parameters of two-cascade thermoelectric cooling devices (TEDs) of various designs, depending on their basic parameters in the Q0max mode, are obtained. Results of calculations for temperature differences ΔT = 60, 70, 80 K, taking into account the thermal loading and the temperature dependence of the parameters, and also in the thermal pump mode (at ΔT = 0), are given. The maximal cold productivity of two-cascade TEDs of various designs is estimated for various operating conditions.
Bayesian parameter estimation by continuous homodyne detection
Kiilerich, Alexander Holm; Mølmer, Klaus
2016-09-01
We simulate the process of continuous homodyne detection of the radiative emission from a quantum system, and we investigate how a Bayesian analysis can be employed to determine unknown parameters that govern the system evolution. Measurement backaction quenches the system dynamics at all times and we show that the ensuing transient evolution is more sensitive to system parameters than the steady state of the system. The parameter sensitivity can be quantified by the Fisher information, and we investigate numerically and analytically how the temporal noise correlations in the measurement signal contribute to the ultimate sensitivity limit of homodyne detection.
Mohamed Mahmoud Mohamed
2016-09-01
In this paper we develop approximate Bayes estimators of the parameters, reliability, and hazard rate functions of the Logistic distribution by using Lindley's approximation, based on progressively type-II censored samples. Noninformative prior distributions are used for the parameters. Quadratic, linex, and general entropy loss functions are used. The statistical performances of the Bayes estimates under the quadratic, linex, and general entropy loss functions are compared with those of the maximum likelihood estimates in a simulation study.
Baker Syed; Poskar C; Junker Björn
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of sucrose accumulation in sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. Wh...
张玉卓
1998-01-01
The quantitative evaluation of errors involved in a particular numerical modelling is of prime importance for the effectiveness and reliability of the method. Errors in Distinct Element Modelling are generated mainly from three sources: simplification of the physical model, determination of parameters, and boundary conditions. A measure of errors, representing the degree to which the numerical solution is 'close to the true value', is proposed through fuzzy probability in this paper. The main objective of this paper is to estimate the reliability of the Distinct Element Method in rock engineering practice by varying the parameters and boundary conditions. The accumulation laws of standard errors induced by improper determination of parameters and boundary conditions are discussed in detail. Furthermore, numerical experiments are given to illustrate the estimation of fuzzy reliability. An example shows that the fuzzy reliability falls between 75% and 98% when the relative standard error of the input data is under 10%.
Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)
Baran, Ismet; Tutum, Cem C.; Hattel, Jesper H.
2013-08-01
In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first-order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles with corresponding analyses in the literature. The centerline degree of cure at the exit (CDOCE) being less than a critical value and the maximum composite temperature (Tmax) during the process being greater than a critical temperature are selected as the limit state functions (LSFs) for the FORM. The cumulative distribution functions of the CDOCE and Tmax as well as the correlation coefficients are obtained by using the FORM and the results are compared with corresponding Monte-Carlo simulations (MCS). According to the results obtained from the FORM, an increase in the pulling speed yields an increase in the probability of Tmax being greater than the resin degradation temperature. A similar trend is also seen for the probability of the CDOCE being less than 0.8.
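As a hedged illustration of the FORM idea (not the pultrusion model above): for a linear limit state g = R − S with independent normal capacity R and demand S, FORM reduces to the Hasofer-Lind reliability index and is exact.

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def form_linear(mu_r, sig_r, mu_s, sig_s):
    """FORM for the linear limit state g = R - S with independent
    normal R (capacity) and S (demand). For a linear g the FORM
    result is exact:
        beta = (mu_R - mu_S) / sqrt(sig_R**2 + sig_S**2)
        Pf   = Phi(-beta)
    Toy example only; names are invented for illustration."""
    beta = (mu_r - mu_s) / sqrt(sig_r**2 + sig_s**2)
    return beta, norm_cdf(-beta)

beta, pf = form_linear(mu_r=1.0, sig_r=0.1, mu_s=0.6, sig_s=0.1)
# beta = 0.4 / sqrt(0.02) ≈ 2.83
```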
Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model
Åberg, Andreas; Widd, Anders; Abildskov, Jens;
2016-01-01
A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the parameter estimation of the kinetic parameters, which can be time consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests, or p...
An approach for parameter estimation of biotechnological processes
Ljubenova, V. (Central Lab. of Bioinstrumentation and Automation, Bulgarian Academy of Sciences, Sofia (Bulgaria)); Ignatova, M.
1994-08-01
An approach to the design of parameter estimators for biotechnological processes (BTP) is presented for the case where real-time information about state variables is lacking. It is based on general reaction rate models and measurements of at least one reaction rate. A general parameter estimator of BTP is designed, with the help of which specific rate estimators are synthesized. Stability and convergence of an estimator of the specific growth rate for a class of aerobic batch processes are proved. Its effectiveness is illustrated by simulation results. The proposed on-line parameter estimation approach can be used for the design of on-line variable estimation algorithms for BTP (variable observers of BTP). (orig.)
Is biomass a reliable estimate of plant fitness?
Younginger, Brett S.; Sirová, Dagmara; Cruzan, Mitchell B.; Ballhorn, Daniel J.
2017-01-01
The measurement of fitness is critical to biological research. Although the determination of fitness for some organisms may be relatively straightforward under controlled conditions, it is often a difficult or nearly impossible task in nature. Plants are no exception. The potential for long-distance pollen dispersal, likelihood of multiple reproductive events per inflorescence, varying degrees of reproductive growth in perennials, and asexual reproduction all confound accurate fitness measurements. For these reasons, biomass is frequently used as a proxy for plant fitness. However, the suitability of indirect fitness measurements such as plant size is rarely evaluated. This review outlines the important associations between plant performance, fecundity, and fitness. We make a case for the reliability of biomass as an estimate of fitness when comparing conspecifics of the same age class. We reviewed 170 studies on plant fitness and discuss the metrics commonly employed for fitness estimations. We find that biomass or growth rate are frequently used and often positively associated with fecundity, which in turn suggests greater overall fitness. Our results support the utility of biomass as an appropriate surrogate for fitness under many circumstances, and suggest that additional fitness measures should be reported along with biomass or growth rate whenever possible. PMID:28224055
Parameter and Uncertainty Estimation in Groundwater Modelling
Jensen, Jacob Birk
The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions, and if these are to be made on solid grounds, the uncertainty attached to model results must be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models. Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study. The following two chapters concern calibration and uncertainty estimation. Essential issues relating to calibration are discussed. The classical regression methods are described; however, the main focus is on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The next two chapters describe case studies in which the GLUE methodology...
Method on Estimation of Drug's Penetrated Parameters
刘宇红; 曾衍钧; 许景锋; 张梅
2004-01-01
Transdermal drug delivery systems (TDDS) are a new method of drug delivery. Analysis of a large number of in vitro experiments can lead to a suitable mathematical model describing the process of a drug's penetration through the skin, together with the important parameters related to the characteristics of the drugs. After studying the experimental data, a suitable nonlinear regression model was selected. Using this model, the most important parameter, the penetration coefficient, was computed for 20 drugs. The results support the theory that the skin can be regarded as a single membrane.
A Comparative Study of Distribution System Parameter Estimation Methods
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
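The state-vector augmentation idea can be sketched, under strong simplifying assumptions, for a scalar system whose transition coefficient a is unknown; this toy example is not the paper's distribution-system formulation:

```python
import numpy as np

def augmented_ekf(ys, r=0.05**2, q_a=1e-6):
    """Joint state/parameter estimation by state augmentation, for the
    scalar system x[k+1] = a*x[k] + 1, y[k] = x[k] + v[k]. Augmenting
    the state as z = [x, a] makes the prediction bilinear, so an
    extended Kalman filter (linearized via the Jacobian F) is used.
    Illustrative sketch only; all names are invented."""
    z = np.array([ys[0], 0.5])            # initial guesses for x and a
    P = np.diag([1.0, 1.0])
    Q = np.diag([1e-6, q_a])              # small process noise keeps a adaptable
    H = np.array([[1.0, 0.0]])
    for yk in ys[1:]:
        x, a = z
        z = np.array([a * x + 1.0, a])    # predict augmented state
        F = np.array([[a, x], [0.0, 1.0]])
        P = F @ P @ F.T + Q
        S = (H @ P @ H.T).item() + r      # innovation variance
        K = (P @ H.T / S).ravel()
        z = z + K * (yk - z[0])           # measurement update
        P = (np.eye(2) - np.outer(K, H.ravel())) @ P
    return z                              # [x_estimate, a_estimate]

# Simulate the system (a_true = 0.95, so x settles near 20)
rng = np.random.default_rng(1)
a_true, x = 0.95, 0.0
ys = []
for _ in range(300):
    ys.append(x + 0.05 * rng.standard_normal())
    x = a_true * x + 1.0
z_final = augmented_ekf(np.array(ys))
```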
Postprocessing MPEG based on estimated quantization parameters
Forchhammer, Søren
2009-01-01
Postprocessing of MPEG(-2) video is widely used to attenuate coding artifacts; deblocking in particular, but also deringing, has been addressed. The focus has been on filters where the decoder has access to the code stream and, e.g., utilizes information about the quantization parameter. We consider...
Estimation of motility parameters from trajectory data
Vestergaard, Christian L.; Pedersen, Jonas Nyvold; Mortensen, Kim I.;
2015-01-01
Given a theoretical model for a self-propelled particle or micro-organism, how does one optimally determine the parameters of the model from experimental data in the form of a time-lapse recorded trajectory? For very long trajectories, one has very good statistics, and optimality may matter little. However, for biological micro-organisms, one may not control the duration of recordings, and then optimality can matter. This is especially the case if one is interested in individuality and hence cannot improve statistics by taking population averages over many trajectories. ... similar results may be obtained also for self-propelled particles...
Reliability estimation for single-unit ceramic crown restorations.
Lekesiz, H
2014-09-01
The objective of this study was to evaluate the potential of a survival prediction method for the assessment of ceramic dental restorations. For this purpose, fast-fracture and fatigue reliabilities for 2 bilayer (metal ceramic alloy core veneered with fluorapatite leucite glass-ceramic, d.Sign/d.Sign-67, by Ivoclar; glass-infiltrated alumina core veneered with feldspathic porcelain, VM7/In-Ceram Alumina, by Vita) and 3 monolithic (leucite-reinforced glass-ceramic, Empress, and ProCAD, by Ivoclar; lithium-disilicate glass-ceramic, Empress 2, by Ivoclar) single posterior crown restorations were predicted, and fatigue predictions were compared with the long-term clinical data presented in the literature. Both perfectly bonded and completely debonded cases were analyzed for evaluation of the influence of the adhesive/restoration bonding quality on estimations. Material constants and stress distributions required for predictions were calculated from biaxial tests and finite element analysis, respectively. Based on the predictions, In-Ceram Alumina presents the best fast-fracture resistance, and ProCAD presents a comparable resistance for perfect bonding; however, ProCAD shows a significant reduction of resistance in case of complete debonding. Nevertheless, it is still better than Empress and comparable with Empress 2. In-Ceram Alumina and d.Sign have the highest long-term reliability, with almost 100% survivability even after 10 years. When compared with clinical failure rates reported in the literature, predictions show a promising match with clinical data, and this indicates the soundness of the settings used in the proposed predictions. © International & American Associations for Dental Research.
Minimum Variance Estimation of Yield Parameters of Rubber Tree with ...
2013-03-01
STAMP, an OxMetric modular software system for time series analysis, was used to estimate the yield ... underlying regression techniques. ...
Miller, Brandon; Littenberg, Tyson B; Farr, Ben
2015-01-01
Reliable low-latency gravitational wave parameter estimation is essential to target limited electromagnetic followup facilities toward astrophysically interesting and electromagnetically relevant sources of gravitational waves. In this study, we examine the tradeoff between speed and accuracy. Specifically, we estimate the astrophysical relevance of systematic errors in the posterior parameter distributions derived using a fast-but-approximate waveform model, SpinTaylorF2 (STF2), in parameter estimation with lalinference_mcmc. Though efficient, the STF2 approximation to compact binary inspiral employs approximate kinematics (e.g., a single spin) and an approximate waveform (e.g., frequency domain versus time domain). More broadly, using a large astrophysically-motivated population of generic compact binary merger signals, we report on the effectualness and limitations of this single-spin approximation as a method to infer parameters of generic compact binary sources. For most low-mass compact binary sources, ...
Parameter estimation and error analysis in environmental modeling and computation
Kalmaz, E. E.
1986-01-01
A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.
A Modified Extended Bayesian Method for Parameter Estimation
无
2007-01-01
This paper presents a modified extended Bayesian method for parameter estimation. In this method the mean value of the a priori estimation is taken from the values of the estimated parameters in the previous iteration step. In this way, the parameter covariance matrix can be automatically updated during the estimation procedure, thereby avoiding the selection of an empirical parameter. Because the extended Bayesian method can be regarded as a Tikhonov regularization, this new method is more stable than both the least-squares method and the maximum likelihood method. The validity of the proposed method is illustrated by two examples: one based on simulated data and one based on real engineering data.
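A minimal sketch of the underlying Tikhonov-style step, with the prior mean set to the previous iterate as the abstract describes (the function name and test setup are invented for illustration, not the paper's exact algorithm):

```python
import numpy as np

def bayes_tikhonov_step(A, y, theta_prior, alpha):
    """One regularized least-squares step:
        argmin ||y - A theta||^2 + alpha * ||theta - theta_prior||^2
    with closed form (A'A + alpha I)^{-1} (A'y + alpha theta_prior).
    Iterating with theta_prior taken from the previous estimate loosely
    mimics the modified extended Bayesian idea (hedged sketch only)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n),
                           A.T @ y + alpha * theta_prior)

# Synthetic linear problem: the iteration's fixed point is the LS solution
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = A @ theta_true + 0.01 * rng.standard_normal(50)

theta = np.zeros(3)
for _ in range(20):                      # prior tracks the previous estimate
    theta = bayes_tikhonov_step(A, y, theta, alpha=1.0)
```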
Parameter estimation using compensatory neural networks
M Sinha; P K Kalra; K Kumar
2000-04-01
Proposed here is a new neuron model, a basis for Compensatory Neural Network Architecture (CNNA), which not only reduces the total number of interconnections among neurons but also reduces the total computing time for training. The suggested model has properties of the basic neuron model as well as the higher-order neuron model (multiplicative aggregation function). It can adapt to the standard neuron and the higher-order neuron, as well as a combination of the two. This approach is found to estimate the orbit with accuracy significantly better than the Kalman Filter (KF) and the Feedforward Multilayer Neural Network (FMNN) (also simply referred to as Artificial Neural Network, ANN) with lambda-gamma learning. The typical simulation runs also bring out the superiority of the proposed scheme over the Kalman filter from the standpoint of computation time and the amount of data needed for the desired degree of estimation accuracy for the specific problem of orbit determination.
Muscle parameters estimation based on biplanar radiography.
Dubois, G; Rouch, P; Bonneau, D; Gennisson, J L; Skalli, W
2016-11-01
The evaluation of muscle and joint forces in vivo is still a challenge. Musculo-skeletal (MSK) models are used to compute forces based on movement analysis. Most of them are built from a scaled generic model based on cadaver measurements, which provides a low level of personalization, or from magnetic resonance images, which provide a personalized model in the lying position. This study proposed an original two-step method to obtain a subject-specific musculo-skeletal model in 30 min, based solely on biplanar X-rays. First, the subject-specific 3D geometry of bones and skin envelopes was reconstructed from biplanar X-ray radiography. Then, 2200 corresponding control points were identified between a reference model and the subject-specific X-ray model. Finally, the shape of 21 lower limb muscles was estimated using a non-linear transformation between the control points in order to fit the muscle shape of the reference model to the X-ray model. Twelve musculo-skeletal models were reconstructed and compared to their reference. The muscle volume was not accurately estimated, with a standard deviation (SD) ranging from 10 to 68%. However, this method provided an accurate estimation of the muscle line of action, with an SD of the length difference lower than 2% and a positioning error lower than 20 mm. The moment arm was also well estimated, with SD lower than 15% for most muscles, which was significantly better than the scaled generic model for most muscles. This method opens the way to a quick modeling method for gait analysis based on biplanar radiography.
Bias in parameter estimation of form errors
Zhang, Xiangchao; Zhang, Hao; He, Xiaoying; Xu, Min
2014-09-01
The surface form qualities of precision components are critical to their functionalities. In precision instruments algebraic fitting is usually adopted and the form deviations are assessed in the z direction only, in which case the deviations at steep regions of curved surfaces will be over-weighted, making the fitted results biased and unstable. In this paper the orthogonal distance fitting is performed for curved surfaces and the form errors are measured along the normal vectors of the fitted ideal surfaces. The relative bias of the form error parameters between the vertical assessment and the orthogonal assessment is analytically calculated and represented as a function of the surface slopes. The parameter bias caused by the non-uniformity of data points can be corrected by weighting, i.e. each data point is weighted by the 3D area of the Voronoi cell around its projection point on the fitted surface. Finally, numerical experiments are given to compare different fitting methods and definitions of the form error parameters. The proposed definition is demonstrated to show great superiority in terms of stability and unbiasedness.
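For a straight line, the contrast the abstract draws between vertical (z-only) and orthogonal assessment is the familiar one between ordinary and total least squares; a minimal sketch (all names invented, and the bias difference only appears with noisy data on steep slopes):

```python
import numpy as np

def fit_line_vertical(x, y):
    """Ordinary LS: minimizes vertical residuals y - (a*x + b)."""
    a, b = np.polyfit(x, y, 1)
    return a, b

def fit_line_orthogonal(x, y):
    """Orthogonal-distance (total) LS fit of a line via PCA: the line
    passes through the centroid along the principal eigenvector of
    the 2x2 data covariance matrix."""
    cx, cy = x.mean(), y.mean()
    cov = np.cov(np.vstack([x - cx, y - cy]))
    w, V = np.linalg.eigh(cov)
    dx, dy = V[:, np.argmax(w)]   # direction of largest variance
    a = dy / dx
    return a, cy - a * cx

# Steep noiseless line: both assessments must recover the true slope.
x = np.linspace(0.0, 1.0, 11)
y = 10.0 * x + 1.0
a_v, b_v = fit_line_vertical(x, y)
a_o, b_o = fit_line_orthogonal(x, y)
```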
Parameter Estimations for Signal Type Classification of Korean Disordered Voices
JiYeoun Lee
2015-12-01
Although many signal-typing studies have been published, they are primarily based on manual inspection and experts' judgments of voice samples' acoustic content. Software may be required to automatically and objectively classify pathological voices into the four signal types and to facilitate experts' opinion formation by providing specific signal type determination criteria. This paper suggests the coefficient of normalized skewness variation (CSV), the coefficient of normalized kurtosis variation (CKV), and the bicoherence value (BV) based on the linear predictive coding (LPC) residual to categorize voice signals. Its objective is to improve the performance of acoustic parameters such as jitter, shimmer, and the signal-to-noise ratio (SNR) in signal type classification. In this study, the classification and regression tree (CART) was used to estimate the performances of the acoustic, CSV, CKV, and BV parameters by using the LPC residual. In the investigation of acoustic parameters such as jitter, shimmer, and the SNR, the optimal tree generated by jitter alone yielded an average accuracy of 78.6%. When the acoustic, CSV, CKV, and BV parameters together were used to generate the decision tree, the average accuracy was 82.1%. In this case, the optimal tree formed by jitter and the BV effectively discriminated between the signal types. To perform accurate acoustic pathological voice analysis, signal type quantification is of great interest. Automatic pathological voice classification can be an important objective tool as the signal type can be numerically measured. Future investigations will incorporate multiple pathological data in classification methods to improve their performance and implement more reliable detectors.
Parameter Estimation in Ultrasonic Measurements on Trabecular Bone
Marutyan, Karen R.; Anderson, Christian C.; Wear, Keith A.; Holland, Mark R.; Miller, James G.; Bretthorst, G. Larry
2007-11-01
Ultrasonic tissue characterization has shown promise for clinical diagnosis of diseased bone (e.g., osteoporosis) by establishing correlations between bone ultrasonic characteristics and the state of disease. Porous (trabecular) bone supports propagation of two compressional modes, a fast wave and a slow wave, each of which is characterized by an approximately linear-with-frequency attenuation coefficient and monotonically increasing with frequency phase velocity. Only a single wave, however, is generally apparent in the received signals. The ultrasonic parameters that govern propagation of this single wave appear to be causally inconsistent [1]. Specifically, the attenuation coefficient rises approximately linearly with frequency, but the phase velocity exhibits a decrease with frequency. These inconsistent results are obtained when the data are analyzed under the assumption that the received signal is composed of one wave. The inconsistency disappears if the data are analyzed under the assumption that the signal is composed of superposed fast and slow waves. In the current investigation, Bayesian probability theory is applied to estimate the ultrasonic characteristics underlying the propagation of the fast and slow wave from computer simulations. Our motivation is the assumption that identifying the intrinsic material properties of bone will provide more reliable estimates of bone quality and fracture risk than the apparent properties derived by analyzing the data using a one-mode model.
Reliability Estimations of Control Systems Effected by Several Interference Sources
DengBei-xing; JiangMing-hu; LiXing
2003-01-01
In order to establish the necessary and sufficient condition under which arbitrarily reliable systems cannot be constructed from function elements subject to interference sources, it is very important to expand the set of interference sources possessing this property. In this paper, models of two types of interference source are proposed: interference sources possessing real input vectors, and constant-reliability interference sources. We study the reliability of systems affected by these interference sources, and a lower bound on the reliability is presented. The results show that arbitrarily reliable systems cannot be constructed from elements affected by the above interference sources.
Parameter Estimation for Improving Association Indicators in Binary Logistic Regression
Mahdi Bashiri
2012-02-01
The aim of this paper is the estimation of binary logistic regression parameters by maximizing the log-likelihood function, with improved association indicators. The parameter estimation steps are explained, and then measures of association are introduced and their calculation is analyzed. Moreover, new related indicators based on the membership degree level are proposed. Association measures reflect the number of success responses occurring relative to failures in a certain number of independent Bernoulli experiments. In parameter estimation, the values of existing indicators are not sensitive to the parameter values, whereas the proposed indicators are sensitive to the estimated parameters during the iterative procedure. The innovation of this study is therefore a new association indicator for binary logistic regression that is more sensitive to the estimated parameters when maximizing the log-likelihood in an iterative procedure.
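The maximum-likelihood step that the abstract builds on can be sketched as Newton-Raphson (IRLS) for binary logistic regression; this is a generic illustration, not the paper's indicator construction:

```python
import numpy as np

def logistic_newton(X, y, n_iter=25):
    """Maximum-likelihood estimation of binary logistic regression
    coefficients by Newton-Raphson (IRLS). X includes an intercept
    column; y holds 0/1 responses. Minimal sketch, names invented."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        W = p * (1.0 - p)                     # diagonal IRLS weights
        H = X.T @ (W[:, None] * X)            # negative Hessian of log-lik
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

# Synthetic Bernoulli data with known coefficients
rng = np.random.default_rng(42)
n = 2000
x1 = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x1])
true_beta = np.array([-0.5, 1.5])
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = (rng.random(n) < p).astype(float)
beta_hat = logistic_newton(X, y)
```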
Control and Estimation of Distributed Parameter Systems
Kappel, F; Kunisch, K
1998-01-01
Consisting of 23 refereed contributions, this volume offers a broad and diverse view of current research in control and estimation of partial differential equations. Topics addressed include, but are not limited to - control and stability of hyperbolic systems related to elasticity, linear and nonlinear; - control and identification of nonlinear parabolic systems; - exact and approximate controllability, and observability; - Pontryagin's maximum principle and dynamic programming in PDE; and - numerics pertinent to optimal and suboptimal control problems. This volume is primarily geared toward control theorists seeking information on the latest developments in their area of expertise. It may also serve as a stimulating reader to any researcher who wants to gain an impression of activities at the forefront of a vigorously expanding area in applied mathematics.
Two-state filtering for joint state-parameter estimation
Santitissadeekorn, Naratip
2014-01-01
This paper presents an approach for simultaneous estimation of the state and unknown parameters in a sequential data assimilation framework. The state augmentation technique, in which the state vector is augmented by the model parameters, has been investigated in many previous studies, and some success with this technique has been reported in the case where model parameters are additive. However, many geophysical or climate models contain non-additive parameters, such as those arising from physical parametrization of sub-grid-scale processes, in which case the state augmentation technique may become ineffective, since its inference about parameters from partially observed states, based on the cross covariance between states and parameters, is inadequate if states and parameters are not linearly correlated. In this paper, we propose a two-stage filtering technique that runs particle filtering (PF) to estimate parameters while updating the state estimate using an ensemble Kalman filter (EnKF); these two "sub-filters" ...
Estimating parameters for generalized mass action models with connectivity information
Voit Eberhard O
2009-05-01
Background: Determining the parameters of a mathematical model from quantitative measurements is the main bottleneck of modelling biological systems. Parameter values can be estimated from steady-state data or from dynamic data. The nature of suitable data for these two types of estimation is rather different. For instance, estimations of parameter values in pathway models, such as kinetic orders, rate constants, flux control coefficients or elasticities, from steady-state data are generally based on experiments that measure how a biochemical system responds to small perturbations around the steady state. In contrast, parameter estimation from dynamic data requires time series measurements for all dependent variables. Almost no literature has so far discussed the combined use of both steady-state and transient data for estimating parameter values of biochemical systems. Results: In this study we introduce a constrained optimization method for estimating parameter values of biochemical pathway models using steady-state information and transient measurements. The constraints are derived from the flux connectivity relationships of the system at the steady state. Two case studies demonstrate the estimation results with and without flux connectivity constraints. The unconstrained optimal estimates from dynamic data may fit the experiments well, but they do not necessarily maintain the connectivity relationships. As a consequence, individual fluxes may be misrepresented, which may cause problems in later extrapolations. By contrast, the constrained estimation accounting for flux connectivity information reduces this misrepresentation and thereby yields improved model parameters. Conclusion: The method combines transient metabolic profiles and steady-state information and leads to the formulation of an inverse parameter estimation task as a constrained optimization problem. Parameter estimation and model selection are simultaneously carried out.
Shape parameter estimate for a glottal model without time position
Degottex, Gilles; Roebel, Axel; Rodet, Xavier
2009-01-01
From a recorded speech signal, we propose to estimate a shape parameter of a glottal model without estimating its time position. Indeed, the literature usually proposes to estimate the time position first (e.g., by detecting Glottal Closure Instants). The vocal-tract filter estimate is expressed as a minimum-phase envelope estimation after removing the glottal model and a standard lip radiation model. Since this filter is mainly b...
Statistical methods for cosmological parameter selection and estimation
Liddle, Andrew R
2009-01-01
The estimation of cosmological parameters from precision observables is an important industry with crucial ramifications for particle physics. This article discusses the statistical methods presently used in cosmological data analysis, highlighting the main assumptions and uncertainties. The topics covered are parameter estimation, model selection, multi-model inference, and experimental design, all primarily from a Bayesian perspective.
Estimating a weighted average of stratum-specific parameters.
Brumback, Babette A; Winner, Larry H; Casella, George; Ghosh, Malay; Hall, Allyson; Zhang, Jianyi; Chorba, Lorna; Duncan, Paul
2008-10-30
This article investigates estimators of a weighted average of stratum-specific univariate parameters and compares them in terms of a design-based estimate of mean-squared error (MSE). The research is motivated by a stratified survey sample of Florida Medicaid beneficiaries, in which the parameters are population stratum means and the weights are known and determined by the population sampling frame. Assuming heterogeneous parameters, it is common to estimate the weighted average with the weighted sum of sample stratum means; under homogeneity, one ignores the known weights in favor of precision weighting. Adaptive estimators arise from random effects models for the parameters. We propose adaptive estimators motivated from these random effects models, but we compare their design-based performance. We further propose selecting the tuning parameter to minimize a design-based estimate of mean-squared error. This differs from the model-based approach of selecting the tuning parameter to accurately represent the heterogeneity of stratum means. Our design-based approach effectively downweights strata with small weights in the assessment of homogeneity, which can lead to a smaller MSE. We compare the standard random effects model with identically distributed parameters to a novel alternative, which models the variances of the parameters as inversely proportional to the known weights. We also present theoretical and computational details for estimators based on a general class of random effects models. The methods are applied to estimate average satisfaction with health plan and care among Florida beneficiaries just prior to Medicaid reform.
On Estimating the Parameters of Truncated Trivariate Normal Distributions
M. N. Bhattacharyya
1969-07-01
Maximum likelihood estimates of the parameters of a trivariate normal distribution, with single truncation on two variates, have been derived in this paper. The information matrix has also been given, from which the asymptotic variances and covariances might be obtained for the estimates of the parameters of the restricted variables. Numerical examples have been worked out.
Parameter estimation of hydrologic models using data assimilation
Kaheil, Y. H.
2005-12-01
The uncertainties associated with the modeling of hydrologic systems sometimes demand that data should be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. The paper presents a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined. The SVM model has three parameters. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space by imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimum training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.
Bayesian Parameter Estimation for Heavy-Duty Vehicles
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses a Monte Carlo approach to generate parameter sets which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
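The accept/reject mechanism described above (accepting a proposed parameter set with probability given by the posterior ratio) is standard random-walk Metropolis sampling. A minimal sketch, using a simplified road-load equation with invented constants and estimating only the vehicle mass, might look like this; none of the names or values come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical road-load model: F = m*a + 0.5*rho*CdA*v^2 + m*g*Crr
# (constants are illustrative, not from the paper)
RHO, G, CDA, CRR = 1.2, 9.81, 0.55, 0.007

def road_load(m, v, a):
    return m * a + 0.5 * RHO * CDA * v**2 + m * G * CRR

# Synthetic "logged" data for a 1500 kg vehicle
v = rng.uniform(5, 30, 200)         # speed, m/s
a = rng.uniform(-1, 1, 200)         # acceleration, m/s^2
sigma = 50.0                        # measurement noise, N
F_meas = road_load(1500.0, v, a) + rng.normal(0, sigma, v.size)

def log_post(m):
    # Flat prior on a plausible mass range, Gaussian likelihood
    if not (500.0 < m < 3000.0):
        return -np.inf
    r = F_meas - road_load(m, v, a)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis: accept with the posterior probability ratio
m_cur, lp_cur = 1000.0, log_post(1000.0)
chain = []
for _ in range(5000):
    m_prop = m_cur + rng.normal(0, 20.0)
    lp_prop = log_post(m_prop)
    if np.log(rng.uniform()) < lp_prop - lp_cur:
        m_cur, lp_cur = m_prop, lp_prop
    chain.append(m_cur)

post = np.array(chain[1000:])       # discard burn-in
print(round(post.mean()), round(post.std(), 1))
```

The spread of `post` is exactly the "distribution of possible values" the abstract refers to: a narrow posterior signals a well-constrained parameter, a wide one signals an uninformative drive cycle.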
Parameter and State Estimator for State Space Models
Ruifeng Ding
2014-01-01
This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
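The elimination step described here, substituting the state solution into the output equation so that only inputs and outputs remain and then applying least squares, can be sketched for a second-order system. The coefficients and noise level below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# For a 2nd-order observer-canonical state-space model, eliminating the
# state yields the ARX input-output form:
#   y(k) = -a1*y(k-1) - a2*y(k-2) + b1*u(k-1) + b2*u(k-2)
a1, a2, b1, b2 = -1.5, 0.7, 1.0, 0.5   # illustrative true values
N = 500
u = rng.normal(0, 1, N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = -a1*y[k-1] - a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + rng.normal(0, 0.01)

# Least-squares identification from input-output data only
Phi = np.column_stack([-y[1:N-1], -y[0:N-2], u[1:N-1], u[0:N-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:N], rcond=None)
print(np.round(theta, 3))   # estimates of [a1, a2, b1, b2]
```

With the parameters identified, the states of the canonical form can then be reconstructed from the same input-output records, which is the second half of the paper's algorithm.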
Estimation of ground water hydraulic parameters
Hvilshoej, Soeren
1998-11-01
The main objective was to assess field methods to determine ground water hydraulic parameters and to develop and apply new analysis methods to selected field techniques. A field site in Vejen, Denmark, which previously has been intensively investigated on the basis of a large number of mini slug tests and tracer tests, was chosen for experimental application and evaluation. Particular interest was in analysing partially penetrating pumping tests and a recently proposed single-well dipole test. Three wells were constructed in which partially penetrating pumping tests and multi-level single-well dipole tests were performed. In addition, multi-level slug tests, flow meter tests, gamma-logs, and geologic characterisation of soil samples were carried out. In addition to the three Vejen analyses, data from previously published partially penetrating pumping tests were analysed assuming homogeneous anisotropic aquifer conditions. In the present study methods were developed to analyse partially penetrating pumping tests and multi-level single-well dipole tests based on an inverse numerical model. The obtained horizontal hydraulic conductivities from the partially penetrating pumping tests were in accordance with measurements obtained from multi-level slug tests and mini slug tests. Accordance was also achieved between the anisotropy ratios determined from partially penetrating pumping tests and multi-level single-well dipole tests. It was demonstrated that the partially penetrating pumping test analysed by an inverse numerical model is a very valuable technique that may provide hydraulic information on the storage terms and the vertical distribution of the horizontal and vertical hydraulic conductivity under both confined and unconfined aquifer conditions. (EG) 138 refs.
Dieën, J.H. van; Koppes, L.L.J.; Twisk, J.W.R.
2010-01-01
This study investigated a representative set of 39 parameters characterizing center of pressure movements (sway) in seated balancing, with the aims to determine test-retest reliability, to clarify the interrelations between these parameters, and to determine which parameters were related to balance
Fang-Rong Yan
2014-01-01
Population pharmacokinetic (PPK) models play a pivotal role in quantitative pharmacology studies; they are classically analyzed by nonlinear mixed-effects models based on ordinary differential equations. This paper describes the implementation of stochastic differential equations (SDEs) in population pharmacokinetic models, where parameters are estimated by a novel approximation of the likelihood function. This approximation is constructed by combining the MCMC method used in nonlinear mixed-effects modeling with the extended Kalman filter used in SDE models. The analysis and simulation results show that the performance of the likelihood approximation for the mixed-effects SDE model and the analysis of population pharmacokinetic data is reliable. The results suggest that the proposed method is feasible for the analysis of population pharmacokinetic data.
An Algorithm for Positive Definite Least Square Estimation of Parameters.
1986-05-01
This document presents an algorithm for positive definite least square estimation of parameters. This estimation problem arises from the PILOT...dynamic macro-economic model and is equivalent to an infinite convex quadratic program. It differs from ordinary least square estimations in that the
Parameter Estimation in Linear Regression Models for Longitudinal Contaminated Data
Qian Weimin; Li Yumei
2005-01-01
The parameter estimation and the coefficient of contamination for regression models with repeated measures are studied when the response variables are contaminated by another random variable sequence. Under suitable conditions it is proved that the estimators established in the paper are strongly consistent.
Reliability Estimations of Control Systems Effected by Several Interference Sources
Deng Bei-xing; Jiang Ming-hu; Li Xing
2003-01-01
In order to establish the sufficient and necessary condition under which arbitrarily reliable systems cannot be constructed from function elements subject to interference sources, it is very important to expand the set of interference sources with the above property. In this paper, models of two types of interference sources are proposed: an interference source possessing real input vectors, and a constant reliable interference source. We study the reliability of systems affected by these interference sources, and a lower bound on the reliability is presented. The results show that arbitrarily reliable systems cannot be constructed from elements affected by the above interference sources.
Analytic Estimation of Standard Error and Confidence Interval for Scale Reliability.
Raykov, Tenko
2002-01-01
Proposes an analytic approach to standard error and confidence interval estimation of scale reliability with fixed congeneric measures. The method is based on a generally applicable estimator stability evaluation procedure, the delta method. The approach, which combines widespread point estimation of composite reliability in behavioral scale…
A particle swarm model for estimating reliability and scheduling system maintenance
Puzis, Rami; Shirtz, Dov; Elovici, Yuval
2016-05-01
Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate reliability of systems implemented according to the model view controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need for system-wide maintenance.
Early Stage Software Reliability Estimation with Stochastic Reward Nets
ZHAO Jing; LIU Hong-wei; CUI Gang; YANG Xiao-zong
2005-01-01
This paper presents software reliability modeling issues at the early stage of software development for a fault tolerant software management system. Based on Stochastic Reward Nets, an effective hierarchical-view model of a fault tolerant software management system is put forward, and an approach based on transient performance analysis of the system is adopted. A quantitative approach for software reliability analysis is given. The results show its usefulness for the design and evaluation of early-stage software reliability modeling when failure data is not available.
The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...
Research on the estimation method for Earth rotation parameters
Yao, Yibin
2008-12-01
In this paper, methods of Earth rotation parameter (ERP) estimation based on IGS SINEX files of GPS solutions are discussed in detail. Two different ways to estimate ERP are involved: one is the parameter transformation method, and the other is direct adjustment with restrictive conditions. The IGS daily SINEX files produced by GPS tracking stations can be used to estimate ERP, and the parameter transformation method can simplify the process. The processing results indicate that a systematic error will exist in the ERP estimated from GPS observations alone. Why this distinct systematic error exists in the ERP derived from daily GPS SINEX files, whether it affects the estimation of other parameters, and how large its influence is, require further study.
Parameter Estimation for Generalized Brownian Motion with Autoregressive Increments
Fendick, Kerry
2011-01-01
This paper develops methods for estimating parameters for a generalization of Brownian motion with autoregressive increments called a Brownian ray with drift. We show that a superposition of Brownian rays with drift depends on three types of parameters - a drift coefficient, autoregressive coefficients, and volatility matrix elements, and we introduce methods for estimating each of these types of parameters using multidimensional times series data. We also cover parameter estimation in the contexts of two applications of Brownian rays in the financial sphere: queuing analysis and option valuation. For queuing analysis, we show how samples of queue lengths can be used to estimate the conditional expectation functions for the length of the queue and for increments in its net input and lost potential output. For option valuation, we show how the Black-Scholes-Merton formula depends on the price of the security on which the option is written through estimates not only of its volatility, but also of a coefficient ...
Modeling and Parameter Estimation of a Small Wind Generation System
Carlos A. Ramírez Gómez
2013-11-01
The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data are registered in a weather station located on the Fraternidad Campus at ITM. Wind speed data were applied to a reference model programmed with PSIM software. From that simulation, variables were registered to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, but the estimated model offers higher flexibility than the model programmed in PSIM software.
Parameter estimation of the Huxley cross-bridge muscle model in humans.
Vardy, Alistair N; de Vlugt, Erwin; van der Helm, Frans C T
2012-01-01
The Huxley model has the potential to provide more accurate muscle dynamics while affording a physiological interpretation at cross-bridge level. By perturbing the wrist at different velocities and initial force levels, reliable Huxley model parameters were estimated in humans in vivo using a Huxley muscle-tendon complex. We conclude that these estimates may be used to investigate and monitor changes in microscopic elements of muscle functioning from experiments at joint level.
Catchment tomography - An approach for spatial parameter estimation
Baatz, D.; Kurtz, W.; Hendricks Franssen, H. J.; Vereecken, H.; Kollet, S. J.
2017-09-01
The use of distributed-physically based hydrological models is often hampered by the lack of information on key parameters and their spatial distribution and temporal dynamics. Typically, the estimation of parameter values is impeded by the lack of sufficient observations leading to mathematically underdetermined estimation problems and thus non-uniqueness. Catchment tomography (CT) presents a method to estimate spatially distributed model parameters by resolving the integrated signal of stream runoff in response to precipitation. Basically CT exploits the information content generated by a distributed precipitation signal both in time and space. In a moving transmitter-receiver concept, high resolution, radar based precipitation data are applied with a distributed surface runoff model. Synthetic stream water level observations, serving as receivers, are assimilated with an Ensemble Kalman Filter. With a joint state-parameter update the spatially distributed Manning's roughness coefficient, n, is estimated using the coupled Terrestrial Systems Modelling Platform and the Parallel Data Assimilation Framework (TerrSysMP-PDAF). The sequential data assimilation in combination with the distributed precipitation continuously integrates new information into the model, thus, increasingly constraining the parameter space. With this large amount of data included for the parameter estimation, CT reduces the problem of underdetermined model parameters. The initially biased Manning's coefficients spatially distributed in two and four fixed parameter zones are estimated with errors of less than 3% and 17%, respectively, with only 64 model realizations. It is shown that the distributed precipitation is of major importance for this approach.
Determining Reliability Parameters for a Closed-Cycle Small Combined Heat and Power Plant
Vysokomorny Vladimir S.
2016-01-01
The paper provides numerical values of the reliability parameters for independent power sources within the ambient temperature and output power range corresponding to operation under the climatic conditions of Eastern Siberia and the Far East of the Russian Federation. We have determined the optimal values of the parameters necessary for the reliable operation of small CHP plants (combined heat and power plants) providing electricity for isolated facilities.
Parameter Estimation in Epidemiology: from Simple to Complex Dynamics
Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico
2011-09-01
We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models like multi-strain dynamics to describe the virus-host interaction in dengue fever, even most recently developed parameter estimation techniques, like maximum likelihood iterated filtering, come to their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and deterministic skeleton. The deterministic system on its own already displays complex dynamics up to deterministic chaos and coexistence of multiple attractors.
A simulation of water pollution model parameter estimation
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are obtained via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
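As a rough illustration of the simulate-then-fit idea, the sketch below uses a 1D instantaneous-release diffusion model (a simplified stand-in for the paper's 2D shear-diffusion model), adds simulated sensor noise, and recovers the release mass M and diffusivity D by linear least squares; all values are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative 1D instantaneous-release diffusion model:
#   C(x, t) = M / sqrt(4*pi*D*t) * exp(-x^2 / (4*D*t))
M_true, D_true, t = 100.0, 1.5, 2.0
x = np.linspace(-8, 8, 60)
C = M_true / np.sqrt(4*np.pi*D_true*t) * np.exp(-x**2 / (4*D_true*t))

# Simulated remote-sensed data: multiplicative lognormal sensor noise
C_obs = C * np.exp(rng.normal(0, 0.05, x.size))

# Taking logs makes the model linear in its parameters:
#   ln C = b0 + b1*x^2,  with b1 = -1/(4*D*t),  b0 = ln(M / sqrt(4*pi*D*t))
A = np.column_stack([np.ones_like(x), x**2])
(b0, b1), *_ = np.linalg.lstsq(A, np.log(C_obs), rcond=None)

D_hat = -1.0 / (4 * t * b1)
M_hat = np.exp(b0) * np.sqrt(4*np.pi*D_hat*t)
print(round(M_hat, 1), round(D_hat, 2))
```

Repeating the fit for different sensor spacings or noise levels mimics the paper's study of how resolution and array size affect parameter accuracy.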
UPDATING AND DOWNDATING FOR PARAMETER ESTIMATION WITH BOUNDED UNCERTAIN DATA
Anonymous
2005-01-01
The bounded parameter estimation problem and its solution lead to more meaningful results. Its superior performance is due to the fact that the new method guarantees that the effect of the uncertainties will never be unnecessarily overestimated. We then consider how to update and downdate the bounded parameter estimation problem. When updating and downdating of the SVD are applied to the new problem, special techniques are used to avoid forming U and V explicitly, which increases the algorithm's performance. Because of the link between bounded parameter estimation and the Tikhonov regularization procedure, we point out that our algorithms can also be used to modify the regularization problem.
Denise Güthlin
Due to time and financial constraints, indices are often used to obtain landscape-scale estimates of relative species abundance. Using two different field methods and comparing the results can help to detect possible bias or a non-monotonic relationship between the index and the true abundance, providing more reliable results. We used data obtained from camera traps and feces counts to independently estimate relative abundance of red foxes in the Black Forest, a forested landscape in southern Germany. Applying negative binomial regression models, we identified landscape parameters that influence red fox abundance, which we then used to predict relative red fox abundance. We compared the estimated regression coefficients of the landscape parameters and the predicted abundance of the two methods. Further, we compared the costs and the precision of the two field methods. The predicted relative abundances were similar between the two methods, suggesting that the two indices were closely related to the true abundance of red foxes. For both methods, landscape diversity and edge density best described differences in the indices and had positive estimated effects on the relative fox abundance. In our study the costs of each method were of similar magnitude, but the sample size obtained from the feces counts (262 transects) was larger than the camera trap sample size (88 camera locations). The precision of the camera traps was lower than the precision of the feces counts. The approach we applied can be used as a framework to compare and combine the results of two or more different field methods to estimate abundance and thereby enhance the reliability of the result.
A new estimate of the parameters in linear mixed models
Wang Songgui; Yin Suju
2002-01-01
In linear mixed models, there are two kinds of unknown parameters: one is the fixed effect, the other is the variance component. In this paper, new estimates of these parameters, called the spectral decomposition estimates, are proposed. Some important statistical properties of the new estimates are established, in particular the linearity of the estimates of the fixed effects with many statistical optimalities. The new method is applied to two important models which are used in economics, finance, and mechanical fields. All estimates obtained have good statistical and practical meaning.
Improving Sample Estimate Reliability and Validity with Linked Ego Networks
Lu, Xin
2012-01-01
Respondent-driven sampling (RDS) is currently widely used in public health, especially for the study of hard-to-access populations such as injecting drug users and men who have sex with men. The method works like a snowball sample but can, given that some assumptions are met, generate unbiased population estimates. However, recent studies have shown that traditional RDS estimators are likely to generate large variance and estimation error. To improve the performance of traditional estimators, we propose a method to generate estimates with ego network data collected by RDS. By simulating RDS processes on an empirical human social network with known population characteristics, we have shown that the precision of estimates on the composition of network link types is greatly improved with ego network data. The proposed estimator for population characteristics shows superior advantage over traditional RDS estimators, and most importantly, the new method exhibits strong robustness to the recruitment preference of res...
Parameter estimation and determinability analysis applied to Drosophila gap gene circuits
Jaeger Johannes
2008-09-01
Background: Mathematical modeling of real-life processes often requires the estimation of unknown parameters. Once the parameters are found by means of optimization, it is important to assess the quality of the parameter estimates, especially if parameter values are used to draw biological conclusions from the model. Results: In this paper we describe how the quality of parameter estimates can be analyzed. We apply our methodology to assess parameter determinability for gene circuit models of the gap gene network in early Drosophila embryos. Conclusion: Our analysis shows that none of the parameters of the considered model can be determined individually with reasonable accuracy due to correlations between parameters. Therefore, the model cannot be used as a tool to infer quantitative regulatory weights. On the other hand, our results show that it is still possible to draw reliable qualitative conclusions on the regulatory topology of the gene network. Moreover, it improves previous analyses of the same model by allowing us to identify those interactions for which qualitative conclusions are reliable, and those for which they are ambiguous.
On The Estimation of Survival Function and Parameter Exponential Life Time Distribution
Hadeel S. Al-Kutubi
2009-01-01
Problem statement: The study of survival, reliability, and life time belongs to the same area of research, though the areas of application may differ. In survival analysis one can use several life time distributions; the exponential distribution with mean life time θ is one of them. To estimate this parameter and the survival function we must use estimation procedures with smaller MSE and MPE. Approach: The only statistical theory that combines modeling of inherent uncertainty and statistical uncertainty is Bayesian statistics. Bayes' theorem provides a solution to how to learn from data; it depends on the prior and posterior distributions, and the standard Bayes estimator depends on the Jeffreys prior information. In this study we modified the Jeffreys prior information to obtain a modified Bayes estimator, and then compared it with the standard Bayes estimator and the maximum likelihood estimator to find the best one (smallest MSE and MPE). Results: We derived the Bayesian and maximum likelihood estimators of the scale parameter and survival function. A simulation study was used to compare the estimators, and the Mean Square Error (MSE) and Mean Percentage Error (MPE) of the estimators were computed. Conclusion: The proposed modified Bayes estimator of the parameter and survival function was the best estimator (smallest MSE and MPE) when compared with the standard Bayes and maximum likelihood estimators.
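The MSE comparison described here can be reproduced in miniature. The sketch below is not the paper's modified Bayes estimator; it contrasts the maximum likelihood estimator of θ (the sample mean) with the shrinkage rule sum(x)/(n+1), a Bayes-flavoured alternative that minimizes MSE among scalar multiples of the sample total:

```python
import numpy as np

rng = np.random.default_rng(3)

# Exponential lifetimes with mean theta: MLE of theta is the sample mean.
# The shrinkage rule sum(x)/(n+1) trades a small bias for lower variance.
theta, n, reps = 2.0, 10, 20000
se_mle = 0.0
se_shrink = 0.0
for _ in range(reps):
    x = rng.exponential(theta, n)
    se_mle += (x.mean() - theta) ** 2
    se_shrink += (x.sum() / (n + 1) - theta) ** 2
mse_mle = se_mle / reps          # theoretical value: theta**2 / n = 0.4
mse_shrink = se_shrink / reps    # theoretically smaller than mse_mle
print(round(mse_mle, 3), round(mse_shrink, 3))
```

The same Monte Carlo scaffold, with the shrinkage rule replaced by a particular Bayes estimator, is how MSE/MPE comparisons like the paper's are typically run.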
Parameter Estimation for the Thurstone Case III Model.
Mackay, David B.; Chaiy, Seoil
1982-01-01
The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)
Another Look at the EWMA Control Chart with Estimated Parameters
Saleh, N.A.; Mahmoud, M.A.; Jones-Farmer, L.A.; Zwetsloot, I.; Woodall, W.H.
2015-01-01
The authors assess the in-control performance of the exponentially weighted moving average (EWMA) control chart in terms of the SDARL and percentiles of the ARL distribution when the process parameters are estimated.
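For readers unfamiliar with the chart being assessed, a minimal EWMA sketch with parameters estimated from a Phase I sample follows; the smoothing constant and limit width are common textbook choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# EWMA chart with estimated parameters: mu0 and sigma0 come from a
# Phase I reference sample rather than known in-control values.
lam, L = 0.1, 2.7
phase1 = rng.normal(10.0, 2.0, 100)          # in-control reference data
mu0, sigma0 = phase1.mean(), phase1.std(ddof=1)

def ewma_signals(x, mu0, sigma0, lam=lam, L=L):
    """Return indices where the EWMA statistic exceeds its control limits."""
    # steady-state limit half-width: L * sigma0 * sqrt(lam / (2 - lam))
    h = L * sigma0 * np.sqrt(lam / (2 - lam))
    z, out = mu0, []
    for i, xi in enumerate(x):
        z = lam * xi + (1 - lam) * z
        if abs(z - mu0) > h:
            out.append(i)
    return out

# Phase II stream with a 1.5-sigma mean shift after observation 50
x2 = np.concatenate([rng.normal(10.0, 2.0, 50), rng.normal(13.0, 2.0, 50)])
alarms = ewma_signals(x2, mu0, sigma0)
print(alarms[:3])
```

Because `mu0` and `sigma0` are themselves estimates, the chart's run-length behavior varies from one Phase I sample to the next; that practitioner-to-practitioner variation is what the SDARL metric in the abstract quantifies.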
Spanning Trees and bootstrap reliability estimation in correlation based networks
Tumminello, M; Lillo, F; Micciché, S; Mantegna, R N
2006-01-01
We introduce a new technique to associate a spanning tree to the average linkage cluster analysis. We term this tree the Average Linkage Minimum Spanning Tree. We also introduce a technique to associate a value of reliability to links of correlation based graphs by using bootstrap replicas of data. Both techniques are applied to the portfolio of the 300 most capitalized stocks traded at the New York Stock Exchange during the time period 2001-2003. We show that the Average Linkage Minimum Spanning Tree recognizes economic sectors and sub-sectors as communities in the network slightly better than the Minimum Spanning Tree does. We also show that the average reliability of links in the Minimum Spanning Tree is slightly greater than the average reliability of links in the Average Linkage Minimum Spanning Tree.
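The bootstrap link-reliability idea can be sketched on a toy portfolio: resample the observation dates, rebuild the correlation-based minimum spanning tree for each replica, and record how often each original link reappears. Everything below (the data, the two-sector factor structure, the small Prim's-algorithm helper) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

def mst_edges(dist):
    """Prim's algorithm on a dense distance matrix; returns the MST edge set."""
    n = dist.shape[0]
    in_tree, edges = {0}, set()
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (best is None or dist[i, j] < best[2]):
                    best = (i, j, dist[i, j])
        edges.add((min(best[0], best[1]), max(best[0], best[1])))
        in_tree.add(best[1])
    return edges

def corr_dist(returns):
    c = np.corrcoef(returns, rowvar=False)
    # standard correlation-to-distance map, clipped for numerical safety
    return np.sqrt(np.maximum(2.0 * (1.0 - c), 0.0))

# Toy "portfolio": two 3-stock sectors, each driven by a common factor
T = 250
f1, f2 = rng.normal(0, 1, T), rng.normal(0, 1, T)
R = np.column_stack([f1 + 0.5 * rng.normal(0, 1, T) for _ in range(3)] +
                    [f2 + 0.5 * rng.normal(0, 1, T) for _ in range(3)])

base = mst_edges(corr_dist(R))
# Bootstrap reliability: fraction of replicas in which each MST link reappears
counts = {e: 0 for e in base}
B = 100
for _ in range(B):
    idx = rng.integers(0, T, T)              # resample observation dates
    for e in mst_edges(corr_dist(R[idx])) & base:
        counts[e] += 1
reliability = {e: c / B for e, c in counts.items()}
print(sorted(reliability.values(), reverse=True))
```

In this toy setup the within-sector links tend to recur across replicas (high reliability), while the single bridge between sectors is fragile, mirroring the sector-community behavior the abstract describes.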
Parameter estimation in stochastic rainfall-runoff models
Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur
2006-01-01
the parameters, including the noise terms. The parameter estimation method is a maximum likelihood (ML) method where the likelihood function is evaluated using a Kalman filter technique. The ML method estimates the parameters in a prediction error setting, i.e. the sum of squared prediction errors is minimized. For comparison, the parameters are also estimated by an output error method, where the sum of squared simulation errors is minimized. The former methodology is optimal for short-term prediction whereas the latter is optimal for simulation. Hence, depending on the purpose, it is possible to select whether the parameter values are optimal for simulation or prediction. The data originate from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series...
Kalman filter data assimilation: targeting observations and parameter estimation.
Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex
2014-06-01
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
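The joint state-parameter update can be illustrated with the simpler state-augmentation trick: append the unknown parameter to the state vector and run an extended Kalman filter on the augmented model. The scalar AR(1) system and all values below are illustrative; the paper itself works with an ensemble filter (LETKF), not this EKF sketch:

```python
import numpy as np

rng = np.random.default_rng(6)

# Model: x[k+1] = a*x[k] + w,  y[k] = x[k] + v, with unknown parameter a.
a_true, q, r = 0.9, 0.05, 0.1
T = 500
x = np.zeros(T)
for k in range(1, T):
    x[k] = a_true * x[k-1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), T)

s = np.array([0.0, 0.5])            # augmented state [x, a], a guessed
P = np.diag([1.0, 1.0])
Q = np.diag([q, 1e-6])              # tiny noise on a keeps it adaptable
H = np.array([[1.0, 0.0]])
for k in range(T):
    # Predict through f(x, a) = (a*x, a); F is its Jacobian
    F = np.array([[s[1], s[0]], [0.0, 1.0]])
    s = np.array([s[1] * s[0], s[1]])
    P = F @ P @ F.T + Q
    # Update with measurement y[k]
    S = H @ P @ H.T + r
    K = (P @ H.T) / S
    s = s + (K * (y[k] - s[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(round(s[1], 2))               # estimate of the parameter a
```

The cross-covariance between state and parameter, built up in `P`, is what lets the measurement of x correct the estimate of a; ensemble methods such as the LETKF exploit the same mechanism with sampled covariances.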
An easy and reliable automated method to estimate oxidative stress in the clinical setting.
Vassalle, Cristina
2008-01-01
During the last few years, reliable and simple tests have been proposed to estimate oxidative stress in vivo. Many of them can be easily adapted to automated analyzers, permitting the simultaneous processing of a large number of samples in a greatly reduced time, avoiding manual sample and reagent handling, and reducing sources of variability. In this chapter, a description of protocols for the estimation of reactive oxygen metabolites and of the antioxidant capacity (the d-ROMs and OXY-Adsorbent tests, respectively; Diacron, Grosseto, Italy) using the clinical chemistry analyzer SYNCHRON CX9 PRO (Beckman Coulter, Brea, CA, USA) is reported as an example of an automated procedure that can be applied in the clinical setting. Furthermore, a calculation to compute a global oxidative stress index (Oxidative-INDEX), reflecting both oxidative and antioxidant counterparts, and therefore a potentially more powerful parameter, is also described.
Beef quality parameters estimation using ultrasound and color images
Nunes, Jose Luis; Piquerez, Martín; Pujadas, Leonardo; Armstrong, Eileen; Fernández, Alicia; Lecumberry, Federico
2015-01-01
Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining an automatic quality parameters estimation in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat and backfat thickness or subcutaneous fat. ...
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...
Simultaneous optimal experimental design for in vitro binding parameter estimation.
Ernest, C Steven; Karlsson, Mats O; Hooker, Andrew C
2013-10-01
This work performed simultaneous optimization of in vitro ligand binding studies using an optimal design software package that can incorporate multiple design variables through non-linear mixed effect models and provide a general optimized design regardless of the binding site capacity and relative binding rates for a two-binding-site system. Experimental design optimization was employed with D- and ED-optimality using PopED 2.8, including commonly encountered factors during experimentation (residual error, between-experiment variability and non-specific binding) for in vitro ligand binding experiments: association, dissociation, equilibrium and non-specific binding experiments. Moreover, a method for optimizing several design parameters (ligand concentrations, measurement times and total number of samples) was examined. With changes in relative binding site density and relative binding rates, different measurement times and ligand concentrations were needed to provide precise estimation of binding parameters. However, using optimized design variables, significant reductions in the number of samples provided as good or better precision of the parameter estimates compared to the original extensive sampling design. Employing ED-optimality led to a general experimental design regardless of the relative binding site density and relative binding rates. Precision of the parameter estimates was as good as with the extensive sampling design for most parameters and better for the poorly estimated parameters. Optimized designs for in vitro ligand binding studies provided robust parameter estimation while allowing more efficient and cost-effective experimentation by reducing the measurement times and separate ligand concentrations required and, in some cases, the total number of samples.
A new method for parameter estimation in nonlinear dynamical equations
Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao
2015-01-01
Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM). The method exploits the self-organizing, adaptive and self-learning features of EM, which are inspired by biological natural selection, mutation and genetic inheritance. The performance of the new method is demonstrated by various numerical tests on the classic chaotic Lorenz equations (Lorenz 1963). The results indicate that the new method can be used for fast and effective parameter estimation irrespective of whether some or all parameters of the Lorenz equations are unknown, and that it has a good convergence rate. Noise is inevitable in observational data, so the influence of observational noise on the performance of the presented method has been investigated. The results indicate that strong noise, such as a signal-to-noise ratio (SNR) of 10 dB, has a larger influence on parameter estimation than relatively weak noise. However, the precision of the parameter estimation remains acceptable for relatively weak noise, e.g. an SNR of 20 or 30 dB, indicating that the presented method also has some robustness to noise.
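The evolutionary search described above can be caricatured by a far simpler (1+1) hill climb fitting one parameter of a logistic map (a toy sketch, not the authors' EM algorithm; the map, parameter range, seed and step size are all invented for illustration):

```python
import random

def trajectory(a, x0=0.2, n=20):
    """Logistic-map trajectory x_{k+1} = a*x_k*(1 - x_k), a stand-in model."""
    xs = [x0]
    for _ in range(n):
        xs.append(a * xs[-1] * (1.0 - xs[-1]))
    return xs

def cost(a, observed):
    """Sum of squared deviations between simulated and observed trajectories."""
    return sum((s - o) ** 2 for s, o in zip(trajectory(a), observed))

random.seed(1)
true_a = 2.8                    # non-chaotic regime: smooth cost surface
observed = trajectory(true_a)   # noiseless "data"

best = random.uniform(2.0, 3.0)           # random initial guess
for _ in range(3000):                     # mutate, keep the better candidate
    cand = min(max(best + random.gauss(0.0, 0.05), 2.0), 3.0)
    if cost(cand, observed) < cost(best, observed):
        best = cand

assert abs(best - true_a) < 0.05
```

A real EM scheme would maintain a population with crossover and selection; the mutate-and-keep-if-better loop is only the smallest member of that family.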
Khan, M.A.; Kerkhoff, Hans G.
2013-01-01
Reliability of electronic systems has been thoroughly investigated in the literature, and a number of analytical approaches at the design stage are already available via examination of circuit-level reliability effects based on device-level models. Reliability estimation during the operational life of an...
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Daigle Bernie J
2012-05-01
Background: A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results: We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.
IRT Item Parameters and the Reliability and Validity of Pretest, Posttest, and Gain Scores
May, Kim; Jackson, Tameika S.
2005-01-01
The effect of different combinations of item response theory (IRT) item parameters (item difficulty, item discrimination, and the guessing probability) on the reliability and construct validity (correlation with the latent trait being measured) of pretest, posttest, and gain scores was analytically examined using the 3-parameter logistic (3PL)…
A FAST PARAMETER ESTIMATION ALGORITHM FOR POLYPHASE CODED CW SIGNALS
Li Hong; Qin Yuliang; Wang Hongqiang; Li Yanpeng; Li Xiang
2011-01-01
A fast parameter estimation algorithm is discussed for a polyphase coded continuous waveform (CW) signal in additive white Gaussian noise (AWGN). The proposed estimator is based on the sum of the modulus square of the ambiguity function at different Doppler shifts. An iterative refinement stage is proposed to avoid the effect of the spurious peaks that arise when the summation length of the estimator exceeds the subcode duration. The theoretical variance of the subcode rate estimate is derived. The Monte Carlo simulation results show that the proposed estimator is highly accurate and effective at moderate signal-to-noise ratio (SNR).
Operational Procedures for Optimized Reliability and Component Life Estimator (ORACLE)
1975-12-01
Figure 1. Block diagram of the reliability prediction program routines (cross-hatched boxes), the required inputs, and the total failure rate and TTF. ... in some significant way, describe and/or identify the particular piece of equipment associated with the parts or module. The maintenance of a...
Estimation of Parameters of the Beta-Extreme Value Distribution
Zafar Iqbal
2008-09-01
In this research paper, the Beta-Extreme Value (Type III) distribution developed by Zafar and Aleem (2007) is considered, and its parameters are estimated using the moments of the distribution, both when the parameters 'm' and 'n' are real and when they are integers; the rth moments about the origin are then compared between the two cases. Finally, as a second method, maximum likelihood is used to estimate the unknown parameters of the Beta-Extreme Value (Type III) distribution.
Estimation of shape model parameters for 3D surfaces
Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen;
2008-01-01
Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D s...
In-Flight Parameter Estimation for Multirotor Aerial Vehicles
de Castro Davi Ferreira
2016-01-01
This paper proposes a method for in-flight parameter estimation for Multirotor Aerial Vehicles (MAVs). This task is important because it provides parameters with better accuracy for actual vehicle operation. To simulate flights, a Software-In-the-Loop (SIL) simulation environment is adopted.
Parameter estimation of gravitational wave compact binary coalescences
Haster, Carl-Johan; LIGO Scientific Collaboration Collaboration
2017-01-01
The first detections of gravitational waves from coalescing binary black holes have allowed unprecedented inference on the astrophysical parameters of such binaries. Given recent updates in detector capabilities, gravitational wave model templates and data analysis techniques, in this talk I will describe the prospects of parameter estimation of compact binary coalescences during the second observation run of the LIGO-Virgo collaboration.
Simultaneous estimation of parameters in the bivariate Emax model.
Magnusdottir, Bergrun T; Nyquist, Hans
2015-12-10
In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation.
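A single-response caricature of the Emax fit discussed above can be sketched as a grid-search least-squares estimate (a hedged sketch only: the paper's bivariate model, diabetes data and system estimation approach are not reproduced; doses, grids and true parameter values are invented):

```python
# Emax model: E(C) = Emax * C / (EC50 + C). Fit (Emax, EC50) to synthetic
# noiseless dose-response data by exhaustive grid search on the SSE.

def emax_model(conc, emax_, ec50):
    return emax_ * conc / (ec50 + conc)

doses = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]     # invented dose levels
true_emax, true_ec50 = 10.0, 2.0            # invented true parameters
obs = [emax_model(d, true_emax, true_ec50) for d in doses]

best = None
for e in [8.0 + 0.1 * i for i in range(41)]:       # Emax grid: 8..12
    for k in [1.0 + 0.05 * i for i in range(41)]:  # EC50 grid: 1..3
        sse = sum((emax_model(d, e, k) - o) ** 2 for d, o in zip(doses, obs))
        if best is None or sse < best[0]:
            best = (sse, e, k)

assert abs(best[1] - true_emax) < 0.1
assert abs(best[2] - true_ec50) < 0.05
```

The paper's system estimation additionally shares (co)variance information across the two responses, which is precisely what this one-response sketch cannot show.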
Parameter Estimation for a Computable General Equilibrium Model
Arndt, Channing; Robinson, Sherman; Tarp, Finn
2002-01-01
We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...
Parameter estimation of hidden periodic model in random fields
何书元
1999-01-01
The two-dimensional hidden periodic model is an important model in random fields, used in two-dimensional signal processing, prediction and spectral analysis. A method of estimating the parameters of the model is designed, and the strong consistency of the estimators is proved.
On the Nature of SEM Estimates of ARMA Parameters.
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2002-01-01
Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…
Distribution Line Parameter Estimation Under Consideration of Measurement Tolerances
Prostejovsky, Alexander; Gehrke, Oliver; Kosek, Anna Magdalena
2016-01-01
State estimation and control approaches in electric distribution grids rely on precise electric models that may be inaccurate. This work presents a novel method of estimating distribution line parameters using only root mean square voltage and power measurements under consideration of measurement...
Problems of reliability in earthquake parameters determination from historical records
G. Monachesi
1996-06-01
Earthquake parameter determination from macroseismic data is a procedure whose results can be impaired by many problems related to the quality, number and distribution of data. Such problems are common with ancient, sketchily documented events, but can affect even comparatively recent earthquakes. This paper presents some cases of Central Italy earthquakes for which the determination of epicentral parameters involved problems of reliability. Not all such problems can be completely solved. It is therefore necessary to devise ways of putting the uncertainty of the resulting parameters on record, so that future users can be aware of it.
Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata
2016-08-01
Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, called the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than the EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both state and parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton-CKF (TNF-CKF), a recent robust method which works in the filtering sense.
Bayesian parameter estimation for nonlinear modelling of biological pathways
Ghasemi Omid
2011-12-01
Background: The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of this high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. Results: We used the Runge-Kutta method to transform differential equations to difference equations, assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal-to-noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly...
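The MCMC idea described above can be sketched with a one-parameter random-walk Metropolis sampler for a Hill-type (Michaelis-Menten) curve (a toy illustration, not the authors' pathway model; substrate levels, noise level, proposal width and seed are all invented):

```python
import math
import random

def hill(x, vmax, k):
    """Hill equation with coefficient 1 (Michaelis-Menten form)."""
    return vmax * x / (k + x)

xs = [0.5, 1.0, 2.0, 4.0, 8.0]          # invented substrate levels
true_k, vmax, sigma = 2.0, 1.0, 0.05
random.seed(0)
ys = [hill(x, vmax, true_k) + random.gauss(0.0, sigma) for x in xs]

def log_post(k):
    """Gaussian log-likelihood with a flat prior on k > 0."""
    if k <= 0.0:
        return -math.inf
    sse = sum((y - hill(x, vmax, k)) ** 2 for x, y in zip(xs, ys))
    return -sse / (2.0 * sigma ** 2)

k, samples = 1.0, []
for it in range(5000):                  # random-walk Metropolis
    prop = k + random.gauss(0.0, 0.2)
    if math.log(random.random()) < log_post(prop) - log_post(k):
        k = prop
    if it >= 1000:                      # discard burn-in
        samples.append(k)

post_mean = sum(samples) / len(samples)
assert abs(post_mean - true_k) < 0.75   # posterior mean near the true K
```

The posterior mean lands near the true half-saturation constant; the paper's setting differs mainly in that the likelihood is evaluated through a Runge-Kutta discretization of the full pathway ODEs.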
A New Approach for Parameter Estimation of Mixed Weibull Distribution:A Case Study in Spindle
Dongwei Gu; Zhiqiong Wang; Guixiang Shen; Yingzhi Zhang; Xilu Zhao
2016-01-01
In order to improve the accuracy and efficiency of the graphical method and maximum likelihood estimation (MLE) for mixed Weibull distribution parameter estimation, Graphical-GA, which combines the advantages of the graphical method and the genetic algorithm (GA), is proposed. Firstly, through analysis of the Weibull probability paper (WPP), a mixed Weibull distribution is identified for data fitting. Secondly, observed values of the shape and scale parameters are obtained by the graphical method with least squares, and the parameters of the mixed Weibull distribution are then optimized with the GA. Thirdly, a comparative analysis of the graphical method, the piecewise Weibull and the two-Weibull models shows that the Graphical-GA mixed Weibull approach is the best. Finally, point and interval estimates of the spindle MTBF are obtained based on the mixed Weibull distribution. The results indicate that Graphical-GA is an effective improvement and that the evaluation of the spindle can provide a basis for design and reliability growth.
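The graphical (WPP) step for a single Weibull population can be sketched as a least-squares fit on the linearized CDF, ln(-ln(1-F)) = beta*(ln t - ln eta) (a toy illustration only: the paper's mixture identification and GA refinement are not reproduced, and the shape/scale values are invented):

```python
import math

# Exact Weibull quantiles at median-rank plotting positions, so the
# linearized fit recovers the parameters exactly (no sampling noise).
beta_true, eta_true = 1.8, 100.0
n = 10
fs = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]       # median ranks
ts = [eta_true * (-math.log(1 - f)) ** (1 / beta_true) for f in fs]

# Least squares on y = beta*x - beta*ln(eta), with x = ln t.
xs = [math.log(t) for t in ts]
ys = [math.log(-math.log(1 - f)) for f in fs]
mx, my = sum(xs) / n, sum(ys) / n
beta_hat = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
eta_hat = math.exp(mx - my / beta_hat)

assert abs(beta_hat - beta_true) < 1e-6
assert abs(eta_hat - eta_true) < 1e-3
```

With real failure data the plotted points scatter around the line (and a mixture shows up as a dog-leg on the WPP), which is where the paper's GA refinement takes over.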
Adaptive Unified Biased Estimators of Parameters in Linear Model
Hu Yang; Li-xing Zhu
2004-01-01
To tackle multicollinearity or ill-conditioned design matrices in linear models, adaptive biased estimators such as the time-honored Stein estimator, the ridge and the principal component estimators have been studied intensively. To study when a biased estimator uniformly outperforms the least squares estimator, some sufficient conditions are proposed in the literature. In this paper, we propose a unified framework to formulate a class of adaptive biased estimators. This class includes all existing biased estimators and some new ones. A sufficient condition for outperforming the least squares estimator is proposed. In terms of selecting parameters in the condition, we can obtain all double-type conditions in the literature.
MPEG2 video parameter and no reference PSNR estimation
Li, Huiying; Forchhammer, Søren
2009-01-01
MPEG coded video may be processed for quality assessment, postprocessed to reduce coding artifacts, or transcoded. Utilizing information about the MPEG stream may be useful for these tasks. This paper deals with estimating MPEG parameter information from the decoded video stream without access t... ... DCT coefficients, the PSNR is estimated from the decoded video without reference images. Tests on decoded fixed-rate MPEG2 sequences demonstrate perfect detection rates and good performance of the PSNR estimation. ...
Global parameter estimation methods for stochastic biochemical systems
Poovathingal Suresh
2010-08-01
Background: The importance of stochasticity in cellular processes having low numbers of molecules has resulted in the development of stochastic models such as the chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for end-applications like analyzing system properties (e.g. robustness) or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited, and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well-established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible down to single molecular levels. Thus, the purpose of this work is to develop practical and effective methods for estimating kinetic model parameters in the chemical master equation and other stochastic models from single cell and cell population experimental data. Results: Three parameter estimation methods are proposed based on the maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions. Applications to three practical case studies showed that while the maximum likelihood method can effectively handle low replicate measurements, the density function distance methods, particularly the cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality. Conclusions: The parameter...
Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.
The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.
Algorithms of optimum location of sensors for solidification parameters estimation
J. Mendakiewicz
2010-10-01
The algorithms of optimal sensor location for the estimation of solidification parameters are discussed. These algorithms are based on the Fisher information matrix and the A-optimality or D-optimality criterion. Numerical examples of the planning algorithms are presented, and then, for the optimal positions of sensors, the inverse problems connected with the identification of unknown parameters are solved. The examples presented concern the simultaneous estimation of mould thermophysical parameters (volumetric specific heat and thermal conductivity) and also the components of the volumetric latent heat of cast iron.
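The D-optimality criterion used above can be illustrated on a toy two-parameter linear model (not the paper's solidification model; the model y = a + b*x and the candidate locations are invented). The sensitivity vector at location x is (1, x), and the design maximizing det F pushes sensors to the extreme candidate points:

```python
from itertools import combinations

def fisher_det(locations):
    """det of the 2x2 Fisher matrix F = sum_x (1, x)(1, x)^T (unit noise)."""
    f11 = len(locations)
    f12 = sum(locations)
    f22 = sum(x * x for x in locations)
    return f11 * f22 - f12 * f12

candidates = [0.0, 0.25, 0.5, 0.75, 1.0]   # invented candidate positions
best = max(combinations(candidates, 2), key=fisher_det)

# For two sensors, det F = (x1 - x2)^2, so the extremes are D-optimal.
assert best == (0.0, 1.0)
```

For a nonlinear heat-transfer model the sensitivities depend on the unknown parameters themselves, which is why the paper's planning algorithms matter; the spread-the-sensors intuition survives.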
Estimation of the input parameters in the Feller neuronal model
Ditlevsen, Susanne; Lansky, Petr
2006-06-01
The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FTP) through a constant boundary in the suprathreshold regime are derived, which are used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FTP is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.
Estimating Illumination Parameters Using Spherical Harmonics Coefficients in Frequency Space
XIE Feng; TAO Linmi; XU Guangyou
2007-01-01
An algorithm is presented for estimating the direction and strength of a point light source together with the strength of the ambient illumination. Existing approaches evaluate these illumination parameters directly in the high-dimensional image space, while we estimate the parameters in two steps: first by projecting the image onto an orthogonal linear subspace based on spherical harmonic basis functions, and then by calculating the parameters in the low-dimensional subspace. Test results on the CMU PIE database and Yale Database B show the stability and effectiveness of the method. The resulting illumination information can be used to synthesize more realistic relighting images and to recognize objects under variable illumination.
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
无
2002-01-01
By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least squares methods. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least squares methods.
Parameter Estimation of the Extended Vasiček Model
Rujivan, Sanae
2010-01-01
In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function of a discretely sampled data of the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the appr...
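For the basic (non-extended) Vasiček/Ornstein-Uhlenbeck process dr = kappa*(theta - r)dt + sigma*dW, the exact discrete-time transition is AR(1), so conditional maximum likelihood reduces to a linear regression of r_{t+1} on r_t (a hedged sketch of that standard special case, not the authors' closed-form expansion for the extended model; all parameter values and the seed are invented):

```python
import math
import random

kappa, theta, sigma, dt = 0.5, 0.05, 0.01, 1.0 / 12.0
phi = math.exp(-kappa * dt)
sd = sigma * math.sqrt((1 - phi ** 2) / (2 * kappa))  # exact transition std

random.seed(42)
r = [0.03]
for _ in range(5000):  # simulate via the exact discretization
    r.append(theta + (r[-1] - theta) * phi + random.gauss(0.0, sd))

# Conditional ML = OLS of r_{t+1} on r_t.
x, y = r[:-1], r[1:]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
phi_hat = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
kappa_hat = -math.log(phi_hat) / dt
theta_hat = (my - phi_hat * mx) / (1 - phi_hat)

assert abs(kappa_hat - kappa) < 0.2
assert abs(theta_hat - theta) < 0.01
```

The extended model lets theta (and possibly kappa) vary in time, which is why the paper needs a density expansion instead of this closed-form regression.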
Engineer’s estimate reliability and statistical characteristics of bids
Fariborz M. Tehrani
2016-12-01
The objective of this report is to provide a methodology for examining bids and evaluating the performance of engineer's estimates in capturing the true cost of projects. This study reviews the cost development for transportation projects in addition to two sources of uncertainty in a cost estimate: modeling errors and inherent variability. Sample projects are highway maintenance projects with a similar scope of work, size, and schedule. Statistical analysis of engineering estimates and bids examines the adaptability of statistical models for the sample projects. Further, the variation of engineering cost estimates from inception to implementation is presented and discussed for selected projects. Moreover, the applicability of extreme value theory is assessed for the available data. The results indicate that the performance of the engineer's estimate is best evaluated based on a trimmed average of bids, excluding discordant bids.
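The trimmed-average evaluation recommended above can be sketched as follows (the bid values and trim fraction are invented for illustration):

```python
def trimmed_average(bids, trim_fraction=0.2):
    """Average after discarding the lowest and highest trim_fraction of bids."""
    k = int(len(bids) * trim_fraction)
    ordered = sorted(bids)
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

bids = [420_000, 455_000, 460_000, 470_000, 980_000]  # one discordant high bid
benchmark = trimmed_average(bids)

# The discordant bid is excluded, so the benchmark tracks the central bids.
assert benchmark == (455_000 + 460_000 + 470_000) / 3
assert benchmark < sum(bids) / len(bids)   # plain mean is pulled upward
```

Trimming both tails symmetrically is one simple choice; the report's point is only that discordant bids should not distort the benchmark against which the engineer's estimate is judged.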
Dynamic Load Model using PSO-Based Parameter Estimation
Taoka, Hisao; Matsuki, Junya; Tomoda, Michiya; Hayashi, Yasuhiro; Yamagishi, Yoshio; Kanao, Norikazu
This paper presents a new method for estimating the unknown parameters of a dynamic load model represented as a parallel composite of a constant impedance load and an induction motor behind a series constant reactance. An adequate dynamic load model is essential for evaluating power system stability, and this model can represent the behavior of an actual load given appropriate parameters. The problem, however, is that the model requires many parameters, which are not easy to estimate. We propose an estimation method based on Particle Swarm Optimization (PSO), a non-linear optimization method, using data on voltage, active power and reactive power measured during voltage sags.
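A minimal PSO loop can be sketched on a stand-in objective (a toy two-parameter squared mismatch, not the paper's load model or its V/P/Q measurements; swarm size, inertia and acceleration coefficients are conventional values chosen here for illustration):

```python
import random

target = (0.8, 1.5)          # invented "true" parameter pair
def objective(p):
    """Squared mismatch between candidate and target parameters."""
    return (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2

random.seed(7)
n, w, c1, c2 = 20, 0.7, 1.5, 1.5          # swarm size, inertia, accelerations
pos = [[random.uniform(0, 3), random.uniform(0, 3)] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]               # personal bests
gbest = min(pbest, key=objective)[:]      # global best

for _ in range(200):
    for i in range(n):
        for d in range(2):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
            if objective(pbest[i]) < objective(gbest):
                gbest = pbest[i][:]

assert objective(gbest) < 1e-4
```

In the paper's setting the objective would instead be the mismatch between measured and simulated voltage-sag responses, and the search space would cover all motor and impedance parameters.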
Parameter Estimation of Damped Compound Pendulum Using Bat Algorithm
Saad Mohd Sazli
2016-01-01
In this study, parameter identification of the damped compound pendulum system is carried out using one of the most promising nature-inspired algorithms, the Bat Algorithm (BA). The procedure consists of input-output data collection, ARX model order selection, and parameter estimation using the BA. A PRBS signal is used as the input signal to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the autoregressive with exogenous input (ARX) model. The performance of the model is validated using the mean squared error (MSE) between the actual and predicted output responses. Finally, a comparative study is conducted between BA and a conventional estimation method, least squares (LS). Based on the results obtained, the MSE produced by BA outperforms that of the LS method.
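As a point of reference for the LS baseline used in such comparisons, estimating a first-order ARX model by ordinary least squares can be sketched as below; the model order, PRBS-like input, and true parameter values (a = 0.8, b = 0.3) are illustrative assumptions, not the paper's pendulum system.

```python
import random

# ARX(1,1): y[t] = a*y[t-1] + b*u[t-1] + e[t]; estimate (a, b) by least squares.
def arx_ls(u, y):
    # Regression rows [y[t-1], u[t-1]] with target y[t]
    rows = [(y[t - 1], u[t - 1]) for t in range(1, len(y))]
    tgt = [y[t] for t in range(1, len(y))]
    # Solve the 2x2 normal equations directly
    s11 = sum(r[0] * r[0] for r in rows)
    s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    t1 = sum(r[0] * v for r, v in zip(rows, tgt))
    t2 = sum(r[1] * v for r, v in zip(rows, tgt))
    det = s11 * s22 - s12 * s12
    return (s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det

def mse(u, y, a, b):
    errs = [(y[t] - (a * y[t - 1] + b * u[t - 1])) ** 2 for t in range(1, len(y))]
    return sum(errs) / len(errs)

# Synthetic PRBS-like input and noise-free output from a = 0.8, b = 0.3
rng = random.Random(1)
u = [rng.choice([-1.0, 1.0]) for _ in range(200)]
y = [0.0]
for t in range(1, 200):
    y.append(0.8 * y[t - 1] + 0.3 * u[t - 1])

a_hat, b_hat = arx_ls(u, y)
fit_mse = mse(u, y, a_hat, b_hat)
```

With noise-free data the LS solution is exact; a metaheuristic such as BA competes with this baseline only when the problem is noisy or the criterion is non-convex.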
Iterative methods for distributed parameter estimation in parabolic PDE
Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)
1994-12-31
The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the "forward problem" is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.
Robust Parameter and Signal Estimation in Induction Motors
Børsting, H.
…in nonlinear systems, have been exposed. The main objectives of this project are: analysis and application of theories and methods for robust estimation of parameters in a model structure, obtained from knowledge of the physics of the induction motor; analysis and application of theories and methods …-time approximation. All methods and theories have been evaluated on the basis of experimental results obtained from measurements on a laboratory setup. Standard methods have been modified and combined to obtain usable solutions to the estimation problems. The major results of the work can be summarized as follows: identifiability has been treated in theory and practice in connection with parameter and signal estimation in induction motors; a non-recursive prediction error method has successfully been used to estimate physically related parameters in a continuous-time model of the induction motor; the speed of the rotor has …
Estimating the Reliability of Electronic Parts in High Radiation Fields
Everline, Chester; Clark, Karla; Man, Guy; Rasmussen, Robert; Johnston, Allan; Kohlhase, Charles; Paulos, Todd
2008-01-01
Radiation effects on materials and electronic parts constrain the lifetime of flight systems visiting Europa. Understanding mission lifetime limits is critical to the design and planning of such a mission; therefore, the operational aspects of radiation dose are a mission success issue. To predict and manage mission lifetime in a high-radiation environment, system engineers need capable tools to trade radiation design choices against system design, reliability, and science achievements. Conventional tools and approaches provided past missions with conservative designs, without the ability to predict their lifetime beyond the baseline mission. This paper describes a more systematic approach to understanding spacecraft design margin, allowing better prediction of spacecraft lifetime. This is possible because of newly available statistics on electronic parts radiation effects and an enhanced spacecraft system reliability methodology. The new approach can be used in conjunction with traditional approaches for mission design. This paper describes the fundamentals of the new methodology.
Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics
Guanqun Zhang
2011-11-01
A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications.
Zhang, Chuan-Xin; Yuan, Yuan; Zhang, Hao-Wei; Shuai, Yong; Tan, He-Ping
2016-09-01
Considering features of stellar spectral radiation and sky surveys, we established a computational model for stellar effective temperatures, detected angular parameters, and gray rates. Using known stellar flux data in some bands, we estimated stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization (SPSO). We first verified the reliability of SPSO and then determined reasonable parameters that produced highly accurate estimates under certain gray deviation levels. Finally, we calculated 177,860 stellar effective temperatures and detected angular parameters using data from the Midcourse Space Experiment (MSX) catalog. These derived stellar effective temperatures were accurate when compared to known values from the literature. This research makes full use of catalog data and presents an original technique for studying stellar characteristics. It proposes a novel method for calculating stellar effective temperatures and detecting angular parameters, and provides theoretical and practical data for finding information about radiation in any band.
A software for parameter estimation in dynamic models
M. Yuceer
2008-12-01
A common problem in dynamic systems is to determine the parameters in an equation used to represent experimental data. The goal is to determine the values of model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software packages lack generality, while others are not easy to use. A user-interactive parameter estimation package was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software package (PARES) has been developed in the MATLAB environment. When tested with extensive example problems from the literature, the suggested approach is proven to provide good agreement between predicted and observed data within relatively low computing time and few iterations.
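The integration-based fitting task such a tool addresses can be illustrated in miniature: simulate a candidate model, compare it against measured data by the sum of squared errors, and search over the parameter. The first-order decay model, Euler integration, and crude grid search below are simplifying assumptions for illustration only, not the PARES algorithm.

```python
# Candidate dynamic model: dx/dt = -k * x, integrated with forward Euler.
def simulate(k, x0=1.0, dt=0.01, steps=500):
    x, traj = x0, []
    for _ in range(steps):
        traj.append(x)
        x += dt * (-k * x)
    return traj

# "Measured" data generated with a true rate constant k = 1.5
data = simulate(1.5)

def sse(k):
    # Sum of squared errors between simulation and measurements
    return sum((a - b) ** 2 for a, b in zip(simulate(k), data))

# Crude 1-D search over candidate k values in [0.01, 3.00]
candidates = [round(0.01 * i, 2) for i in range(1, 301)]
k_hat = min(candidates, key=sse)
```

A real tool would replace the grid search with a gradient-based or global optimizer and a proper ODE integrator, but the simulate-compare-optimize structure is the same.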
Hamacher, Daniel; Hamacher, Dennis; Taylor, William R; Singh, Navrag B; Schega, Lutz
2014-04-01
While camera-based motion tracking systems are considered the gold standard for kinematic analysis, these systems are not practical in clinical practice. The collection of gait parameters using inertial sensors is feasible in clinical settings and less expensive, but suffers from drift errors that preclude accurate analysis. The goal of this study was to apply a combination of repetitive sensor-position re-calibration techniques in order to improve the intra-day and inter-day reliability of gait parameters measured with inertial sensors. Kinematic data from nineteen healthy elderly individuals were captured twice within the first day and once on a second day one week later, using inertial sensors fixed on the subject's forefoot during gait. Walking speed, minimum foot clearance (MFC), minimum toe clearance (MTC), stride length, stance time, and swing time, as well as their corresponding measures of variability, were calculated. Intra-day and inter-day differences were rated using intra-class correlation coefficients (ICC(3,1)), as well as the bias and limits of agreement. The results indicate excellent reliability for all intra-day and inter-day mean parameters (ICC: MFC 0.83 to stride length 0.99). While good to excellent reliability was observed for intra-day parameters of variability (ICC: walking speed 0.71 to MTC 0.98), the corresponding inter-day reliability ranged from poor to excellent (ICC: walking speed 0.32 to MTC 0.95). In conclusion, the system is suitable for reliable measurement of mean temporo-spatial parameters and of the variability of MFC and MTC. However, the system's accuracy needs to be improved before the remaining parameters of variability can be reliably collected.
Gang Zhang
2017-07-01
The Sacramento model is widely utilized in hydrological forecasting; its accuracy and performance are primarily determined by the model parameters, indicating the key role of parameter estimation. This paper presents a multi-step parameter estimation method, which divides the parameter estimation of the Sacramento model into three steps and carries out the optimization step by step. We first use the immune clonal selection algorithm (ICSA) to solve the nonlinear objective function of parameter estimation, and compare the parameter calibration results on ideal artificial data with Shuffled Complex Evolution (SCE-UA), a Parallel Genetic Algorithm (PGA), and the Serial Master-slave Swarms Shuffling Evolution Algorithm Based on Particle Swarm Optimization (SMSE-PSO). The comparison shows that ICSA has the best convergence, efficiency, and precision. We then apply ICSA to parameter estimation of the single-step and multi-step Sacramento model and simulate 32 floods in the Dongyang and Tantou river basins for validation. The results of the multi-step method based on ICSA show higher accuracy and a 100% qualified rate, indicating its higher precision and reliability, which has great potential to improve the Sacramento model and hydrological forecasting.
Improved Accuracy of Nonlinear Parameter Estimation with LAV and Interval Arithmetic Methods
Humberto Muñoz
2009-06-01
The reliable solution of nonlinear parameter estimation problems is an important computational problem in many areas of science and engineering, including such applications as real-time optimization. Its goal is to estimate accurate model parameters that provide the best fit to measured data, despite small-scale noise in the data or occasional large-scale measurement errors (outliers). In general, the estimation techniques are based on some kind of least squares or maximum likelihood criterion, and these require the solution of a nonlinear and non-convex optimization problem. Classical solution methods for these problems are local methods, and may not be reliable for finding the global optimum, with no guarantee the best model parameters have been found. Interval arithmetic can be used to compute completely and reliably the global optimum for the nonlinear parameter estimation problem. Finally, experimental results will compare the least squares, l2, and the least absolute value, l1, estimates using interval arithmetic in a chemical engineering application.
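The robustness advantage of least absolute value (l1) over least squares (l2) estimation can be seen already in the simplest case of estimating a constant from data containing one outlier: the l2 estimate is the sample mean, while the l1 estimate is the sample median. The data below are invented for illustration.

```python
def l2_estimate(data):
    # Least squares fit of a constant -> sample mean
    return sum(data) / len(data)

def l1_estimate(data):
    # Least absolute value fit of a constant -> sample median
    s = sorted(data)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# Five clean measurements near 1.0 plus one large-scale outlier
data = [1.0, 1.1, 0.9, 1.05, 0.95, 10.0]
mean_est = l2_estimate(data)     # dragged toward the outlier
median_est = l1_estimate(data)   # barely affected by it
```

The same contrast carries over to regression: the l1 criterion down-weights occasional gross measurement errors that dominate the l2 fit.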
Li Wen XU; Song Gui WANG
2007-01-01
In this paper, the authors address the problem of the minimax estimator of linear combinations of stochastic regression coefficients and parameters in the general normal linear model with random effects. Under a quadratic loss function, the minimax property of linear estimators is investigated. In the class of all estimators, the minimax estimator of estimable functions, which is unique with probability 1, is obtained under a multivariate normal distribution.
Steven E. Stemler
2004-03-01
This article argues that the general practice of describing interrater reliability as a single, unified concept is at best imprecise, and at worst potentially misleading. Rather than representing a single concept, the different statistical methods for computing interrater reliability can be more accurately classified into one of three categories based upon the underlying goals of analysis. The three general categories introduced and described in this paper are: (1) consensus estimates, (2) consistency estimates, and (3) measurement estimates. The assumptions, interpretation, advantages, and disadvantages of estimates from each of these three categories are discussed, along with several popular methods of computing interrater reliability coefficients that fall under the umbrella of consensus, consistency, and measurement estimates. Researchers and practitioners should be aware that different approaches to estimating interrater reliability carry with them different implications for how ratings across multiple judges should be summarized, which may impact the validity of subsequent study results.
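A consensus estimate, the first of the three categories, can be computed directly from paired ratings; percent agreement and Cohen's kappa are the standard examples. The two hypothetical raters below are for illustration only.

```python
def percent_agreement(r1, r2):
    # Fraction of items on which the two raters give the same rating
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    # Agreement corrected for the agreement expected by chance
    n = len(r1)
    po = percent_agreement(r1, r2)
    cats = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

rater1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
po = percent_agreement(rater1, rater2)
kappa = cohens_kappa(rater1, rater2)
```

Consistency estimates (e.g., Pearson or intraclass correlations) and measurement estimates (e.g., from item response or generalizability models) would summarize the same ratings quite differently, which is the article's point.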
Reliability of electromyography parameters during stair deambulation in patellofemoral pain syndrome
Marcella Ferraz Pazzinatto
2015-06-01
Reliability is essential to all aspects of measurement, as it reflects the quality of the information and allows rational conclusions with regard to the data. There have been controversial results regarding the reliability of electromyographic parameters assessed during stair ascent and descent in individuals with patellofemoral pain syndrome (PFPS). Therefore, this study aims to determine the reliability of time- and frequency-domain electromyographic parameters for both gestures in women with PFPS. Thirty-one women with PFPS were selected to participate in this study. Data from the vastus lateralis and vastus medialis were collected during stair deambulation. The selected parameters were: automatic onset and the median frequency of the low-, medium-, and high-frequency bands. Reliability was determined by the intraclass correlation coefficient and the standard error of measurement. The frequency-domain variables showed good reliability, with stair ascent presenting the best rates. On the other hand, onset proved to be inconsistent in all measures. Our findings suggest that stair ascent is more reliable than stair descent for evaluating subjects with PFPS in most cases.
The reliable solution and computation time of variable parameters Logistic model
Pengfei, Wang
2016-01-01
The reliable computation time (RCT, denoted Tc) when applying a double-precision computation of a variable-parameters logistic map (VPLM) is studied. First, using the method proposed, the reliable solutions for the logistic map are obtained. Second, for a time-dependent non-stationary-parameters VPLM, 10000 samples of reliable experiments are constructed, and the mean Tc is computed. The results indicate that for each different initial value, the Tc values of the VPLM are generally different. However, the mean Tc tends to a constant value once the sample number is large enough. The maximum, minimum, and probability distribution function of Tc are also obtained, which can help identify the robustness of applying nonlinear time series theory to forecasting with the VPLM output. In addition, the Tc of fixed-parameter experiments on the logistic map was obtained, and the results suggest that this Tc matches the value predicted by the theoretical formula.
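The idea behind a reliable computation time can be demonstrated with the fixed-parameter logistic map: two double-precision trajectories started a round-off-sized perturbation apart stay close for a finite number of iterations before the difference is amplified to macroscopic size. The thresholds below (perturbation 1e-15, divergence tolerance 1e-3) are illustrative choices, not the paper's definitions.

```python
def logistic_divergence_time(x0, r=4.0, eps=1e-15, tol=1e-3, max_iter=1000):
    """Iterations until two trajectories started eps apart differ by more than tol."""
    a, b = x0, x0 + eps
    for t in range(max_iter):
        if abs(a - b) > tol:
            return t
        a = r * a * (1 - a)   # logistic map x -> r*x*(1-x)
        b = r * b * (1 - b)
    return max_iter

t = logistic_divergence_time(0.3)
```

For r = 4 the Lyapunov exponent is ln 2, so the perturbation roughly doubles per iteration and the divergence time is on the order of a few dozen steps; beyond it, double-precision output of the map can no longer be trusted for forecasting.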
Traveltime approximations and parameter estimation for orthorhombic media
Masmoudi, Nabil
2016-05-30
Building anisotropy models is necessary for seismic modeling and imaging. However, anisotropy estimation is challenging due to the trade-off between inhomogeneity and anisotropy. Luckily, we can estimate the anisotropy parameters if we relate them analytically to traveltimes. Using perturbation theory, we have developed traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2, and Δχ in inhomogeneous background media. The parameter Δχ is related to the Tsvankin-Thomsen notation and ensures easier computation of traveltimes in the background model. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. We have used the Shanks transform to enhance the accuracy of the formulas. A homogeneous-medium simplification of the traveltime expansion provided a nonhyperbolic moveout description of the traveltime that was more accurate than other derived approximations. Moreover, the formulation provides a computationally efficient tool to solve the eikonal equation of an orthorhombic medium, without any constraints on the background model complexity. Although the expansion is based on the factorized representation of the perturbation parameters, smooth variations of these parameters (represented as effective values) provide reasonable results. Thus, this formulation provides a mechanism to estimate the three effective parameters η1, η2, and Δχ. We have derived Dix-type formulas for orthorhombic media to convert the effective parameters to their interval values.
Novel Software Reliability Estimation Model for Altering Paradigms of Software Engineering
Ritika Wason
2012-05-01
A number of different software engineering paradigms, such as Component-Based Software Engineering (CBSE), Autonomic Computing, Service-Oriented Computing (SOC), and Fault-Tolerant Computing, are being researched currently. These paradigms denote a shift away from the currently mainstream object-oriented paradigm and are altering the way we view, design, develop, and exercise software. Though they indicate a major shift in the way we design and code software, we still rely on traditional reliability models for estimating the reliability of such systems. This paper analyzes the underlying characteristics of these paradigms and proposes a novel finite-automata-based reliability model as a suitable model for estimating the reliability of modern, complex, distributed, and critical software applications. We further outline the basic framework for an intelligent, automata-based reliability model that can be used for accurate estimation of the system reliability of software systems at any point in the software life cycle.
Meliopoulos, Sakis [Georgia Inst. of Technology, Atlanta, GA (United States); Cokkinides, George [Georgia Inst. of Technology, Atlanta, GA (United States); Fardanesh, Bruce [New York Power Authority, NY (United States); Hedrington, Clinton [U.S. Virgin Islands Water and Power Authority (WAPA), St. Croix (U.S. Virgin Islands)
2013-12-31
This is the final report for this project, which was performed in the period October 1, 2009 to June 30, 2013. In this project, a fully distributed high-fidelity dynamic state estimator (DSE) that continuously tracks the real-time dynamic model of a wide-area system with update rates better than 60 times per second was achieved. The proposed technology is based on GPS-synchronized measurements but also utilizes data from all available Intelligent Electronic Devices in the system (numerical relays, digital fault recorders, digital meters, etc.). The distributed state estimator provides the real-time model of the system, not only the voltage phasors. The proposed system provides the infrastructure for a variety of applications, including two very important ones: (a) high-fidelity estimation of generating unit parameters and (b) energy-function-based transient stability monitoring of a wide-area electric power system with predictive capability. The dynamic distributed state estimation results are also stored (the storage scheme includes data and the coincident model), enabling automatic reconstruction and "play back" of a system-wide disturbance. This approach enables complete play-back capability with fidelity equal to that of real time, with the advantage of playing back at a user-selected speed. The proposed technologies were developed and tested in the lab during the first 18 months of the project and then demonstrated on two actual systems, the USVI Water and Power Administration system and the New York Power Authority's Blenheim-Gilboa pumped hydro plant, in the last 18 months of the project. The four main thrusts of this project, mentioned above, are extremely important to the industry. The DSE with the achieved update rates (more than 60 times per second) provides a superior solution to the "grid visibility" question. The generator parameter identification method fills an important and practical need of the industry. The "energy function" based…
Software Reliability Estimation of the Reactor Protection System for Lungmen Nuclear Power Station
Wang, Jung Ya; Chou, Hwai Pwu [Tsing Hua National University, Hsinchu (China)
2014-08-15
In this paper, a software reliability estimation method is applied to estimate the software reliability of the reactor protection system (RPS) for the Lungmen ABWR. In order to estimate the software failure probability, a flow network model of the software is constructed. The total number of executions and the execution time of each software statement are obtained, from which the reliability of each statement is derived. During testing, a one-time test scenario follows a Bernoulli distribution and multiple-test scenarios follow a binomial distribution. The software reliability of the digital trip module (DTM) and the trip logic unit (TLU) of the RPS of the Lungmen ABWR can then be estimated. The results show that the RPS software has good reliability.
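The statement-level bookkeeping described above can be sketched as follows: each statement's reliability is estimated from its Bernoulli/binomial test outcomes, and the reliability of an execution path is the product over its statements, under an assumed independence of statement failures. The counts below are hypothetical, not Lungmen RPS test data.

```python
def statement_reliability(successes, trials):
    # Point estimate of per-execution success probability from binomial test data
    return successes / trials

# Hypothetical per-statement test results along one execution path:
# (successful executions, total executions)
results = [(1000, 1000), (999, 1000), (998, 1000)]

path_reliability = 1.0
for s, n in results:
    path_reliability *= statement_reliability(s, n)
```

A flow-network model generalizes this by weighting paths by how often they are exercised, so heavily executed statements dominate the system-level estimate.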
Adaptive distributed parameter and input estimation in linear parabolic PDEs
Mechhoud, Sarra
2016-01-01
In this paper, we discuss the on-line estimation of the distributed source term, diffusion, and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions for the simultaneous estimation of the input and the parameters are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. Parameter convergence depends on the plant-signal richness assumption, whereas state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on a tokamak plasma heat-transport model using simulated data.
Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model
Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami
2017-06-01
A regression model represents the relationship between independent variables and a dependent variable. In logistic regression, the dependent variable is categorical and the model is used to calculate the odds of each category. When the categories of the dependent variable are ordered, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation is needed to determine population values from a sample. The purpose of this research is the parameter estimation of the GWOLR model using R software. The estimation uses data on the number of dengue fever patients in Semarang City, with 144 villages in Semarang City as the observation units. The results give a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.
Interval Estimations of the Two-Parameter Exponential Distribution
Lai Jiang
2012-01-01
In applied work, the two-parameter exponential distribution gives useful representations of many physical situations. Confidence intervals for the scale parameter and predictive intervals for a future independent observation have been studied by many, including Petropoulos (2011) and Lawless (1977), respectively. However, interval estimates for the threshold parameter have not been widely examined in the statistical literature. The aim of this paper is, first, to obtain the exact significance function of the scale parameter by renormalizing the p∗-formula. Then the approximate Studentization method is applied to obtain the significance function of the threshold parameter. Finally, a predictive density function of the two-parameter exponential distribution is derived. A real-life data set is used to show the implementation of the method. Simulation studies are then carried out to illustrate the accuracy of the proposed methods.
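For context, the point estimates around which such intervals are built have closed forms for the two-parameter exponential distribution with density f(x) = (1/θ) exp(-(x - μ)/θ) for x ≥ μ: the maximum likelihood estimate of the threshold μ is the sample minimum, and that of the scale θ is the mean excess over the minimum. The small data set below is invented for illustration.

```python
def two_param_exp_mle(x):
    # MLEs: threshold = sample minimum, scale = mean excess over the minimum
    mu_hat = min(x)
    theta_hat = sum(xi - mu_hat for xi in x) / len(x)
    return mu_hat, theta_hat

# Hypothetical observations
x = [2.0, 3.0, 5.0, 2.5]
mu_hat, theta_hat = two_param_exp_mle(x)
```

The paper's contribution is the interval machinery around these point estimates (significance functions via the p∗-formula and approximate Studentization), which the closed-form MLEs alone do not provide.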
Baker Syed
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of sucrose accumulation in sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method, which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison.
Sedaqatvand, Ramin; Nasr Esfahany, Mohsen; Behzad, Tayebeh; Mohseni, Madjid; Mardanpour, Mohammad Mahdi
2013-10-01
In this study, for the first time, the conduction-based model is extended and then combined with a genetic algorithm to estimate the design parameters of an MFC treating dairy wastewater. The optimized parameters are then validated. The estimated half-saturation potential of -0.13 V (vs. SHE) is in good agreement, while the biofilm conductivity of 8.76 × 10⁻⁴ mS cm⁻¹ is three orders of magnitude lower than that previously reported for pure-culture biofilm. Simulations show that the ohmic and concentration overpotentials contribute almost equally to the cell voltage drop, with the concentration film and the biofilm conductivity comprising the main resistances, respectively. Thus, polarization analysis and determination of the controlling steps will be possible through the developed extension. This study introduces a reliable method to estimate the design parameters of a particular MFC and to characterize it.
Parameter Estimation of Damped Compound Pendulum Differential Evolution Algorithm
Saad Mohd Sazli
2016-01-01
This paper presents the parameter identification of a damped compound pendulum using the differential evolution algorithm. The procedure used to achieve the parameter identification of the experimental system consists of input-output data collection, ARX model order selection, and parameter estimation using the conventional least squares (LS) method and the differential evolution (DE) algorithm. A PRBS signal is used as the input signal to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the ARX model. The residual error between the actual and predicted output responses of the models is validated using the mean squared error (MSE). Analysis showed that the MSE value for LS is 0.0026 and the MSE value for DE is 3.6601 × 10⁻⁵. Based on the results obtained, DE has a lower MSE than the LS method.
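A minimal differential evolution loop of the kind used here can be sketched as below; it minimizes a generic loss with the standard rand/1/bin scheme. The population size, F, CR, and the quadratic test loss are illustrative assumptions, not the paper's settings.

```python
import random

def de_minimize(loss, bounds, pop=20, iters=150, F=0.8, CR=0.9, seed=0):
    """Minimal rand/1/bin differential evolution."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    fit = [loss(x) for x in X]
    for _ in range(iters):
        for i in range(pop):
            # Pick three distinct donors different from the target
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            # Binomial crossover between target and mutant X[a] + F*(X[b]-X[c])
            trial = [X[i][d] if rng.random() > CR
                     else X[a][d] + F * (X[b][d] - X[c][d])
                     for d in range(dim)]
            jrand = rng.randrange(dim)          # force at least one mutated gene
            trial[jrand] = X[a][jrand] + F * (X[b][jrand] - X[c][jrand])
            f = loss(trial)
            if f < fit[i]:                      # greedy selection
                X[i], fit[i] = trial, f
    best = min(range(pop), key=lambda i: fit[i])
    return X[best], fit[best]

# Illustrative 2-D quadratic loss in place of an ARX prediction-error criterion
loss = lambda v: sum(x * x for x in v)
best, best_val = de_minimize(loss, bounds=[(-5, 5), (-5, 5)])
```

In the pendulum study, `loss` would be the MSE between the measured position signal and the ARX model's predicted output for a candidate parameter vector.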
Towards predictive food process models: A protocol for parameter estimation.
Vilas, Carlos; Arias-Méndez, Ana; Garcia, Miriam R; Alonso, Antonio A; Balsa-Canto, E
2016-05-31
Mathematical models, in particular physics-based models, are essential tools for food product and process design, optimization, and control. The success of mathematical models relies on their predictive capabilities. However, describing physical, chemical, and biological changes in food processing requires the values of some, typically unknown, parameters. Therefore, parameter estimation from experimental data is critical to achieving the desired model predictive properties. This work takes a fresh look at the parameter estimation (or identification) problem in food process modeling. First, we examine common pitfalls such as lack of identifiability and multimodality. Second, we present the theoretical background of a parameter identification protocol intended to deal with those challenges. Finally, we illustrate the performance of the proposed protocol with an example related to the thermal processing of packaged foods.
Estimation of distances to stars with stellar parameters from LAMOST
Carlin, Jeffrey L; Newberg, Heidi Jo; Beers, Timothy C; Chen, Li; Deng, Licai; Guhathakurta, Puragra; Hou, Jinliang; Hou, Yonghui; Lepine, Sebastien; Li, Guangwei; Luo, A-Li; Smith, Martin C; Wu, Yue; Yang, Ming; Yanny, Brian; Zhang, Haotong; Zheng, Zheng
2015-01-01
We present a method to estimate distances to stars with spectroscopically derived stellar parameters. The technique is a Bayesian approach with likelihood estimated via comparison of measured parameters to a grid of stellar isochrones, and returns a posterior probability density function for each star's absolute magnitude. This technique is tailored specifically to data from the Large Sky Area Multi-object Fiber Spectroscopic Telescope (LAMOST) survey. Because LAMOST obtains roughly 3000 stellar spectra simultaneously within each ~5-degree diameter "plate" that is observed, we can use the stellar parameters of the observed stars to account for the stellar luminosity function and target selection effects. This removes biasing assumptions about the underlying populations, both due to predictions of the luminosity function from stellar evolution modeling, and from Galactic models of stellar populations along each line of sight. Using calibration data of stars with known distances and stellar parameters, we show ...
Parameter Estimation of Photovoltaic Models via Cuckoo Search
Jieming Ma
2013-01-01
Since conventional methods are incapable of estimating the parameters of photovoltaic (PV) models with high accuracy, bioinspired algorithms have attracted significant attention in the last decade. Cuckoo Search (CS) was conceived by drawing on the brood-parasitic behavior of some cuckoo species in combination with Lévy flight behavior. In this paper, a CS-based parameter estimation method is proposed to extract the parameters of single-diode models for commercial PV generators. Simulation results and experimental data show that the CS algorithm is capable of obtaining all the parameters with extremely high accuracy, indicated by a low root-mean-squared error (RMSE) value. The proposed method outperforms the other algorithms applied in this study.
Accurate parameter estimation for unbalanced three-phase system.
Chen, Yuan; So, Hing Cheung
2014-01-01
The smart grid is an intelligent power generation and control framework for modern electricity networks, in which the unbalanced three-phase power system is a commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, a nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation than the complex least mean square (CLMS) and augmented CLMS algorithms.
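As a rough illustration of the nonlinear least-squares idea above, the following sketch fits the amplitude, frequency, and phase of a noisy sinusoid. The paper implements a Newton-Raphson scheme; this sketch substitutes SciPy's generic `least_squares` solver, and all signal values are synthetic assumptions, not three-phase grid data.

```python
# NLS fit of a single sinusoid: x[n] = A*cos(2*pi*f*t[n] + phi) + noise.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
fs = 1000.0                    # sampling rate in Hz (assumed)
t = np.arange(500) / fs
A, f, phi = 1.0, 50.2, 0.3     # "true" parameters (assumed)
x = A*np.cos(2*np.pi*f*t + phi) + 0.05*rng.standard_normal(t.size)

def residuals(p):
    a, fr, ph = p
    return a*np.cos(2*np.pi*fr*t + ph) - x

# Start near the nominal grid frequency; the cost surface is oscillatory
# in frequency, so a reasonable initial guess matters.
sol = least_squares(residuals, x0=[0.8, 50.0, 0.0])
A_hat, f_hat, phi_hat = sol.x
```

A Newton-Raphson implementation would iterate on the same residual vector using its analytic Jacobian.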
Parameter Estimation of the Extended Vasiček Model
Sanae RUJIVAN
2010-01-01
In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function for discretely sampled data from the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the approximate log-likelihood function. Convergence of the AMLEs to the true maximum likelihood estimators is obtained by increasing the number of terms in the expansions with a small time step size.
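The maximum likelihood approach can be illustrated on the basic (non-extended) Vasiček model, whose transition density is exactly Gaussian: conditional MLE then reduces to ordinary least squares on the implied AR(1) recursion. The parameter values, time step, and sample size below are illustrative assumptions.

```python
# Vasicek: dr = kappa*(theta - r)dt + sigma dW; exact discretization gives
# r[t+1] = theta + (r[t]-theta)*phi + eps, phi = exp(-kappa*dt),
# Var(eps) = sigma^2*(1-phi^2)/(2*kappa).
import numpy as np

rng = np.random.default_rng(2)
kappa, theta, sigma, dt = 2.0, 0.05, 0.02, 1.0/12   # assumed values
n = 5000
phi = np.exp(-kappa*dt)
sd = sigma*np.sqrt((1 - phi**2)/(2*kappa))
r = np.empty(n); r[0] = theta
for i in range(1, n):
    r[i] = theta + (r[i-1] - theta)*phi + sd*rng.standard_normal()

# OLS on r[t+1] = c + phi*r[t] + eps equals the conditional Gaussian MLE.
X = np.column_stack([np.ones(n-1), r[:-1]])
(c_hat, phi_hat), *_ = np.linalg.lstsq(X, r[1:], rcond=None)
resid = r[1:] - X @ np.array([c_hat, phi_hat])
kappa_hat = -np.log(phi_hat)/dt
theta_hat = c_hat/(1 - phi_hat)
sigma_hat = np.sqrt(resid.var() * 2*kappa_hat/(1 - phi_hat**2))
```

The extended model's time-dependent drift breaks this closed form, which is why the paper resorts to density expansions.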
Estimating the reliability of eyewitness identifications from police lineups.
Wixted, John T; Mickes, Laura; Dunn, John C; Clark, Steven E; Wells, William
2016-01-12
Laboratory-based mock crime studies have often been interpreted to mean that (i) eyewitness confidence in an identification made from a lineup is a weak indicator of accuracy and (ii) sequential lineups are diagnostically superior to traditional simultaneous lineups. Largely as a result, juries are increasingly encouraged to disregard eyewitness confidence, and up to 30% of law enforcement agencies in the United States have adopted the sequential procedure. We conducted a field study of actual eyewitnesses who were assigned to simultaneous or sequential photo lineups in the Houston Police Department over a 1-y period. Identifications were made using a three-point confidence scale, and a signal detection model was used to analyze and interpret the results. Our findings suggest that (i) confidence in an eyewitness identification from a fair lineup is a highly reliable indicator of accuracy and (ii) if there is any difference in diagnostic accuracy between the two lineup formats, it likely favors the simultaneous procedure.
Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.
Kim, Seock-Ho; Cohen, Allan S.
The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…
(no author listed)
2009-01-01
There are two kinds of methods for studying crustal deformation: geophysical methods and geometrical (observational) methods. Considerable differences usually exist between the two kinds of results because of datum differences, geophysical model errors, observational model errors, and so on. Thus, it is reasonable to combine the two kinds of information to extract crustal deformation information. To use the geometrical and geophysical information reliably, we have to control the influence of observational and geophysical model errors on the estimated deformation parameters, and to balance their contributions to the evaluated parameters. A hybrid estimation strategy is proposed here for evaluating the deformation parameters, employing adaptively robust filtering. The effects of measurement outliers on the estimated parameters are controlled by robust equivalent weights. Adaptive factors are introduced to balance the contributions of the geophysical model information and the geometrical measurements to the model parameters. The datum for the local deformation analysis is mainly determined by the highly accurate IGS station velocities. The hybrid estimation strategy is applied to an actual GPS monitoring network. It is shown that the hybrid technique employs locally repeated geometrical displacements to reduce the displacement errors caused by mis-modeling in the geophysical technique, and thus improves the precision of the estimated crustal deformation parameters.
Estimation of Thomsen’s anisotropic parameters from walkaway VSP and applications
Liu Yi-Mou; Liang Xiang-Hao; Yin Xing-Yao; Zhou Yi; Li Yan-Peng
2014-01-01
Estimation of Thomsen’s anisotropic parameters is very important for accurate time-to-depth conversion and depth migration processing. Compared with other methods, it is much easier and more reliable to estimate the anisotropic parameters required for surface seismic depth imaging from vertical seismic profile (VSP) data, because the first arrivals of VSP data can be picked with much higher accuracy. In this study, we developed a method for estimating Thomsen’s P-wave anisotropic parameters in VTI media using the first arrivals from walkaway VSP data. Model first-arrival travel times are calculated on the basis of the near-offset normal moveout correction velocity in VTI media and ray tracing using Thomsen’s P-wave velocity approximation. Then, the anisotropic parameters δ and ε are determined by minimizing the difference between the calculated and observed travel times for the near and far offsets. Numerical forward modeling using the proposed method indicates that the errors between the estimated and measured anisotropic parameters are small. Using field data from an eight-azimuth walkaway VSP in the Tarim Basin, we estimated the parameters δ and ε and built an anisotropic depth-velocity model for prestack depth migration processing of surface 3D seismic data. The results show improvement in imaging the carbonate reservoirs and a reduction in the depth errors of the geological targets.
Parameter estimation of stable distribution based on zero-order statistics
Chen, Jian; Chen, Hong; Cai, Xiaoxia; Weng, Pengfei; Nie, Hao
2017-08-01
With the increasing complexity of communication channels, many impulsive noise signals occur in real channels. The statistical properties of such processes deviate significantly from the Gaussian distribution, and the Alpha stable distribution provides a very useful theoretical tool for describing them. This paper focuses on parameter estimation methods for the Alpha stable distribution. First, the basic theory of the Alpha stable distribution is introduced. Then, the concepts of the logarithmic moment and the geometric power are presented. Finally, parameter estimation for the Alpha stable distribution is realized based on zero-order statistics (ZOS). This method offers better robustness and precision.
Estimation of octanol/water partition coefficients using LSER parameters
Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.
1998-01-01
The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals, which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, although only moderate accuracy should be expected.
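The regression step described above amounts to ordinary multiple linear regression of logKow on the LSER descriptors; a minimal sketch on synthetic data follows. The descriptor values and coefficients below are hypothetical, not the fitted LSER coefficients from the paper.

```python
# Multiple linear regression of a logKow-like response on four
# LSER-style descriptors, using synthetic (assumed) data.
import numpy as np

rng = np.random.default_rng(3)
n = 200
D = rng.uniform(0, 1, size=(n, 4))                 # hypothetical descriptors
true_coef = np.array([0.3, 3.2, -1.1, -3.4, 0.1])  # intercept + 4 slopes (illustrative)
X = np.column_stack([np.ones(n), D])
logKow = X @ true_coef + 0.3*rng.standard_normal(n)  # noise sd ~ like the 0.49 reported

coef, *_ = np.linalg.lstsq(X, logKow, rcond=None)
resid = logKow - X @ coef
sd = resid.std(ddof=X.shape[1])   # residual standard deviation of the fit
```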
Parameter estimation of an aeroelastic aircraft using neural networks
S C Raisinghani; A K Ghosh
2000-04-01
Application of neural networks to the problem of aerodynamic modelling and parameter estimation for aeroelastic aircraft is addressed. A neural model capable of predicting generalized force and moment coefficients using measured motion and control variables only, without any need for conventional normal elastic variables or their time derivatives, is proposed. Furthermore, it is shown that such a neural model can be used to extract equivalent stability and control derivatives of a flexible aircraft. Results are presented for aircraft with different levels of flexibility to demonstrate the utility of the neural approach for both modelling and estimation of parameters.
Parameter Estimation in Stochastic Differential Equations; An Overview
Nielsen, Jan Nygaard; Madsen, Henrik; Young, P. C.
2000-01-01
This paper presents an overview of the progress of research on parameter estimation methods for stochastic differential equations (mostly in the sense of Ito calculus) over the period 1981-1999. These are considered both without measurement noise and with measurement noise, where the discretely observed stochastic differential equations are embedded in a continuous-discrete time state space model. Every attempt has been made to include results from other scientific disciplines. Maximum likelihood estimation of parameters in nonlinear stochastic differential equations is in general not possible...
Estimation of regional pulmonary perfusion parameters from microfocal angiograms
Clough, Anne V.; Al-Tinawi, Amir; Linehan, John H.; Dawson, Christopher A.
1995-05-01
An important application of functional imaging is the estimation of regional blood flow and volume using residue detection of vascular indicators. An indicator-dilution model applicable to tissue regions distal from the inlet site was developed. Theoretical methods for determining regional blood flow, volume, and mean transit time parameters from time-absorbance curves arise from this model. The robustness of the parameter estimation methods was evaluated using a computer-simulated vessel network model. Flow through arterioles, networks of capillaries, and venules was simulated. Parameter identification and practical implementation issues were addressed. The shape of the inlet concentration curve and moderate amounts of random noise did not affect the ability of the method to recover accurate parameter estimates. The parameter estimates degraded in the presence of significant dispersion of the measured inlet concentration curve as it traveled through arteries upstream from the microvascular region. The methods were applied to image data obtained using microfocal x-ray angiography to study the pulmonary microcirculation. Time-absorbance curves were acquired from a small feeding artery, the surrounding microvasculature, and a draining vein of an isolated dog lung as contrast material passed through the field of view. Changes in regional microvascular volume were determined from these curves.
Parameter Estimation for Single Diode Models of Photovoltaic Modules
Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.
2015-03-01
Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit, and maximum power points for a single I-V curve at standard test conditions, together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of the full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
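The forward model that such estimation procedures fit can be sketched by solving the implicit single-diode equation numerically at each voltage point. The module parameters below are hypothetical (not from the report), and simple root bracketing stands in for any particular fitting method.

```python
# Single-diode model: I = IL - I0*(exp((V+I*Rs)/a) - 1) - (V+I*Rs)/Rsh,
# solved for I at each V by bracketed root finding.
import numpy as np
from scipy.optimize import brentq

# Hypothetical module parameters (assumed for illustration)
IL, I0, Rs, Rsh, a = 5.0, 1e-9, 0.3, 300.0, 1.6   # a = n*Ns*Vth

def diode_current(V):
    f = lambda I: IL - I0*(np.exp((V + I*Rs)/a) - 1.0) - (V + I*Rs)/Rsh - I
    return brentq(f, -1.0, IL + 1.0)   # bracket valid for V in [0, ~Voc]

V = np.linspace(0.0, 35.0, 71)
I = np.array([diode_current(v) for v in V])
P = V * I   # power curve; its maximum is the module's maximum power point
```

Parameter estimation then amounts to adjusting (IL, I0, Rs, Rsh, a) so that such computed curves match measured I-V data over many irradiance and temperature conditions.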
Parameter Estimation Technique of Nonlinear Prosthetic Hand System
M.H.Jali
2016-10-01
This paper illustrates a parameter estimation technique for a motorized prosthetic hand system. Prosthetic hands have become important devices for helping amputees regain normal hand function. By integrating various types of actuators, such as DC motors, hydraulics, and pneumatics, as well as mechanical parts, a highly useful and functional prosthetic device can be produced. One of the first steps in developing a prosthetic device is to design a control system, and a mathematical model is derived to ease the later control design process. This paper explains the parameter estimation technique for a nonlinear dynamic model of the system derived using the Lagrangian equation. The model is obtained by considering the energies of the finger when it is actuated by the DC motor. The parameter estimation is implemented using the Simulink Design Optimization toolbox in MATLAB, and all parameters are optimized until a satisfactory output response is achieved. The results show that the system with the estimated parameter values produces a better output response than the one with the default values.
Robust Nonlinear Regression in Enzyme Kinetic Parameters Estimation
Maja Marasović
2017-01-01
Accurate estimation of essential enzyme kinetic parameters, such as Km and Vmax, is very important in modern biology. To this date, linearization of kinetic equations is still widely established practice for determining these parameters in chemical and enzyme catalysis. Although the simplicity of linear optimization is alluring, these methods have certain pitfalls due to which they more often than not result in misleading estimates of enzyme parameters. In order to obtain more accurate predictions of parameter values, the use of nonlinear least-squares fitting techniques is recommended. However, when there are outliers present in the data, these techniques become unreliable. This paper proposes the use of a robust nonlinear regression estimator based on a modified Tukey biweight function that can provide more resilient results in the presence of outliers and/or influential observations. Real and synthetic kinetic data have been used to test our approach. Monte Carlo simulations are performed to illustrate the efficacy and robustness of the biweight estimator in comparison with the standard linearization methods and ordinary least-squares nonlinear regression. We then apply this method to experimental data for the tyrosinase enzyme (EC 1.14.18.1) extracted from Solanum tuberosum, Agaricus bisporus, and Pleurotus ostreatus. The results on both artificial and experimental data clearly show that the proposed robust estimator can be successfully employed to determine accurate values of Km and Vmax.
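A minimal sketch of a robust fit in this spirit: iteratively reweighted nonlinear least squares with Tukey's biweight applied to the Michaelis-Menten model v = Vmax·S/(Km + S). The data, the injected outlier, and the tuning constant c = 4.685 are illustrative assumptions, not the paper's exact estimator.

```python
# Robust Michaelis-Menten fit: alternate a weighted NLS solve with
# Tukey-biweight weight updates until the outlier is downweighted.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
S = np.array([0.5, 1, 2, 4, 8, 16, 32, 64.0])      # substrate levels (assumed)
Vmax_true, Km_true = 2.0, 3.0
v = Vmax_true*S/(Km_true + S) + 0.02*rng.standard_normal(S.size)
v[2] += 0.8                                        # inject one gross outlier

def model(p, S):
    return p[0]*S/(p[1] + S)

def tukey_weights(r, c=4.685):
    s = 1.4826*np.median(np.abs(r - np.median(r)))  # robust scale (MAD)
    u = r/(c*s)
    return np.where(np.abs(u) < 1, (1 - u**2)**2, 0.0)

p = np.array([1.0, 1.0])
w = np.ones(S.size)
for _ in range(20):
    sol = least_squares(lambda q: np.sqrt(w)*(model(q, S) - v), p,
                        bounds=(0, np.inf))
    p = sol.x
    w = tukey_weights(model(p, S) - v)
Vmax_hat, Km_hat = p
```

With the outlier's weight driven to (near) zero, the remaining points dominate and the estimates recover Vmax and Km far better than an unweighted fit would.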
Estimation of the parameters of ETAS models by Simulated Annealing
Lombardi, Anna Maria
2015-01-01
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is...
CADLIVE optimizer: web-based parameter estimation for dynamic models
Inoue Kentaro
2012-08-01
Computer simulation has been an important technique for capturing the dynamics of biochemical networks. In most networks, however, few kinetic parameters have been measured in vivo because of experimental complexity. We have developed a kinetic parameter estimation system, named the CADLIVE Optimizer, which comprises genetic algorithm-based solvers with a graphical user interface. This optimizer is integrated into the CADLIVE Dynamic Simulator to enable efficient simulation of dynamic models.
Human ECG signal parameters estimation during controlled physical activity
Maciejewski, Marcin; Surtel, Wojciech; Dzida, Grzegorz
2015-09-01
ECG signal parameters are commonly used indicators of human health. In most cases the patient should remain stationary during the examination to reduce the influence of muscle artifacts; during physical activity, the noise level increases significantly. Here, ECG signals were acquired during controlled physical activity on a stationary bicycle and during rest. Afterwards, the signals were processed with a method based on the Pan-Tompkins algorithm to estimate their parameters and to test the method.
Iterative Smooth Variable Structure Filter for Parameter Estimation
Mohammad Al-Shabi; Saeid Habibi
2011-01-01
The smooth variable structure filter (SVSF) is a recently proposed predictor-corrector filter for state and parameter estimation. The SVSF is based on the sliding mode control concept. It defines a hyperplane in terms of the state trajectory and then applies a discontinuous corrective action that forces the estimate back and forth across that hyperplane. The SVSF is robust and stable to modeling uncertainties, making it suitable for fault detection applications. The discontinuous action o...
Targeted estimation of nuisance parameters to obtain valid statistical inference.
van der Laan, Mark J
2014-01-01
In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special
On optimum parameter modulation-estimation from a large deviations perspective
Merhav, Neri
2012-01-01
We consider the problem of jointly optimum modulation and estimation of a real-valued random parameter, conveyed over an additive white Gaussian noise (AWGN) channel, where the performance metric is the large deviations behavior of the estimator, namely, the exponential decay rate (as a function of the observation time) of the probability that the estimation error would exceed a certain threshold. Our basic result is in providing an exact characterization of the fastest achievable exponential decay rate, among all possible modulator-estimator (transmitter-receiver) pairs, where the modulator is limited only in the signal power, but not in bandwidth. This exponential rate turns out to be given by the reliability function of the AWGN channel. We also discuss several ways to achieve this optimum performance, and one of them is based on quantization of the parameter, followed by optimum channel coding and modulation, which gives rise to a separation-based transmitter, if one views this setting from the perspectiv...
Bayesian estimation of parameters in a regional hydrological model
K. Engeland
2002-01-01
This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are statistically more rigorous likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
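The MCMC machinery referred to above can be sketched with a deliberately tiny example: random-walk Metropolis sampling of the posterior of a single model parameter under a Gaussian likelihood and flat prior. The linear "model" and all numbers are assumptions for illustration; the Ecomag application involves many parameters and a far richer likelihood.

```python
# Random-walk Metropolis for one parameter 'a' of a toy model y = a*x,
# with known noise scale and a flat prior.
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(1, 10, 50)
a_true, sig = 2.5, 0.5                      # assumed "truth" and noise
y = a_true*x + sig*rng.standard_normal(x.size)

def log_post(a):                            # flat prior -> log-likelihood
    return -0.5*np.sum((y - a*x)**2)/sig**2

chain = np.empty(5000)
a, lp = 1.0, log_post(1.0)                  # deliberately poor start
for i in range(chain.size):
    prop = a + 0.05*rng.standard_normal()   # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        a, lp = prop, lp_prop
    chain[i] = a

post = chain[1000:]                         # discard burn-in
a_hat = post.mean()
```

The GLUE formulation in the paper swaps the statistical log-likelihood for a goodness-of-fit score; the sampling loop itself is unchanged.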
Bayesian parameter estimation in spectral quantitative photoacoustic tomography
Pulkkinen, Aki; Cox, Ben T.; Arridge, Simon R.; Kaipio, Jari P.; Tarvainen, Tanja
2016-03-01
Photoacoustic tomography (PAT) is an imaging technique combining the strong contrast of optical imaging with the high spatial resolution of ultrasound imaging. These strengths are achieved via the photoacoustic effect, in which spatially absorbed energy from a light pulse is converted into a measurable propagating ultrasound wave. The method is seen as a potential tool for small animal imaging, pre-clinical investigations, the study of blood vessels and vasculature, and cancer imaging. The goal in PAT is to form an image of the absorbed optical energy density field from the measured ultrasound data via acoustic inverse problem approaches. Quantitative PAT (QPAT) proceeds from these images and forms quantitative estimates of the optical properties of the target. This optical inverse problem of QPAT is ill-posed. To alleviate the issue, spectral QPAT (SQPAT) utilizes PAT data formed at multiple optical wavelengths simultaneously, together with optical parameter models of tissue, to form quantitative estimates of the parameters of interest. In this work, the inverse problem of SQPAT is investigated. Light propagation is modelled using the diffusion equation. Optical absorption is described by a chromophore-concentration-weighted sum of known chromophore absorption spectra. Scattering is described by Mie scattering theory with an exponential power law. In the inverse problem, the spatially varying unknown parameters of interest are the chromophore concentrations, the Mie scattering parameters (power-law factor and exponent), and the Grüneisen parameter. The inverse problem is approached with a Bayesian method. It is numerically demonstrated that estimation of all parameters of interest is possible with this approach.
Empirical Study of Travel Time Estimation and Reliability
Ruimin Li; Huajun Chai; Jin Tang
2013-01-01
This paper explores the travel time distribution of different types of urban roads, the link and path average travel time, and variance estimation methods by analyzing the large-scale travel time dataset detected from automatic number plate readers installed throughout Beijing. The results show that the best-fitting travel time distribution for different road links in 15 min time intervals differs for different traffic congestion levels. The average travel time for all links on all days can b...
Reliability of panoramic radiography in chronological age estimation
Ramanpal Singh Makkad
2013-01-01
Introduction: There is a strong relationship between the growth rates of bone and teeth, which can be used for age identification of an individual. Aims and Objective: The present study was designed to determine the relationship between dental age, age from dental panoramic radiography, skeletal age, and chronological age. Materials and Methods: The study included 270 individuals aged between 17 and 25 years, drawn from the out-patient department of New Horizon Dental College and Hospital, Sakri, Bilaspur, Chhattisgarh, India, for third molar surgery. Panoramic and hand-wrist radiographs were taken, and the films were digitally processed for visualization of the wisdom teeth. Age assessments were repeated by a radiologist after an interval of 4 weeks. The extracted wisdom teeth were placed in 10% formalin and examined by one dental surgeon to estimate age on the basis of root formation. Student's t-test was adopted for statistical analysis and the probability (P) value was calculated. Conclusion: Age estimation by examination of the extracted third molar was accurate. Age estimation through panoramic radiography was highly accurate in the upper right quadrant (mean = 0.72, P = 0.077).
Jianwei Yang
2016-06-01
To address reliability assessment for a braking system component of high-speed electric multiple units, this article provides, based on the two-parameter exponential distribution, maximum likelihood estimation and Bayes estimation under a type-I life test. First, we evaluate the failure probability according to the classical estimation method and then obtain the maximum likelihood estimates of the parameters of the two-parameter exponential distribution using a modified likelihood function. In addition, based on Bayesian theory, this article selects the beta and gamma distributions as the prior distributions, combines them with the modified maximum likelihood function, and applies a Markov chain Monte Carlo (MCMC) algorithm to parameter assessment based on the Bayes estimation method for the two-parameter exponential distribution, so that two reliability mathematical models of the electromagnetic valve are obtained. Finally, through the type-I life test, the failure rates according to the maximum likelihood estimation and the MCMC-based Bayes estimation method are 2.650 × 10⁻⁵ and 3.037 × 10⁻⁵, respectively. Compared with the observed failure rate of the electromagnetic valve, 3.005 × 10⁻⁵, this shows that the Bayes method can use an MCMC algorithm to estimate reliability for the two-parameter exponential distribution and that the Bayes estimate is closer to the observed value. By fully integrating multi-source information, the Bayes estimation method can better refine and more precisely estimate the parameters, which can provide a theoretical basis for the safe operation of high-speed electric multiple units.
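The classical (maximum likelihood) side of the comparison above can be sketched directly: for the two-parameter exponential under type-I censoring at time τ with r observed failures out of n units, the standard MLEs are μ̂ = x₍₁₎ and θ̂ = [Σ(xᵢ − μ̂) + (n − r)(τ − μ̂)]/r, with failure rate λ̂ = 1/θ̂. The sample size and parameter values below are illustrative assumptions, not the electromagnetic-valve data.

```python
# Type-I censored life test for a two-parameter exponential:
# density f(x) = (1/theta)*exp(-(x-mu)/theta) for x >= mu.
import numpy as np

rng = np.random.default_rng(6)
n, tau = 200, 5.0                      # units on test, censoring time (assumed)
mu_true, theta_true = 1.0, 3.0
x = mu_true + rng.exponential(theta_true, n)

obs = np.sort(x[x <= tau])             # failure times seen before tau
r = obs.size                           # number of observed failures

mu_hat = obs[0]                        # MLE of the threshold: smallest failure time
# MLE of the scale: total (shifted) time on test divided by failures,
# counting (n - r) censored units as surviving to tau.
theta_hat = (np.sum(obs - mu_hat) + (n - r)*(tau - mu_hat))/r
lam_hat = 1.0/theta_hat                # estimated failure rate
```

The Bayesian route in the paper replaces these point formulas with MCMC draws from a posterior built on the same censored likelihood.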
Key Parameters Estimation and Adaptive Warning Strategy for Rear-End Collision of Vehicle
Xiang Song
2015-01-01
The rear-end collision warning system requires a reliable warning decision mechanism that adapts to the actual driving situation. To overcome the shortcomings of existing warning methods, an adaptive strategy is proposed to address the practical aspects of the collision warning problem. The proposed strategy is based on parameter-adaptive and variable-threshold approaches. First, several key parameter estimation algorithms are developed to provide more accurate and reliable information for the subsequent warning method: a two-stage algorithm combining a Kalman filter and a Luenberger observer for relative acceleration estimation, a Bayesian algorithm for estimating the road friction coefficient, and an artificial neural network for estimating the driver’s reaction time. Further, a variable-threshold warning method is designed to achieve the global warning decision. In this method, the safety distance is used to judge the dangerous state, and its calculation can be adaptively adjusted according to the different driving conditions of the leading vehicle. Owing to the real-time estimation of the key parameters and the adaptive calculation of the warning threshold, the strategy can adapt to various road and driving conditions. Finally, the proposed strategy is evaluated through simulation and field tests. The experimental results validate its feasibility and effectiveness.
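Safety-distance thresholds of the kind mentioned above are commonly built from a kinematic stopping-distance argument; one standard form (an assumption here, not necessarily the paper's exact expression) is sketched below. The adaptive element would come from feeding in the estimated road friction (via the decelerations) and the estimated reaction time.

```python
# Kinematic safety gap: the following car covers v_f*t_r during the
# driver's reaction time, then both vehicles brake to a stop.
def safety_distance(v_f, v_l, a_f, a_l, t_r, d0=2.0):
    """v_f/v_l: following/leading speeds (m/s); a_f/a_l: available
    decelerations (m/s^2); t_r: reaction time (s); d0: standstill margin (m)."""
    braking_f = v_f**2 / (2*a_f)
    braking_l = v_l**2 / (2*a_l)
    return max(v_f*t_r + braking_f - braking_l + d0, d0)

# Example: following at 25 m/s behind a car at 20 m/s, dry-road braking.
d = safety_distance(v_f=25.0, v_l=20.0, a_f=6.0, a_l=6.0, t_r=1.2)
```

On low-friction roads the estimated decelerations shrink and the threshold grows, which is exactly the adaptivity the strategy exploits.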
Revisiting Boltzmann learning: parameter estimation in Markov random fields
Hansen, Lars Kai; Andersen, Lars Nonboe; Kjems, Ulrik
1996-01-01
This article presents a generalization of the Boltzmann machine that allows us to use the learning rule for a much wider class of maximum likelihood and maximum a posteriori problems, including both supervised and unsupervised learning. Furthermore, the approach allows us to discuss regularization...... and generalization in the context of Boltzmann machines. We provide an illustrative example concerning parameter estimation in an inhomogeneous Markov field. The regularized adaptation produces a parameter set that closely resembles the “teacher” parameters, hence, will produce segmentations that closely reproduce...
Parameter identification and slip estimation of induction machine
Orman, Maciej; Orkisz, Michal; Pinto, Cajetan T.
2011-05-01
This paper presents a newly developed algorithm for induction machine rotor speed estimation and parameter detection. The proposed algorithm is based on spectrum analysis of the stator current. The main idea is to find the best fit of motor parameters and rotor slip with the group of characteristic frequencies which are always present in the current spectrum. Rotor speed and parameters such as pole pairs or number of rotor slots are the results of the presented algorithm. Numerical calculations show that the method yields very accurate results and can be an important part of machine monitoring systems.
Parameter Estimation in Stochastic Grey-Box Models
Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay
2004-01-01
An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended...... Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool...
Low Complexity Parameter Estimation For Off-the-Grid Targets
Jardak, Seifallah
2015-10-05
In multiple-input multiple-output radar, to estimate the reflection coefficient, spatial location, and Doppler shift of a target, a derived cost function is usually evaluated and optimized over a grid of points. The performance of such algorithms is directly affected by the size of the grid: increasing the number of points will enhance the resolution of the algorithm but exponentially increase its complexity. In this work, to estimate the parameters of a target, a reduced-complexity super-resolution algorithm is proposed. For off-the-grid targets, it uses a low-order two-dimensional fast Fourier transform to determine a suboptimal solution and then an iterative algorithm to jointly estimate the spatial location and Doppler shift. Simulation results show that the mean square estimation errors of the proposed estimators achieve the Cramér-Rao lower bound. © 2015 IEEE.
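As an illustration of the coarse-grid-then-refine idea (not the paper's algorithm), the following sketch estimates a single off-grid complex tone: a coarse search on the DFT grid is followed by a golden-section refinement inside the mainlobe, where the noiseless periodogram of a single tone is unimodal. The signal length and frequency are made up for the example.

```python
import cmath
import math

def periodogram(x, f):
    # |DTFT of x at continuous frequency f (cycles/sample)|^2
    s = sum(x[n] * cmath.exp(-2j * math.pi * f * n) for n in range(len(x)))
    return abs(s) ** 2

def estimate_tone(x, refine_iters=60):
    N = len(x)
    # Coarse stage: evaluate on the FFT grid k/N and keep the best bin.
    k0 = max(range(N), key=lambda k: periodogram(x, k / N))
    # Refinement stage: golden-section search within half a bin of the coarse
    # peak, which lies inside the mainlobe where the periodogram is unimodal.
    a, b = k0 / N - 0.5 / N, k0 / N + 0.5 / N
    gr = (math.sqrt(5) - 1) / 2
    c, d = b - gr * (b - a), a + gr * (b - a)
    for _ in range(refine_iters):
        if periodogram(x, c) > periodogram(x, d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return (a + b) / 2

N, f_true = 64, 0.1234          # f_true deliberately off the N-point grid
x = [cmath.exp(2j * math.pi * f_true * n) for n in range(N)]
f_hat = estimate_tone(x)
```

The coarse stage costs one length-N grid sweep; the refinement converges geometrically, which is the complexity saving over densely oversampling the grid.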
Online vegetation parameter estimation using passive microwave remote sensing observations
In adaptive system identification the Kalman filter can be used to identify the coefficient of the observation operator of a linear system. Here the ensemble Kalman filter is tested for adaptive online estimation of the vegetation opacity parameter of a radiative transfer model. A state augmentation...
On Modal Parameter Estimates from Ambient Vibration Tests
Agneni, A.; Brincker, Rune; Coppotelli, B.
2004-01-01
Modal parameter estimates from ambient vibration testing are turning into the preferred technique when one is interested in systems under actual loadings and operational conditions. Moreover, with this approach, expensive devices to excite the structure are not needed, since it can be adequately...
Cubic spline approximation techniques for parameter estimation in distributed systems
Banks, H. T.; Crowley, J. M.; Kunisch, K.
1983-01-01
Approximation schemes employing cubic splines in the context of a linear semigroup framework are developed for both parabolic and hyperbolic second-order partial differential equation parameter estimation problems. Convergence results are established for problems with linear and nonlinear systems, and a summary of numerical experiments with the techniques proposed is given.
Estimation of coal quality parameters using disjunctive kriging
Tercan, A.E. [Hacettepe University, Department of Mining Engineering, Beytepe (Turkey)
1998-07-01
Disjunctive kriging is a nonlinear estimation technique that allows estimation of the conditional probability that the value of a coal quality parameter is greater than a cutoff value. The method can be used in management decision making to help control blending and plan coal quality sampling. The use of disjunctive kriging is illustrated using data from the Kangal coal deposit. 7 refs.
IRT parameter estimation with response times as collateral information
Linden, W.J. van der; Klein Entink, R.H.; Fox, J.-P.
2010-01-01
Hierarchical modeling of responses and response times on test items facilitates the use of response times as collateral information in the estimation of the response parameters. In addition to the regular information in the response data, two sources of collateral information are identified: (a) the
Parameter Estimates in Differential Equation Models for Population Growth
Winkel, Brian J.
2011-01-01
We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
A Sparse Bayesian Learning Algorithm With Dictionary Parameter Estimation
Hansen, Thomas Lundgaard; Badiu, Mihai Alin; Fleury, Bernard Henri
2014-01-01
This paper concerns sparse decomposition of a noisy signal into atoms which are specified by unknown continuous-valued parameters. An example could be estimation of the model order, frequencies and amplitudes of a superposition of complex sinusoids. The common approach is to reduce the continuous...
Visco-piezo-elastic parameter estimation in laminated plate structures
Araujo, A. L.; Mota Soares, C. M.; Herskovits, J.;
2009-01-01
A parameter estimation technique is presented in this article, for identification of elastic, piezoelectric and viscoelastic properties of active laminated composite plates with surface-bonded piezoelectric patches. The inverse method presented uses experimental data in the form of a set of measu...
Rome Keith
2011-03-01
Abstract Background A clinical study was conducted to determine the intra- and inter-rater reliability of digital scanning and the neutral suspension casting technique for measuring six foot parameters. The neutral suspension casting technique is a commonly utilised method for obtaining a negative impression of the foot prior to orthotic fabrication. Digital scanning offers an alternative to traditional plaster of Paris techniques. Methods Twenty-one healthy participants volunteered to take part in the study. Six casts and six digital scans were obtained from each participant by two raters of differing clinical experience. The foot parameters chosen for investigation were cast length (mm), forefoot width (mm), rearfoot width (mm), medial arch height (mm), lateral arch height (mm) and forefoot to rearfoot alignment (degrees). Intraclass correlation coefficients (ICC) with 95% confidence intervals (CI) were calculated to determine the intra- and inter-rater reliability. Measurement error was assessed through the calculation of the standard error of measurement (SEM) and the smallest real difference (SRD). Results ICC values for all foot parameters using digital scanning ranged between 0.81 and 0.99 for both intra- and inter-rater reliability. For the neutral suspension casting technique, inter-rater reliability values ranged from 0.57 to 0.99, with intra-rater reliability values ranging from 0.36 to 0.99 for rater 1 and 0.49 to 0.99 for rater 2. Conclusions The findings of this study indicate that digital scanning is a reliable technique, irrespective of clinical experience, with reduced measurement variability in all foot parameters investigated when compared to neutral suspension casting.
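Intraclass correlation coefficients such as those reported above can be computed directly from a two-way ANOVA decomposition. Below is a minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater) on made-up ratings; the numbers are illustrative only, not the study's data.

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is a list of rows (subjects), each a list of scores (one per rater)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_m = [sum(row) / k for row in ratings]
    col_m = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row_m) / (n - 1)   # between subjects
    msc = n * sum((m - grand) ** 2 for m in col_m) / (k - 1)   # between raters
    sse = sum((ratings[i][j] - row_m[i] - col_m[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters scoring five subjects (hypothetical values).
perfect = [[10, 10], [20, 20], [30, 30], [40, 40], [50, 50]]
noisy = [[10, 10.2], [20, 19.8], [30, 30.1], [40, 40.3], [50, 49.9]]
icc_perfect = icc_2_1(perfect)
icc_noisy = icc_2_1(noisy)
```

With identical ratings the coefficient is exactly 1; small rater disagreement pulls it below 1, which is the behaviour the study's 0.36 to 0.99 range reflects.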
Cosmological parameter estimation with free-form primordial power spectrum
Hazra, Dhiraj Kumar; Souradeep, Tarun
2013-01-01
Constraints on the main cosmological parameters using CMB or large scale structure data are usually based on a power-law assumption for the primordial power spectrum (PPS). However, in the absence of a preferred model for the early universe, this raises a concern that current cosmological parameter estimates are strongly prejudiced by the assumed power-law form of the PPS. In this paper, for the first time, we perform cosmological parameter estimation allowing a free form of the primordial spectrum. This is in fact the most general approach to estimating cosmological parameters without assuming any particular form for the primordial spectrum. We use direct reconstruction of the PPS at any point in the cosmological parameter space using a recently modified Richardson-Lucy algorithm; other reconstruction methods could be used for this purpose as well. We use WMAP 9-year data in our analysis, considering the CMB lensing effect, and we report, for the first time, that the flat spatial universe with no cosmol...
Effect of noncircularity of experimental beam on CMB parameter estimation
Das, Santanu; Mitra, Sanjit; Tabitha Paulson, Sonu
2015-03-01
Measurement of Cosmic Microwave Background (CMB) anisotropies has been playing a lead role in precision cosmology by providing some of the tightest constraints on cosmological models and parameters. However, precision can only be meaningful when all major systematic effects are taken into account. Non-circular beams in CMB experiments can cause large systematic deviations in the angular power spectrum, not only by modifying the measurement at a given multipole, but also by introducing coupling between different multipoles through a deterministic bias matrix. Here we add a mechanism for emulating the effect of a full bias matrix to the PLANCK likelihood code through the parameter estimation code SCoPE. We show that if the angular power spectrum was measured with a non-circular beam, the assumption of a circular Gaussian beam, or considering only the diagonal part of the bias matrix, can lead to large errors in parameter estimation. We demonstrate that, at least for elliptical Gaussian beams, use of scalar beam window functions obtained via Monte Carlo simulations starting from a fiducial spectrum, as implemented in PLANCK analyses for example, leads to best-fit parameter deviations of only a few percent of sigma. However, we notice more significant differences in the posterior distributions for some of the parameters, which would in turn lead to incorrect error bars. These differences can be reduced, so that the error bars match within a few percent, by adding an iterative reanalysis step in which the beam window function is recomputed using the best-fit spectrum estimated in the first step.
Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea
Sawlan, Zaid A
2012-12-01
Tsunami concerns have increased worldwide after the 2004 Indian Ocean tsunami and the 2011 Tohoku tsunami. Consequently, tsunami models have developed rapidly in the last few years. One of the advanced tsunami models is the GeoClaw tsunami model introduced by LeVeque (2011). This model is adaptive and consistent. Because of different sources of uncertainty in the model, observations are needed to improve model prediction through a data assimilation framework. The model inputs are earthquake parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines a tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction while the smoother estimates the earthquake parameters. This method reduces the error produced by uncertain inputs. In addition, a state-parameter EnKF is implemented to estimate the earthquake parameters. Although the number of observations is small, the estimated parameters generate a better tsunami prediction than the model alone. Methods and results of prediction experiments in the Red Sea are presented, and the prospect of developing an operational tsunami prediction system in the Red Sea is discussed.
Estimation of rice biophysical parameters using multitemporal RADARSAT-2 images
Li, S.; Ni, P.; Cui, G.; He, P.; Liu, H.; Li, L.; Liang, Z.
2016-04-01
Compared with optical sensors, synthetic aperture radar (SAR) has the capability of acquiring images in all weather conditions. Thus, SAR images are suitable for use in rice growth regions that are characterized by frequent cloud cover and rain. The objective of this paper was to evaluate the feasibility of estimating rice biophysical parameters using multitemporal RADARSAT-2 images and to develop the estimation models. Three RADARSAT-2 images were acquired during the rice critical growth stages in 2014 near Meishan, Sichuan province, Southwest China. Leaf area index (LAI), the fraction of photosynthetically active radiation (FPAR), height, biomass and canopy water content (WC) were observed at 30 experimental plots over 5 periods. The relationships between RADARSAT-2 backscattering coefficients (σ0) or their ratios and rice biophysical parameters were analysed. These biophysical parameters were significantly and consistently correlated with the VV and VH σ0 ratio (σ0VV/σ0VH) throughout all growth stages. Regression models were developed between the biophysical parameters and σ0VV/σ0VH. The results suggest that RADARSAT-2 data have great potential for rice biophysical parameter estimation and timely rice growth monitoring.
PARAMETER ESTIMATION METHODOLOGY FOR NONLINEAR SYSTEMS: APPLICATION TO INDUCTION MOTOR
G. Kenne; F. Floret; H. Nkwawo; F. Lamnabhi-Lagarrigue
2005-01-01
This paper deals with on-line state and parameter estimation of a reasonably large class of nonlinear continuous-time systems using a step-by-step sliding mode observer approach. The method proposed can also be used for adaptation to parameters that vary with time. The other interesting feature of the method is that it is easily implementable in real-time. The efficiency of this technique is demonstrated via the on-line estimation of the electrical parameters and rotor flux of an induction motor. This application is based on the standard model of the induction motor expressed in rotor coordinates with the stator current and voltage as well as the rotor speed assumed to be measurable. Real-time implementation results are then reported and the ability of the algorithm to rapidly estimate the motor parameters is demonstrated. These results show the robustness of this approach with respect to measurement noise, discretization effects, parameter uncertainties and modeling inaccuracies. Comparisons between the results obtained and those of the classical recursive least square algorithm are also presented. The real-time implementation results show that the proposed algorithm gives better performance than the recursive least square method in terms of the convergence rate and the robustness with respect to measurement noise.
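A minimal sketch of the recursive least-squares baseline mentioned above, with a forgetting factor, applied to a made-up two-parameter linear regression rather than the induction motor model itself:

```python
import random

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive least-squares step with forgetting factor lam.
    theta: parameter estimate, P: 2x2 covariance, phi: regressor, y: measurement."""
    # Gain K = P*phi / (lam + phi'*P*phi); P is kept symmetric, so phi'*P = (P*phi)'.
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]
    err = y - (theta[0] * phi[0] + theta[1] * phi[1])   # prediction error
    theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
    # Covariance update: P = (P - K*phi'*P) / lam
    P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(2)] for i in range(2)]
    return theta, P

random.seed(1)
a, b = 2.0, -0.5                 # "true" parameters, illustrative values only
theta, P = [0.0, 0.0], [[1000.0, 0.0], [0.0, 1000.0]]
for _ in range(500):
    phi = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = a * phi[0] + b * phi[1] + random.gauss(0, 0.01)
    theta, P = rls_update(theta, P, phi, y)
```

The forgetting factor lam < 1 keeps the gain from vanishing, so the estimator can track slowly varying parameters, at the cost of larger steady-state variance.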
Estimation of parameter sensitivities for stochastic reaction networks
Gupta, Ankit
2016-01-07
Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a continuous-time Markov chain whose states represent the molecular counts of various species. For such models, effects of parameter uncertainty are often quantified by estimating the infinitesimal sensitivities of some observables with respect to model parameters. The aim of this talk is to present a holistic approach towards this problem of estimating parameter sensitivities for stochastic reaction networks. Our approach is based on a generic formula which allows us to construct efficient estimators for parameter sensitivity using simulations of the underlying model. We will discuss how novel simulation techniques, such as tau-leaping approximations, multi-level methods etc. can be easily integrated with our approach and how one can deal with stiff reaction networks where reactions span multiple time-scales. We will demonstrate the efficiency and applicability of our approach using many examples from the biological literature.
Hybrid fault diagnosis of nonlinear systems using neural parameter estimators.
Sobhani-Tehrani, E; Talebi, H A; Khorasani, K
2014-02-01
This paper presents a novel integrated hybrid approach for fault diagnosis (FD) of nonlinear systems taking advantage of both the system's mathematical model and the adaptive nonlinear approximation capability of computational intelligence techniques. Unlike most FD techniques, the proposed solution simultaneously accomplishes fault detection, isolation, and identification (FDII) within a unified diagnostic module. At the core of this solution is a bank of adaptive neural parameter estimators (NPEs) associated with a set of single-parameter fault models. The NPEs continuously estimate unknown fault parameters (FPs) that are indicators of faults in the system. Two NPE structures, series-parallel and parallel, are developed, each with its own set of desirable attributes. The parallel scheme is extremely robust to measurement noise and possesses a simpler, yet more solid, fault isolation logic. In contrast, the series-parallel scheme displays short FD delays and is robust to closed-loop system transients due to changes in control commands. Finally, a fault tolerant observer (FTO) is designed to extend the capability of the two NPEs, which originally assume full state measurements, to systems that have only partial state measurements. The proposed FTO is a neural state estimator that can estimate unmeasured states even in the presence of faults. The estimated and the measured states then comprise the inputs to the two proposed FDII schemes. Simulation results for FDII of reaction wheels of a three-axis stabilized satellite in the presence of disturbances and noise demonstrate the effectiveness of the proposed FDII solutions under partial state measurements.
Han Ming (韩明)
2013-01-01
The author previously introduced a new parameter estimation method, the E-Bayesian estimation method, to estimate reliability derived from the binomial distribution: the definition of the E-Bayesian estimate of reliability was given, along with formulas for the E-Bayesian and hierarchical Bayesian estimates, but the properties of the E-Bayesian estimate were not provided. In this paper, properties of the E-Bayesian estimate of reliability for the binomial distribution are provided.
Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik;
1995-01-01
and the growth of the biomass are described by the Monod model consisting of two nonlinear coupled first-order differential equations. The objective of this study was to estimate the kinetic parameters in the Monod model and to test whether the parameters from the three identical experiments have the same values....... Estimation of the parameters was obtained using an iterative maximum likelihood method, and the test used was an approximate likelihood ratio test. The test showed that the three sets of parameters were identical only at a 4% significance level....
Rajesh Singh
2016-06-01
In this paper, the failure intensity has been characterized by a one-parameter length-biased exponential class Software Reliability Growth Model (SRGM), considering a Poisson process of occurrence of software failures. The proposed length-biased exponential class model is a function of two parameters, namely the total number of failures θ0 and the scale parameter θ1. It is assumed that very little or no information is available about these parameters. The Bayes estimators for the parameters θ0 and θ1 have been obtained using non-informative priors for each parameter under a squared error loss function. The Monte Carlo simulation technique is used to study the performance of the proposed Bayes estimators against their corresponding maximum likelihood estimators on the basis of risk efficiencies. It is concluded that both proposed Bayes estimators, of the total number of failures and of the scale parameter, perform well for a proper choice of execution time.
Estimating Arrhenius parameters using temperature programmed molecular dynamics
Imandi, Venkataramana; Chatterjee, Abhijit
2016-07-01
Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
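The core estimation step described above (an exponential-waiting-time MLE per temperature followed by a linear Arrhenius fit) can be sketched as follows; the prefactor, barrier, temperatures and sample sizes are illustrative assumptions, not values from the paper.

```python
import math
import random

KB = 8.617e-5  # Boltzmann constant in eV/K

def mle_rate(waiting_times):
    """MLE of an exponential escape rate from observed waiting times: k = n / sum(t)."""
    return len(waiting_times) / sum(waiting_times)

def arrhenius_fit(temps, rates):
    """Least-squares fit of ln k = ln A - Ea/(kB*T); returns (A, Ea) with Ea in eV."""
    xs = [1.0 / (KB * T) for T in temps]
    ys = [math.log(k) for k in rates]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return math.exp(ybar - slope * xbar), -slope   # prefactor A, barrier Ea

random.seed(2)
A_true, Ea_true = 1e12, 0.5        # hypothetical prefactor (1/s) and barrier (eV)
temps = [600, 700, 800, 900, 1000]
rates = []
for T in temps:
    k = A_true * math.exp(-Ea_true / (KB * T))
    # 800 synthetic exponential waiting times stand in for the TPMD transitions.
    rates.append(mle_rate([random.expovariate(k) for _ in range(800)]))
A_hat, Ea_hat = arrhenius_fit(temps, rates)
```

Once A and Ea are in hand, rates at low temperatures where transitions are rare follow from the fitted Arrhenius expression rather than from direct simulation.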
Estimation of Soft Tissue Mechanical Parameters from Robotic Manipulation Data.
Boonvisut, Pasu; Cavuşoğlu, M Cenk
2013-10-01
Robotic motion planning algorithms used for task automation in robotic surgical systems rely on availability of accurate models of target soft tissue's deformation. Relying on generic tissue parameters in constructing the tissue deformation models is problematic because, biological tissues are known to have very large (inter- and intra-subject) variability. A priori mechanical characterization (e.g., uniaxial bench test) of the target tissues before a surgical procedure is also not usually practical. In this paper, a method for estimating mechanical parameters of soft tissue from sensory data collected during robotic surgical manipulation is presented. The method uses force data collected from a multiaxial force sensor mounted on the robotic manipulator, and tissue deformation data collected from a stereo camera system. The tissue parameters are then estimated using an inverse finite element method. The effects of measurement and modeling uncertainties on the proposed method are analyzed in simulation. The results of experimental evaluation of the method are also presented.
Prediction and simulation errors in parameter estimation for nonlinear systems
Aguirre, Luis A.; Barbosa, Bruno H. G.; Braga, Antônio P.
2010-11-01
This article compares the pros and cons of using prediction error and simulation error to define cost functions for parameter estimation in the context of nonlinear system identification. To avoid being influenced by estimators of the least squares family (e.g. prediction error methods), and in order to be able to solve non-convex optimisation problems (e.g. minimisation of some norm of the free-run simulation error), evolutionary algorithms were used. Simulated examples which include polynomial, rational and neural network models are discussed. Our results, obtained using different model classes, show that in general the use of simulation error is preferable to prediction error. An interesting exception to this rule seems to be the equation error case when the model structure includes the true model. In the case of errors-in-variables, although parameter estimation is biased in both cases, the algorithm based on simulation error is more robust.
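The two cost functions being compared can be illustrated on a toy AR(1) model; this sketch shows only the definitions (one-step prediction error versus free-run simulation error), not the article's evolutionary-algorithm setup, and the model and noise levels are made up.

```python
import random

def one_step_cost(a, y):
    """Mean squared one-step prediction error for the model y[t] = a*y[t-1],
    where each prediction is anchored to the previous measurement."""
    return sum((y[t] - a * y[t - 1]) ** 2 for t in range(1, len(y))) / (len(y) - 1)

def free_run_cost(a, y):
    """Mean squared free-run simulation error: the model is iterated on its own
    output from y[0] and never re-anchored to the measurements."""
    sim, cost = y[0], 0.0
    for t in range(1, len(y)):
        sim = a * sim
        cost += (y[t] - sim) ** 2
    return cost / (len(y) - 1)

random.seed(3)
a_true = 0.9
y, prev = [], 1.0
for _ in range(300):
    prev = a_true * prev + random.gauss(0, 0.01)
    y.append(prev)

# Brute-force grid minimisation of each cost (a stand-in for the article's
# evolutionary search, which handles the non-convex free-run cost in general).
grid = [i / 1000 for i in range(500, 1000)]
a_pred = min(grid, key=lambda a: one_step_cost(a, y))
a_sim = min(grid, key=lambda a: free_run_cost(a, y))
```

On this well-specified toy problem both criteria recover the true parameter; the article's point is that their behaviour diverges under model mismatch and errors-in-variables.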
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Lui, Kenneth W. K.; So, H. C.
2009-12-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.
Reliability estimation for single dichotomous items based on Mokken's IRT model
Meijer, Rob R.; Sijtsma, Klaas; Molenaar, Ivo W.
1995-01-01
Item reliability is of special interest for Mokken’s nonparametric item response theory, and is useful for the evaluation of item quality in nonparametric test construction research. It is also of interest for nonparametric person-fit analysis. Three methods for the estimation of the reliability of
Coefficient Alpha as an Estimate of Test Reliability under Violation of Two Assumptions.
Zimmerman, Donald W.; And Others
1993-01-01
Coefficient alpha was examined through computer simulation as an estimate of test reliability under violation of two assumptions. Coefficient alpha underestimated reliability under violation of the assumption of essential tau-equivalence of subtest scores and overestimated it under violation of the assumption of uncorrelated subtest error scores.…
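Coefficient alpha itself is straightforward to compute from item-score columns. The sketch below uses simulated essentially tau-equivalent items (equal true-score loadings), the case in which alpha is an accurate reliability estimate; the sample size and noise level are illustrative assumptions.

```python
import random

def cronbach_alpha(items):
    """Cronbach's alpha from k item-score lists:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k, n = len(items), len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(items[i][p] for i in range(k)) for p in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Simulated tau-equivalent items: each item = common true score + independent error.
random.seed(4)
true_scores = [random.gauss(0, 1) for _ in range(500)]
items = [[t + random.gauss(0, 0.5) for t in true_scores] for _ in range(4)]
alpha = cronbach_alpha(items)
```

Violating tau-equivalence (unequal loadings) depresses alpha below the true reliability, while correlated errors inflate it, which is the pattern the simulation above in the abstract reports.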
Estimation of Internal Consistency Reliability When Test Parts Vary in Effective Length.
Feldt, Leonard S.; Charter, Richard A.
2003-01-01
Evaluating a test's reliability often requires dividing it into 3 or more unequal parts, which causes violation of the tau equivalence assumption of Cronbach's alpha. This article presents a criterion for abandoning alpha and an approach for computing a more appropriate estimate of reliability, the Gilmer-Feldt coefficient. (Author)
Comparison of Parameter Estimation Methods for Transformer Weibull Lifetime Modelling
ZHOU Dan; LI Chengrong; WANG Zhongdong
2013-01-01
Two-parameter Weibull distribution is the most widely adopted lifetime model for power transformers. An appropriate parameter estimation method is essential to guarantee the accuracy of a derived Weibull lifetime model. Six popular parameter estimation methods (i.e. the maximum likelihood estimation method, two median rank regression methods, one regressing X on Y and the other regressing Y on X, the Kaplan-Meier method, the method based on the cumulative hazard plot, and Li's method) are reviewed and compared in order to find the optimal one for transformer Weibull lifetime modelling. The comparison took several different scenarios into consideration: 10,000 sets of lifetime data, each with a sampling size of 40 to 1,000 and a censoring rate of 90%, were obtained by Monte Carlo simulation for each scenario. The scale and shape parameters of the Weibull distribution estimated by the six methods, as well as their mean values, median values and 90% confidence bands, are obtained. Cross comparison of these results reveals that, among the six methods, the maximum likelihood method is the best one, since it provides the most accurate Weibull parameters, i.e. parameters having the smallest bias in both mean and median values, as well as the shortest 90% confidence band. The maximum likelihood method is therefore recommended over the other methods for transformer Weibull lifetime modelling.
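The maximum likelihood method recommended above has a standard implementation for the two-parameter Weibull: the shape parameter solves a one-dimensional equation and the scale then follows in closed form. A minimal complete-sample sketch follows (censoring, which the comparison above includes, is omitted, and the parameter values are illustrative):

```python
import math
import random

def weibull_mle(x, lo=0.01, hi=50.0, iters=60):
    """Complete-sample Weibull MLE. The shape k solves
    sum(x^k * ln x)/sum(x^k) - 1/k - mean(ln x) = 0 (monotone in k, so bisection
    works), and the scale is then (mean(x^k))^(1/k)."""
    n = len(x)
    logs = [math.log(v) for v in x]
    mean_log = sum(logs) / n
    def g(k):
        xk = [v ** k for v in x]
        return sum(a * b for a, b in zip(xk, logs)) / sum(xk) - 1.0 / k - mean_log
    for _ in range(iters):          # g < 0 for small k, g > 0 for large k
        mid = (lo + hi) / 2
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = (lo + hi) / 2
    scale = (sum(v ** k for v in x) / n) ** (1.0 / k)
    return k, scale

# Hypothetical transformer lifetimes: shape 2 (wear-out), scale 1000 (arbitrary units).
random.seed(5)
shape_true, scale_true = 2.0, 1000.0
sample = [random.weibullvariate(scale_true, shape_true) for _ in range(3000)]
k_hat, lam_hat = weibull_mle(sample)
```

Under the 90% censoring rate studied in the paper, the likelihood must be modified with survival terms for the unfailed units; the complete-sample equations above are the uncensored special case.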
Khan, Muhammad Aamir; Kerkhoff, Hans G.
2013-01-01
System dependability has become important for critical applications in recent years as technology is moving towards smaller dimensions. Achieving high dependability can be supported by reliability estimations during the operational life. In addition this requires a workflow for regularly monitoring
Estimating model parameters in nonautonomous chaotic systems using synchronization
Yang, Xiaoli; Xu, Wei; Sun, Zhongkui
2007-05-01
In this Letter, a technique is presented for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters, in addition to a linear feedback coupling for synchronizing systems; some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are then analytically derived to ensure precise evaluation of the unknown parameters and identical synchronization between the experimental system concerned and its corresponding receiver. Examples are presented employing a parametrically excited 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only quickly tracks the desired parameter values but also rapidly responds to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of the noise strength in simulation.
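The combination described above (linear feedback coupling for synchronization plus an adaptive law for parameter tracking) can be illustrated at sketch level on a scalar nonautonomous system. The drive signal sin(t), the gains k and gamma, and the Lyapunov-motivated update law below are illustrative assumptions, not the Letter's oscillators:

```python
import math

# Master system (parameter a_true unknown to the receiver):
#   dx/dt  = -a*x + sin(t)
# Slave/receiver with linear feedback coupling k*(x - xh):
#   dxh/dt = -a_hat*xh + sin(t) + k*(x - xh)
# Adaptive tracking law (from a Lyapunov argument):
#   da_hat/dt = -gamma*(x - xh)*xh
a_true, k, gamma = 2.0, 5.0, 5.0
dt, steps = 0.001, 60000              # 60 s of simulated time, forward Euler
x, xh, a_hat, t = 1.0, 0.0, 0.0, 0.0
for _ in range(steps):
    e = x - xh                        # synchronization error
    dx = -a_true * x + math.sin(t)
    dxh = -a_hat * xh + math.sin(t) + k * e
    da = -gamma * e * xh
    x, xh, a_hat = x + dt * dx, xh + dt * dxh, a_hat + dt * da
    t += dt
# a_hat now tracks a_true, and the synchronization error e has decayed
```

The sin(t) forcing supplies the persistent excitation that parameter convergence requires; without it, synchronization alone would not pin down a_hat.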
Influence of measurement errors and estimated parameters on combustion diagnosis
Payri, F.; Molina, S.; Martin, J. [CMT-Motores Termicos, Universidad Politecnica de Valencia, Camino de Vera s/n. 46022 Valencia (Spain); Armas, O. [Departamento de Mecanica Aplicada e Ingenieria de proyectos, Universidad de Castilla-La Mancha. Av. Camilo Jose Cela s/n 13071,Ciudad Real (Spain)
2006-02-01
Thermodynamic diagnosis models are valuable tools for the study of Diesel combustion. Inputs required by such models comprise measured mean and instantaneous variables, together with suitable values for adjustable parameters used in different submodels. In the case of measured variables, one may estimate the uncertainty associated with measurement errors; however, the influence of errors in model parameter estimation may not be so easily established on an experimental basis. In this paper, a simulated pressure cycle has been used along with known input parameters, so that any uncertainty in the inputs is avoided. Then, the influence of errors in measured variables and in geometric and heat transmission parameters on the results of a diagnosis combustion model for direct injection diesel engines has been studied. This procedure allowed us to establish the relative importance of these parameters and to set limits on the maximal errors of the model, accounting for both the maximal expected errors in the input parameters and the sensitivity of the model to those errors.
Jonathan R Karr
2015-05-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.
Modeling Parameters of Reliability of Technological Processes of Hydrocarbon Pipeline Transportation
Shalay Viktor
2016-01-01
On the basis of methods of system analysis and parametric reliability theory, mathematical modeling of the operation of oil and gas equipment for reliability monitoring was conducted according to dispatching data. To check the goodness of fit of the empirical distributions, an algorithm and mathematical methods of analysis were worked out for on-line use under changing operating conditions. An analysis of the physical cause-and-effect mechanism between the key factors and the changing parameters of technical systems at oil and gas facilities is made, and the basic types of distribution of the technical parameters are defined. Evaluation of the adequacy of the distribution type for the analyzed parameters is provided by using the Kolmogorov criterion, as the most universal, accurate and adequate test for the distribution of continuous processes in complex multi-component technical systems. Calculation methods are provided for supervision by independent bodies for risk assessment and facility safety.
Confidence interval based parameter estimation--a new SOCR applet and activity.
Christou, Nicolas; Dinov, Ivo D
2011-01-01
Many scientific investigations depend on obtaining data-driven, accurate, robust and computationally tractable parameter estimates. In the face of unavoidable intrinsic variability, there are different algorithmic approaches, prior assumptions and fundamental principles for computing point and interval estimates. Efficient and reliable parameter estimation is critical in making inference about observable experiments, summarizing process characteristics and predicting experimental behaviors. In this manuscript, we demonstrate simulation, construction, validation and interpretation of confidence intervals, under various assumptions, using the interactive web-based tools provided by the Statistics Online Computational Resource (http://www.SOCR.ucla.edu). Specifically, we present confidence interval examples for population means, with known or unknown population standard deviation; population variance; population proportion (exact and approximate); as well as confidence intervals based on bootstrapping or the asymptotic properties of the maximum likelihood estimates. Like all SOCR resources, these confidence interval resources may be openly accessed via an Internet-connected Java-enabled browser. The SOCR confidence interval applet enables the user to empirically explore and investigate the effects of the confidence level, the sample size and the parameter of interest on the corresponding confidence interval. Two applications of the new interval estimation computational library are presented. The first is a simulation of a confidence interval estimating the US unemployment rate, and the second demonstrates the computation of point and interval estimates of hippocampal surface complexity for Alzheimer's disease patients, mild cognitive impairment subjects and asymptomatic controls.
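Two of the normal-based intervals the applet demonstrates are straightforward to reproduce in stdlib Python: the known-sigma interval for a mean and the approximate (Wald) interval for a proportion. The numeric inputs are illustrative; the SOCR applet itself covers many more cases:

```python
import math
from statistics import NormalDist

def mean_ci_known_sigma(xbar, sigma, n, level=0.95):
    """CI for a population mean when the population sigma is known:
    xbar +/- z * sigma / sqrt(n)."""
    z = NormalDist().inv_cdf(0.5 + level / 2)   # e.g. 1.96 for 95%
    half = z * sigma / math.sqrt(n)
    return xbar - half, xbar + half

def proportion_ci(successes, n, level=0.95):
    """Approximate (Wald) CI for a population proportion."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

lo95, hi95 = mean_ci_known_sigma(10.0, 2.0, 100)   # mean interval
plo, phi = proportion_ci(40, 100)                  # proportion interval
```

When sigma is unknown, the z quantile is replaced by a Student-t quantile, which the stdlib does not provide; for small samples a statistics library is the right tool.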
Multi-parameter estimating photometric redshifts with artificial neural networks
Li, Lili; Zhang, Yanxia; Zhao, Yongheng; Yang, Dawei
2006-01-01
We calculate photometric redshifts from the Sloan Digital Sky Survey Data Release 2 Galaxy Sample using artificial neural networks (ANNs). Different input patterns based on various parameters (e.g. magnitude, color index, flux information) are explored and their performances for redshift prediction are compared. With the ANN technique, any parameter may easily be incorporated as input, but our results indicate that using reddening magnitudes produces photometric redshift accuracies often better than the Petrosian magnitude or model magnitude. Similarly, the model magnitude is superior to the Petrosian magnitude. In addition, ANNs show better performance when more effective parameters are included in the training set. Finally, the method is tested on a sample of 79,346 galaxies from the SDSS DR2. When using 19 parameters based on the reddening magnitude, the rms error in redshift estimation is sigma(z)=0.020184. The ANN is a highly competitive tool when compared with traditional template-fitting methods where a...
Cosmological parameter estimation using Particle Swarm Optimization (PSO)
Prasad, Jayanti
2011-01-01
Obtaining the set of cosmological parameters consistent with observational data is an important exercise in current cosmological research. It involves finding the global maximum of the likelihood function in the multi-dimensional parameter space. Currently, sampling-based methods, which are in general stochastic in nature, like Markov Chain Monte Carlo (MCMC), are commonly used for parameter estimation. The beauty of stochastic methods is that the computational cost grows at most linearly, rather than exponentially (as in grid-based approaches), with the dimensionality of the search space. MCMC methods sample the full joint probability distribution (posterior), from which one- and two-dimensional probability distributions, best-fit (average) values of parameters, and error bars can be computed. In the present work we demonstrate the application of another stochastic method, named Particle Swarm Optimization (PSO), that is widely used in the field of engineering and artificial intelligence, for cosmo...
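A minimal PSO loop of the kind used for such likelihood maximization can be sketched in stdlib Python. Here a toy two-parameter quadratic surface stands in for a negative log-likelihood; the swarm settings and the surface's optimum are illustrative assumptions:

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Particle Swarm Optimization: minimize f over box-bounded parameters.
    Each particle is pulled toward its personal best and the global best."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# toy "negative log-likelihood": quadratic bowl with optimum at (0.3, 70)
best, val = pso(lambda p: (p[0] - 0.3) ** 2 + ((p[1] - 70.0) / 10.0) ** 2,
                [(0.0, 1.0), (50.0, 90.0)])
```

Unlike MCMC, this returns only a point estimate; error bars would still require sampling or a local curvature analysis around `best`.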
Adaptive Estimation of Intravascular Shear Rate Based on Parameter Optimization
Nitta, Naotaka; Takeda, Naoto
2008-05-01
The relationships between the intravascular wall shear stress, controlled by flow dynamics, and the progress of arteriosclerosis plaque have been clarified by various studies. Since the shear stress is determined by the viscosity coefficient and shear rate, both factors must be estimated accurately. In this paper, an adaptive method for improving the accuracy of quantitative shear rate estimation was investigated. First, the parameter dependence of the estimated shear rate was investigated in terms of the differential window width and the number of averaged velocity profiles based on simulation and experimental data, and then the shear rate calculation was optimized. The optimized result revealed that the proposed adaptive method of shear rate estimation was effective for improving the accuracy of shear rate calculation.
Consistent Parameter and Transfer Function Estimation using Context Free Grammars
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2017-04-01
This contribution presents a method for the inference of transfer functions for rainfall-runoff models. Here, transfer functions are defined as parametrized (functional) relationships between a set of spatial predictors (e.g. elevation, slope or soil texture) and model parameters. They are ultimately used for the estimation of consistent, spatially distributed model parameters from a limited number of lumped global parameters. Additionally, they provide a straightforward method for parameter extrapolation from one set of basins to another and can even be used to derive parameterizations for multi-scale models [see: Samaniego et al., 2010]. Yet actual knowledge of the transfer functions is currently often implicitly assumed; in fact, in most cases these hypothesized transfer functions can rarely be measured and often remain unknown. Therefore, this contribution presents a general method for the concurrent estimation of the structure of transfer functions and their respective (global) parameters. Note that, as a consequence, an estimation of the distributed parameters of the rainfall-runoff model is also undertaken. The method combines two steps to achieve this: the first generates different possible transfer functions; the second then estimates the respective global transfer function parameters. The structural estimation of the transfer functions is based on the context-free grammar concept. Chomsky first introduced context-free grammars in linguistics [Chomsky, 1956]. Since then, they have been widely applied in computer science but, to the knowledge of the authors, they have so far not been used in hydrology. Therefore, the contribution gives an introduction to context-free grammars and shows how they can be constructed and used for the structural inference of transfer functions. This is enabled by new methods from evolutionary computation, such as grammatical evolution [O'Neill, 2001], which make it possible to exploit the constructed grammar as a
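The context-free grammar idea can be made concrete: a grammar whose random derivations are candidate transfer-function expressions over spatial predictors. The toy grammar, the two predictors, and the depth limit below are illustrative assumptions, not the authors' grammar:

```python
import random

# Toy context-free grammar for candidate transfer functions f(elev, slope).
# Non-terminals are keys; each key maps to a list of alternative rules.
GRAMMAR = {
    "<expr>":  [["<expr>", "<op>", "<expr>"], ["<var>"], ["<const>"]],
    "<op>":    [["+"], ["-"], ["*"]],
    "<var>":   [["elev"], ["slope"]],
    "<const>": [["0.5"], ["2.0"]],
}

def derive(symbol, rng, depth=0, max_depth=4):
    """Randomly expand a grammar symbol into an expression string.
    The depth limit forces termination by dropping the recursive rule."""
    if symbol not in GRAMMAR:          # terminal: emit as-is
        return symbol
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        rules = [r for r in rules if "<expr>" not in r]
    rule = rng.choice(rules)
    return " ".join(derive(s, rng, depth + 1, max_depth) for s in rule)

rng = random.Random(3)
candidates = [derive("<expr>", rng) for _ in range(5)]
```

In grammatical evolution, the rule choices would be encoded in a genome and evolved against model performance rather than drawn uniformly at random as here.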
Estimation of common cause failure parameters with periodic tests
Barros, Anne [Institut Charles Delaunay - Universite de technologie de Troyes - FRE CNRS 2848, 12, rue Marie Curie - BP 2060 -10010 Troyes cedex (France)], E-mail: anne.barros@utt.fr; Grall, Antoine [Institut Charles Delaunay - Universite de technologie de Troyes - FRE CNRS 2848, 12, rue Marie Curie - BP 2060 -10010 Troyes cedex (France); Vasseur, Dominique [Electricite de France, EDF R and D - Industrial Risk Management Department 1, av. du General de Gaulle- 92141 Clamart (France)
2009-04-15
In the specific case of safety systems, CCF parameter estimators for standby components depend on the periodic test schemes. Classically, the testing schemes are either staggered (alternation of tests on redundant components) or non-staggered (all components are tested at the same time). In reality, periodic test schemes performed on safety components are more complex and combine staggered tests, when the plant is in operation, with non-staggered tests during maintenance and refueling outage periods of the installation. Moreover, the CCF parameter estimators described in the US literature are derived in a way consistent with US Technical Specifications constraints that do not apply to the French Nuclear Power Plants for staggered tests on standby components. Given these issues, the evaluation of CCF parameters from the operating feedback data available within EDF implies the development of methodologies that integrate the specificities of the testing schemes. This paper formally proposes a solution for the estimation of CCF parameters given two distinct difficulties, related respectively to a mixed testing scheme and to consistency with EDF's specific practices inducing systematic non-simultaneity of the observed failures in a staggered testing scheme.
Parameter estimation method for blurred cell images from fluorescence microscope
He, Fuyun; Zhang, Zhisheng; Luo, Xiaoshu; Zhao, Shulin
2016-10-01
Microscopic cell image analysis is indispensable to cell biology. Images of cells can easily degrade due to optical diffraction or focus shift, which results in a low signal-to-noise ratio (SNR) and poor image quality, hence affecting the accuracy of cell analysis and identification. For a quantitative analysis of cell images, restoring blurred images to improve the SNR is the first step. A parameter estimation method for defocused microscopic cell images based on the power-law properties of the power spectrum of cell images is proposed. The circular Radon transform (CRT) is used to identify the zero-mode of the power spectrum. The parameter of the CRT curve is initially estimated by an improved differential evolution algorithm; the parameters are then optimized through the gradient descent method. Synthetic experiments confirmed that the proposed method effectively increases the peak SNR (PSNR) of the recovered images with high accuracy. Furthermore, experimental results involving actual microscopic cell images verified the superiority of the proposed parameter estimation method for blurred microscopic cell images over other methods, in terms of qualitative visual sense as well as quantitative gradient and PSNR.
Anisotropic parameter estimation using velocity variation with offset analysis
Herawati, I.; Saladin, M.; Pranowo, W.; Winardhie, S.; Priyono, A. [Faculty of Mining and Petroleum Engineering, Institut Teknologi Bandung, Jalan Ganesa 10, Bandung, 40132 (Indonesia)
2013-09-09
Seismic anisotropy is defined as velocity dependence upon angle or offset. Knowledge of the anisotropy effect on seismic data is important in amplitude analysis, the stacking process and time-to-depth conversion. Due to this anisotropic effect, a reflector cannot be flattened using a single velocity based on the hyperbolic moveout equation. Therefore, after normal moveout correction, there will still be residual moveout that relates to the velocity information. This research aims to obtain the anisotropic parameters, ε and δ, using two proposed methods. The first method is called velocity variation with offset (VVO), which is based on a simplification of the weak-anisotropy equation. In the VVO method, the velocity at each offset is calculated and plotted to obtain the vertical velocity and the parameter δ. The second method is an inversion method using a linear approach, in which the vertical velocity, δ, and ε are estimated simultaneously. Both methods are tested on synthetic models using ray-tracing forward modelling. Results show that the δ value can be estimated appropriately using both methods, while the inversion-based method gives a better estimate of the ε value. This study shows that the estimation of anisotropic parameters relies on the accuracy of the normal moveout velocity, the residual moveout and the offset-to-angle transformation.
Determination of reliability parameters of radioelectronic devices determined by thermal modes
A. V. Nikitchuk
2014-06-01
Statement of the problem. Reliability is an important (and sometimes crucial) functional characteristic of radioelectronic devices (RED), so it is necessary to analyze the impact on them of destabilizing external factors: mechanical, temperature, humidity, and ionizing radiation. Structural-design elements of RED. SCM are the main objects for which the temperatures of the EES and the reliability performance must first be determined. Determination of the temperature of the EES of cells and microassemblies. The basic mathematical models are presented for determining the temperatures of the electronic structure elements of cells and microassemblies. Indicators of EES reliability as a function of their temperature. The operational failure rate of most groups of RED is calculated by mathematical models. These indicators include the basic failure rate, the mode factor, and the coefficients that take into account changes in the operational failure rate depending on various factors. Software for determining reliability parameters. The software product allows switching from "manual" reliability calculation of RED to fully automated modeling of components. The program is applicable for calculating reliability and for finding more "sustainable" elements to increase the probability of failure-free operation. Conclusions. The primary tasks performed in the work are listed.
METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS
Andrei V. Panteleev
2017-01-01
The article considers the use of metaheuristic methods of constrained global optimization: "Big Bang - Big Crunch", "Fireworks Algorithm", and "Grenade Explosion Method" for parameter estimation of dynamic systems described by algebraic-differential equations. Parameter estimation is based on observations of the mathematical model's behavior: parameter values are derived by minimizing a criterion that describes the total squared error between the state vector coordinates and the values observed at different moments of time. A parallelepiped-type restriction is imposed on the parameter values. The metaheuristic methods of constrained global optimization used for solving the problems do not guarantee the result, but allow a solution of rather good quality to be obtained in an acceptable amount of time. The algorithm for using the metaheuristic methods is given. Alongside explicit methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two examples of the parameter estimation problem are given, differing in their mathematical model. In the first example, a linear mathematical model describes the change of chemical reaction parameters, and in the second, a nonlinear mathematical model describes predator-prey dynamics, which characterize the changes in both populations. For each of the examples there are calculation results for all three optimization methods, together with some recommendations on how to choose the methods' parameters. The obtained numerical results demonstrate the efficiency of the proposed approach. The deduced parameter approximations differ slightly from the best known solutions, which were obtained differently. To refine the results one should apply hybrid schemes that combine classical methods of optimization of zero, first and second orders and
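The estimation setup described above (simulate the model, accumulate the squared error of the state against observations, minimize over a box-constrained parameter set with a stochastic search) can be sketched minimally in stdlib Python. A one-parameter linear model and plain random search stand in for the article's models and metaheuristics; all numbers are illustrative:

```python
import math
import random

def simulate(a, x0, dt, steps):
    """Forward-Euler integration of the toy model dx/dt = -a*x."""
    xs, x = [x0], x0
    for _ in range(steps):
        x += dt * (-a * x)
        xs.append(x)
    return xs

def sq_error(a, observed, x0, dt):
    """Total squared error between simulated and observed state values."""
    sim = simulate(a, x0, dt, len(observed) - 1)
    return sum((s - o) ** 2 for s, o in zip(sim, observed))

# synthetic observations generated from the true parameter a = 1.7
dt, x0 = 0.01, 1.0
observed = [x0 * math.exp(-1.7 * k * dt) for k in range(101)]

# stochastic search inside the box constraint a in [0, 5]
rng = random.Random(0)
best_a, best_err = None, float("inf")
for _ in range(2000):
    a = rng.uniform(0.0, 5.0)
    err = sq_error(a, observed, x0, dt)
    if err < best_err:
        best_a, best_err = a, err
```

A real metaheuristic (Fireworks, Big Bang - Big Crunch, etc.) replaces the uniform sampling with a structured explore/exploit schedule, but the criterion and box constraint are the same.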
Mohanty, Sankhya; Hattel, Jesper Henri
2015-01-01
...to generate optimized cellular scanning strategies and processing parameters, with an objective of reducing thermal asymmetries and mechanical deformations. The optimized scanning strategies are used for selective laser melting of the standard samples, and experimental and numerical results are compared. ... gradients that occur during the process. While process monitoring and control of selective laser melting is an active area of research, establishing the reliability and robustness of the process still remains a challenge. In this paper, a methodology for generating reliable, optimized scanning paths...
Markov Chain Monte Carlo (MCMC) methods for parameter estimation of a novel hybrid redundant robot
Wang Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machine, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland); Wu Huapeng; Handroos, Heikki [Laboratory of Intelligent Machine, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)
2011-10-15
This paper presents a statistical method for the calibration of a redundantly actuated hybrid serial-parallel robot, the IWR (Intersector Welding Robot). The robot under study will be used to carry out welding, machining, and remote handling for the assembly of the vacuum vessel of the International Thermonuclear Experimental Reactor (ITER). The robot has ten degrees of freedom (DOF), of which six are contributed by the parallel mechanism and the rest by the serial mechanism. In this paper, a kinematic error model involving 54 unknown geometrical error parameters is developed for the proposed robot. Based on this error model, the mean values of the unknown parameters are statistically analyzed and estimated by means of a Markov Chain Monte Carlo (MCMC) approach. A computer simulation is conducted by introducing random geometric errors and measurement poses that represent the corresponding real physical behaviors. The simulation results for the marginal posterior distributions of the estimated model parameters indicate that our method is reliable and robust.
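The MCMC step can be illustrated with a minimal random-walk Metropolis sampler. The toy Gaussian-mean posterior, proposal width, and burn-in length below are illustrative assumptions, far simpler than the robot's 54-parameter error model:

```python
import math
import random

def log_post(mu, data, sigma=1.0):
    """Unnormalized log-posterior: flat prior, Gaussian likelihood
    with known sigma."""
    return -sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2)

rng = random.Random(42)
data = [rng.gauss(5.0, 1.0) for _ in range(200)]   # synthetic "measurements"

mu, lp = 0.0, log_post(0.0, data)                  # deliberately bad start
chain = []
for step in range(5000):
    prop = mu + rng.gauss(0.0, 0.3)                # random-walk proposal
    lp_prop = log_post(prop, data)
    if math.log(rng.random()) < lp_prop - lp:      # Metropolis acceptance
        mu, lp = prop, lp_prop
    if step >= 1000:                               # discard burn-in
        chain.append(mu)

posterior_mean = sum(chain) / len(chain)           # marginal posterior mean
```

The paper's use case is the same pattern in 54 dimensions: the chain's marginal histograms are exactly the "marginal posterior distributions of the estimated model parameters" the abstract refers to.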
Model calibration and parameter estimation for environmental and water resource systems
Sun, Ne-Zheng
2015-01-01
This three-part book provides a comprehensive and systematic introduction to the development of useful models for complex systems. Part 1 covers the classical inverse problem for parameter estimation in both deterministic and statistical frameworks, Part 2 is dedicated to system identification, hyperparameter estimation, and model dimension reduction, and Part 3 considers how to collect data and construct reliable models for prediction and decision-making. For the first time, topics such as multiscale inversion, stochastic field parameterization, level set method, machine learning, global sensitivity analysis, data assimilation, model uncertainty quantification, robust design, and goal-oriented modeling, are systematically described and summarized in a single book from the perspective of model inversion, and elucidated with numerical examples from environmental and water resources modeling. Readers of this book will not only learn basic concepts and methods for simple parameter estimation, but also get famili...
A. Elsonbaty
2014-10-01
In this article, the adaptive chaos synchronization technique is implemented by an electronic circuit and applied to the hyperchaotic system proposed by Chen et al. We consider the more realistic and practical case where all the parameters of the master system are unknown. We propose and implement an electronic circuit that performs the estimation of the unknown parameters and the updating of the parameters of the slave system automatically, and hence achieves synchronization. To the best of our knowledge, this is the first attempt to implement a circuit that estimates the values of the unknown parameters of a chaotic system and achieves synchronization. The proposed circuit has a variety of suitable real applications related to chaos encryption and cryptography. The outputs of the implemented circuits and numerical simulation results are shown to demonstrate the performance of the synchronized system and the proposed circuit.
Reliability/Cost Evaluation on Power System connected with Wind Power for the Reserve Estimation
Lee, Go-Eun; Cha, Seung-Tae; Shin, Je-Seok
2012-01-01
Wind power is ideally a renewable energy with no fuel cost, but it carries a risk of reducing the reliability of the whole system because of the uncertainty of its output. If the reserve of the system is increased, the reliability of the system may be improved; however, the cost would be increased. Therefore, the reserve needs to be estimated considering the trade-off between reliability and economic aspects. This paper suggests a methodology to estimate the appropriate reserve when wind power is connected to the power system. As a case study, when wind power is connected to the power system of Korea, the effects...
Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.
Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
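The core of any weighted least-squares scheme of this kind is a fit in which less certain observations receive less weight; IUWLS additionally adjusts those weights iteratively from the pumping uncertainty, a loop not reproduced here. A minimal straight-line sketch in stdlib Python, with illustrative data:

```python
def wls(x, y, w):
    """Weighted least-squares fit of y ~ b0 + b1*x.
    Larger weight w[i] means a more trusted observation."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b1 = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
          / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
    b0 = ybar - b1 * xbar
    return b0, b1

# illustrative data from y = 1 + 2x, with the last observation corrupted;
# giving it a tiny weight (high assumed uncertainty) keeps the fit unbiased
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0, 3.0, 5.0, 7.0, 19.0]
w = [1.0, 1.0, 1.0, 1.0, 0.01]
b0, b1 = wls(x, y, w)        # slope stays close to 2
```

With unit weights (ordinary least squares) the corrupted point drags the slope to 4.0; the downweighted fit illustrates how uncertainty-aware weights suppress that bias, which is the mechanism IUWLS exploits.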
Terrain mechanical parameters online estimation for lunar rovers
Liu, Bing; Cui, Pingyuan; Ju, Hehua
2007-11-01
This paper presents a new method for estimating terrain mechanical parameters for a wheeled lunar rover. First, after deducing detailed distribution expressions for the normal stress and shear stress at the wheel-terrain interface, the force/torque balance equations of the drive wheel for computing terrain mechanical parameters are derived by analyzing a rigid drive wheel of a lunar rover moving at uniform speed in deformable terrain. A two-point Gauss-Legendre numerical integration is then used to simplify the balance equations; after simplification and rearrangement, the solution model is derived, composed of three nonlinear equations. Finally, Newton's iterative method and the steepest descent method are combined to solve the nonlinear equations, and the outputs of on-board virtual sensors are used to compute the key terrain mechanical parameters, i.e. the internal friction angle and the pressure-sinkage parameters. Simulation results show correctness under high noise disturbance and effectiveness with low computational complexity, which allows a lunar rover to estimate terrain mechanical parameters online.
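The final solution step, Newton iteration on a small nonlinear system, can be sketched for a 2x2 case with an analytic Jacobian. The example system below is illustrative, not the paper's three wheel-terrain equations:

```python
def newton2(f, jac, x0, y0, iters=50, tol=1e-12):
    """Newton's method for a 2x2 nonlinear system f(x, y) = (0, 0).
    jac returns the Jacobian entries (a, b, c, d) = [[a, b], [c, d]]."""
    x, y = x0, y0
    for _ in range(iters):
        f1, f2 = f(x, y)
        a, b, c, d = jac(x, y)
        det = a * d - b * c
        # solve J * [dx, dy]^T = -[f1, f2]^T by Cramer's rule
        dx = (-f1 * d + f2 * b) / det
        dy = (-a * f2 + c * f1) / det
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y

# illustrative system: x^2 + y^2 = 4 and x*y = 1
f = lambda x, y: (x * x + y * y - 4.0, x * y - 1.0)
jac = lambda x, y: (2 * x, 2 * y, y, x)
x, y = newton2(f, jac, 2.0, 0.5)
```

Newton converges quadratically near a root but needs a good starting point; combining it with steepest descent, as the paper does, is the standard way to supply that global robustness.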
Parameter estimation for groundwater models under uncertain irrigation data
Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the generalized least-squares method, with the weights of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The results from the OLS method show the presence of statistically significant bias in the estimated parameters and predictions. By accounting for irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
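The core IUWLS idea, down-weighting observations whose source/sink inputs are more uncertain inside an iteratively reweighted least-squares loop, can be sketched for a plain linear model. The function `iuwls` and the simple setting below are illustrative assumptions, not the authors' groundwater implementation:

```python
import numpy as np

def iuwls(X, y, input_var, n_iter=10):
    """Input-uncertainty weighted least squares (illustrative sketch).

    Minimises sum_i w_i * (y_i - X[i] @ theta)^2, where the weight of
    each observation shrinks as the variance contributed by its
    uncertain inputs (input_var[i]) grows. Weights are re-evaluated each
    iteration, in the spirit of generalized least squares.
    """
    w = np.ones(len(y))
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        W = np.diag(w)
        theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        resid = y - X @ theta
        sigma2 = np.mean(resid ** 2)       # crude base variance level
        w = 1.0 / (sigma2 + input_var)     # down-weight uncertain inputs
    return theta
```

Observations with large `input_var` (e.g. poorly metered pumping wells) then contribute little to the fit, which is the mechanism that suppresses the bias.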
Observable Priors: Limiting Biases in Estimated Parameters for Incomplete Orbits
Kosmo, Kelly; Martinez, Gregory; Hees, Aurelien; Witzel, Gunther; Ghez, Andrea M.; Do, Tuan; Sitarski, Breann; Chu, Devin; Dehghanfar, Arezu
2017-01-01
Over twenty years of monitoring stellar orbits at the Galactic center has provided an unprecedented opportunity to study the physics and astrophysics of the supermassive black hole (SMBH) at the center of the Milky Way Galaxy. In order to constrain the mass of and distance to the black hole, and to evaluate its gravitational influence on orbiting bodies, we use Bayesian statistics to infer black hole and stellar orbital parameters from astrometric and radial velocity measurements of stars orbiting the central SMBH. Unfortunately, most of the short period stars in the Galactic center have periods much longer than our twenty year time baseline of observations, resulting in incomplete orbital phase coverage--potentially biasing fitted parameters. Using the Bayesian statistical framework, we evaluate biases in the black hole and orbital parameters of stars with varying phase coverage, using various prior models to fit the data. We present evidence that incomplete phase coverage of an orbit causes prior assumptions to bias statistical quantities, and propose a solution to reduce these biases for orbits with low phase coverage. The explored solution assumes uniformity in the observables rather than in the inferred model parameters, as is the current standard method of orbit fitting. Of the cases tested, priors that assume uniform astrometric and radial velocity observables reduce the biases in the estimated parameters. The proposed method will not only improve orbital estimates of stars orbiting the central SMBH, but can also be extended to other orbiting bodies with low phase coverage such as visual binaries and exoplanets.
Bayesian adaptive Markov chain Monte Carlo estimation of genetic parameters.
Mathew, B; Bauer, A M; Koistinen, P; Reetz, T C; Léon, J; Sillanpää, M J
2012-10-01
Accurate and fast estimation of genetic parameters that underlie quantitative traits using mixed linear models with additive and dominance effects is of great importance in both natural and breeding populations. Here, we propose a new fast adaptive Markov chain Monte Carlo (MCMC) sampling algorithm for the estimation of genetic parameters in the linear mixed model with several random effects. In the learning phase of our algorithm, we use the hybrid Gibbs sampler to learn the covariance structure of the variance components. In the second phase of the algorithm, we use this covariance structure to formulate an effective proposal distribution for a Metropolis-Hastings algorithm, which uses a likelihood function in which the random effects have been integrated out. Compared with the hybrid Gibbs sampler, the new algorithm had better mixing properties and was approximately twice as fast to run. Our new algorithm was able to detect different modes in the posterior distribution. In addition, the posterior mode estimates from the adaptive MCMC method were close to the REML (residual maximum likelihood) estimates. Moreover, our exponential prior for inverse variance components was vague and enabled the estimated mode of the posterior variance to be practically zero, which was in agreement with the support from the likelihood (in the case of no dominance). The method performance is illustrated using simulated data sets with replicates and field data in barley.
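The two-phase structure described above, a learning phase that estimates the chain's covariance followed by a Metropolis-Hastings phase that proposes with it, can be sketched generically. This is a textbook adaptive-Metropolis sketch, not the authors' genetics code; the 2.38^2/d proposal scaling is the standard adaptive-Metropolis rule:

```python
import numpy as np

def adaptive_mh(log_post, theta0, n_learn=2000, n_sample=5000, seed=0):
    """Two-phase adaptive Metropolis-Hastings (illustrative sketch).

    Phase 1 uses an isotropic proposal and records the chain to learn an
    empirical covariance; phase 2 proposes from a Gaussian shaped by
    that covariance, scaled by 2.38^2 / d.
    """
    rng = np.random.default_rng(seed)
    d = len(theta0)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    learn = []
    for _ in range(n_learn):                       # learning phase
        prop = theta + 0.5 * rng.normal(size=d)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        learn.append(theta.copy())
    cov = np.cov(np.array(learn).T) + 1e-6 * np.eye(d)
    L = np.linalg.cholesky((2.38 ** 2 / d) * cov)  # shaped proposal
    out = []
    for _ in range(n_sample):                      # sampling phase
        prop = theta + L @ rng.normal(size=d)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        out.append(theta.copy())
    return np.array(out)
```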
Parameter Estimation as a Problem in Statistical Thermodynamics
Earle, Keith A.; Schneider, David J.
2011-01-01
In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work[1, 2]. This geometry has implications for error estimation and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm. PMID:21927520
Maximum-likelihood estimation of circle parameters via convolution.
Zelniker, Emanuel E; Clarkson, I Vaughan L
2006-04-01
The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to treat these estimates as preliminary estimates into various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images.
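The Delogne-Kåsa estimator mentioned above is attractive precisely because it reduces circle fitting to linear least squares; a minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Delogne-Kasa circle fit (illustrative sketch).

    Rewriting x^2 + y^2 = 2*a*x + 2*b*y + c turns circle fitting into a
    linear least-squares problem for the centre (a, b), with
    c = r^2 - a^2 - b^2 giving the radius.
    """
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones(len(x))])
    sol, *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
    a, b, c = sol
    r = np.sqrt(c + a ** 2 + b ** 2)
    return a, b, r
```

In practice such a linear fit serves as the preliminary estimate that is then refined toward the MLE, as the abstract describes.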
PYMORPH: automated galaxy structural parameter estimation using PYTHON
Vikram, Vinu; Wadadekar, Yogesh; Kembhavi, Ajit K.; Vijayagovindan, G. V.
2010-12-01
We present a new software pipeline - PYMORPH - for automated estimation of structural parameters of galaxies. Both parametric fits through a two-dimensional bulge-disc decomposition and structural parameter measurements like concentration, asymmetry, etc. are supported. The pipeline is designed to be easy to use yet flexible; individual software modules can be replaced with ease. A find-and-fit mode is available so that all galaxies in an image can be measured with a simple command. A parallel version of the PYMORPH pipeline runs on computer clusters, and a virtual-observatory-compatible, web-enabled interface is under development.
Estimation of the parameters of ETAS models by Simulated Annealing
Lombardi, Anna Maria
2015-02-01
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.
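Simulated Annealing itself is generic; a minimal continuous-minimisation sketch (not the ETAS-specific likelihood of the paper, whose objective and cooling schedule differ):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995,
                        n_iter=5000, seed=0):
    """Generic simulated annealing for continuous minimisation (sketch).

    Uphill moves are accepted with probability exp(-delta/T); the
    temperature T is cooled geometrically so the search gradually
    narrows toward a global minimum.
    """
    rng = random.Random(seed)
    x, fx, t = list(x0), f(x0), t0
    best, fbest = list(x), fx
    for _ in range(n_iter):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest
```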
Estimation of drying parameters in rotary dryers using differential evolution
Lobato, F S; Jr, V Steffen; Barrozo, M A S; Arruda, E B, E-mail: vsteffen@mecanica.ufu.br, E-mail: masbarrozo@ufu.br
2008-11-01
Inverse problems arise from the necessity of obtaining parameters of theoretical models to simulate the behavior of the system for different operating conditions. Several heuristics that mimic different phenomena found in nature have been proposed for the solution of this kind of problem. In this work, the Differential Evolution Technique is used for the estimation of drying parameters in realistic rotary dryers, which is formulated as an optimization problem by using experimental data. Test case results demonstrate both the feasibility and the effectiveness of the proposed methodology.
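The Differential Evolution heuristic referenced above follows a simple mutate-crossover-select loop; a DE/rand/1/bin sketch on a generic objective (the drying-model objective of the paper is not reproduced here):

```python
import random

def differential_evolution(f, bounds, pop_size=30, mut=0.7, cross=0.9,
                           n_gen=200, seed=0):
    """DE/rand/1/bin minimiser (illustrative sketch).

    Each trial vector is a mutant a + mut*(b - c) built from three
    distinct population members, binomially crossed with its target
    vector, and kept only if it does not worsen the objective.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(ind) for ind in pop]
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)         # force one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < cross or j == jrand:
                    v = pop[a][j] + mut * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clamp to bounds
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

For an inverse problem, `f` would be the misfit between simulated and experimental drying data.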
J-A Hysteresis Model Parameters Estimation using GA
Bogomir Zidaric
2005-01-01
Full Text Available This paper presents the Jiles and Atherton (J-A hysteresis model parameter estimation for soft magnetic composite (SMC material. The calculation of Jiles and Atherton hysteresis model parameters is based on experimental data and genetic algorithms (GA. Genetic algorithms operate in a given area of possible solutions. Finding the best solution of a problem in wide area of possible solutions is uncertain. A new approach in use of genetic algorithms is proposed to overcome this uncertainty. The basis of this approach is in genetic algorithm built in another genetic algorithm.
Parameter estimation in X-ray astronomy using maximum likelihood
Wachter, K.; Leach, R.; Kellogg, E.
1979-01-01
Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores starting from Poisson probabilities are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternatives called minimum chi-squared because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
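The Fisher efficient-score iteration for Poisson data can be sketched on a simple log-linear model; this is a generic GLM-style illustration, not the nonlinear X-ray spectral code of the paper:

```python
import numpy as np

def poisson_fisher_scoring(X, y, n_iter=25):
    """Fisher scoring for a log-linear Poisson model mu = exp(X @ beta).

    Iterates beta += I^{-1} U with score U = X.T (y - mu) and Fisher
    information I = X.T diag(mu) X, the same score/information machinery
    used for Poisson maximum likelihood in spectral fitting.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        U = X.T @ (y - mu)
        I = X.T @ (mu[:, None] * X)
        beta = beta + np.linalg.solve(I, U)
    return beta
```

The inverse of the final Fisher information also supplies the approximate confidence regions the abstract refers to.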
An adaptive neuro fuzzy model for estimating the reliability of component-based software systems
Kirti Tyagi
2014-01-01
Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable, and a number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain fuzzy inference system (FIS) for different data sets.
Ait-El-Fquih, Boujemaa; El Gharamti, Mohamad; Hoteit, Ibrahim
2016-08-01
Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem from which we derive a new dual-type EnKF, the dual EnKF-OSA. Compared with the standard dual EnKF, it imposes a new update step to the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKF-OSA, and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25% more accurate state and parameter estimations than the joint and dual approaches.
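For orientation, a single stochastic analysis step of the baseline joint EnKF (the scheme the paper improves upon) can be sketched for one scalar observation. This is a textbook update, not the dual EnKF-OSA of the paper:

```python
import numpy as np

def joint_enkf_step(ens, obs, obs_std, h_index, rng):
    """One stochastic analysis step of a joint (state-parameter) EnKF.

    `ens` is (n_ens, dim): each row stacks state and parameters into one
    augmented vector. A scalar observation of component `h_index` is
    assimilated with perturbed observations.
    """
    n_ens = ens.shape[0]
    Hx = ens[:, h_index]                       # predicted observations
    A = ens - ens.mean(axis=0)                 # ensemble anomalies
    HA = Hx - Hx.mean()
    P_xy = A.T @ HA / (n_ens - 1)              # state-obs cross-covariance
    P_yy = HA @ HA / (n_ens - 1) + obs_std ** 2
    K = P_xy / P_yy                            # Kalman gain
    perturbed = obs + obs_std * rng.normal(size=n_ens)
    return ens + np.outer(perturbed - Hx, K)
```

Because the parameters sit in the same augmented vector as the state, their correlation with the observed component updates them automatically.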
Integration based profile likelihood calculation for PDE constrained parameter estimation problems
Boiger, R.; Hasenauer, J.; Hroß, S.; Kaltenbacher, B.
2016-12-01
Partial differential equation (PDE) models are widely used in engineering and natural sciences to describe spatio-temporal processes. The parameters of the considered processes are often unknown and have to be estimated from experimental data. Due to partial observations and measurement noise, these parameter estimates are subject to uncertainty. This uncertainty can be assessed using profile likelihoods, a reliable but computationally intensive approach. In this paper, we present the integration based approach for the profile likelihood calculation developed by Chen and Jennrich (2002 J. Comput. Graph. Stat. 11 714-32) and adapt it to inverse problems with PDE constraints. While existing methods for profile likelihood calculation in parameter estimation problems with PDE constraints rely on repeated optimization, the proposed approach exploits a dynamical system evolving along the likelihood profile. We derive the dynamical system for the unreduced estimation problem, prove convergence and study the properties of the integration based approach for the PDE case. To evaluate the proposed method, we compare it with state-of-the-art algorithms for a simple reaction-diffusion model for a cellular patterning process. We observe a good accuracy of the method as well as a significant speed up as compared to established methods. Integration based profile calculation facilitates rigorous uncertainty analysis for computationally demanding parameter estimation problems with PDE constraints.
Propagation channel characterization, parameter estimation, and modeling for wireless communications
Yin, Xuefeng
2016-01-01
Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...
CosmoSIS: A System for MC Parameter Estimation
Zuntz, Joe [Manchester U.]; Paterno, Marc [Fermilab]; Jennings, Elise [Chicago U., EFI]; Rudd, Douglas [U. Chicago]; Manzotti, Alessandro [Chicago U., Astron. Astrophys. Ctr.]; Dodelson, Scott [Chicago U., Astron. Astrophys. Ctr.]; Bridle, Sarah [Manchester U.]; Sehrish, Saba [Fermilab]; Kowalkowski, James [Fermilab]
2015-01-01
Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including camb, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis.
Bayesian parameter estimation for chiral effective field theory
Wesolowski, Sarah; Furnstahl, Richard; Phillips, Daniel; Klco, Natalie
2016-09-01
The low-energy constants (LECs) of a chiral effective field theory (EFT) interaction in the two-body sector are fit to observable data using a Bayesian parameter estimation framework. By using Bayesian prior probability distributions (pdfs), we quantify relevant physical expectations such as LEC naturalness and include them in the parameter estimation procedure. The final result is a posterior pdf for the LECs, which can be used to propagate uncertainty resulting from the fit to data to the final observable predictions. The posterior pdf also allows an empirical test of operator redundancy and other features of the potential. We compare results of our framework with other fitting procedures, interpreting the underlying assumptions in Bayesian probabilistic language. We also compare results from fitting all partial waves of the interaction simultaneously to cross section data with those from fitting to extracted phase shifts, appropriately accounting for correlations in the data. Supported in part by the NSF and DOE.
Real-Time Parameter Estimation Using Output Error
Grauer, Jared A.
2014-01-01
Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
A Bayesian framework for parameter estimation in dynamical models.
Flávio Codeço Coelho
Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
Probabilistic estimation of the constitutive parameters of polymers
Siviour C.R.
2012-08-01
The Mulliken-Boyce constitutive model predicts the dynamic response of crystalline polymers as a function of strain rate and temperature. This paper describes the Mulliken-Boyce model-based estimation of the constitutive parameters in a Bayesian probabilistic framework. Experimental data from dynamic mechanical analysis and dynamic compression of PVC samples over a wide range of strain rates are analyzed. Both experimental uncertainty and natural variations in the material properties are simultaneously considered as independent and joint distributions; the posterior probability distributions are shown and compared with prior estimates of the material constitutive parameters. Additionally, particular statistical distributions are shown to be effective at capturing the rate and temperature dependence of internal phase transitions in DMA data.
Estimating stellar atmospheric parameters based on Lasso features
Liu, Chuan-Xing; Zhang, Pei-Ai; Lu, Yu
2014-04-01
With the rapid development of large scale sky surveys like the Sloan Digital Sky Survey (SDSS), GAIA and LAMOST (Guoshoujing telescope), stellar spectra can be obtained on an ever-increasing scale. Therefore, it is necessary to estimate stellar atmospheric parameters such as Teff, log g and [Fe/H] automatically to achieve the scientific goals and make full use of the potential value of these observations. Feature selection plays a key role in the automatic measurement of atmospheric parameters. We propose to use the least absolute shrinkage and selection operator (Lasso) algorithm to select features from stellar spectra. Feature selection can reduce redundancy in spectra, alleviate the influence of noise, improve calculation speed and enhance the robustness of the estimation system. Based on the extracted features, stellar atmospheric parameters are estimated by a support vector regression model. Three typical schemes are evaluated on spectral data from both the ELODIE library and SDSS. Experimental results show promising performance for the proposed scheme. In addition, results show that our method is stable when applied to different spectra.
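Lasso's usefulness for feature selection comes from the l1 penalty driving uninformative coefficients exactly to zero. A minimal sketch of Lasso via iterative soft-thresholding (ISTA); the solver choice and setting are illustrative, not the paper's pipeline:

```python
import numpy as np

def lasso_ista(X, y, lam=5.0, n_iter=1000):
    """Lasso solved by iterative soft-thresholding (ISTA).

    Minimises 0.5*||y - X @ theta||^2 + lam*||theta||_1. The
    soft-threshold step zeroes coefficients of uninformative features,
    which is what makes Lasso usable for feature selection.
    """
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = theta - X.T @ (X @ theta - y) / L
        theta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return theta
```

For spectra, the surviving non-zero coefficients mark the wavelengths passed on to the downstream regression model.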
Estimating Hydraulic Parameters When Poroelastic Effects Are Significant
Berg, S.J.; Hsieh, P.A.; Illman, W.A.
2011-01-01
For almost 80 years, deformation-induced head changes caused by poroelastic effects have been observed during pumping tests in multilayered aquifer-aquitard systems. As water in the aquifer is released from compressive storage during pumping, the aquifer is deformed both in the horizontal and vertical directions. This deformation in the pumped aquifer causes deformation in the adjacent layers, resulting in changes in pore pressure that may produce drawdown curves that differ significantly from those predicted by traditional groundwater theory. Although these deformation-induced head changes have been analyzed in several studies by poroelasticity theory, there are at present no practical guidelines for the interpretation of pumping test data influenced by these effects. To investigate the impact that poroelastic effects during pumping tests have on the estimation of hydraulic parameters, we generate synthetic data for three different aquifer-aquitard settings using a poroelasticity model, and then analyze the synthetic data using type curves and parameter estimation techniques, both of which are based on traditional groundwater theory and do not account for poroelastic effects. Results show that even when poroelastic effects result in significant deformation-induced head changes, it is possible to obtain reasonable estimates of hydraulic parameters using methods based on traditional groundwater theory, as long as pumping is sufficiently long so that deformation-induced effects have largely dissipated. © 2011 The Author(s). Journal compilation © 2011 National Ground Water Association.
Alessandro Barbiero
2014-01-01
In many statistical applications, it is often necessary to obtain an interval estimate for an unknown proportion or probability or, more generally, for a parameter whose natural space is the unit interval. The customary approximate two-sided confidence interval for such a parameter, based on some version of the central limit theorem, is known to be unsatisfactory when its true value is close to zero or one or when the sample size is small. A possible way to tackle this issue is to transform the data through a proper function that makes the approximation to the normal distribution less coarse. In this paper, we study the application of several of these transformations to the estimation of the reliability parameter for stress-strength models, with a special focus on the Poisson distribution. From this work, some practical hints emerge on which transformations can more efficiently improve standard confidence intervals, and in which scenarios.
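One classical instance of such a transformation is the logit: the normal approximation is applied on the logit scale, where the sampling distribution is less skewed near 0 or 1, and the interval is mapped back. A minimal sketch for a binomial proportion (illustrative of the idea, not one of the paper's stress-strength transformations):

```python
import math

def logit_ci(successes, n, z=1.96):
    """Approximate CI for a proportion via the logit transformation.

    A Wald interval is built on the logit scale with the delta-method
    standard error 1/sqrt(n*p*(1-p)), then mapped back through the
    inverse logit, so the result always stays inside (0, 1).
    """
    p = successes / n
    eta = math.log(p / (1.0 - p))
    se = math.sqrt(1.0 / (n * p * (1.0 - p)))

    def inv(t):
        return 1.0 / (1.0 + math.exp(-t))

    return inv(eta - z * se), inv(eta + z * se)
```

Unlike the plain Wald interval, the back-transformed endpoints can never spill outside the unit interval, which is the main practical gain.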
Akatsuki Kimura
2015-03-01
Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and, thus, obtain mechanistic insights into phenomena of interest.
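The gradient approach to SSE minimisation described above can be sketched with a finite-difference gradient, which needs no analytic derivative of the model; the function `fit_by_sse` and its settings are illustrative assumptions:

```python
import numpy as np

def fit_by_sse(model, x, y, theta0, lr=0.01, n_iter=2000):
    """Fit model parameters by gradient descent on the sum of squared
    errors, using a central finite-difference gradient.
    """
    theta = np.asarray(theta0, dtype=float)
    eps = 1e-6

    def sse(t):
        return float(np.sum((y - model(x, t)) ** 2))

    for _ in range(n_iter):
        grad = np.array([(sse(theta + eps * e) - sse(theta - eps * e)) / (2.0 * eps)
                         for e in np.eye(len(theta))])
        theta = theta - lr * grad
    return theta
```

Adding noise to the update (as in simulated annealing) or replacing the loop with a sampler gives the stochastic and sampling variants the article goes on to discuss.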
Wang, Guoyu; Houkes, Zweitze; Ji, Guangrong; Zheng, Bing; Li, Xin
2003-01-01
This paper presents a new algorithm for estimation-based range image segmentation. Aiming at surface-primitive extraction from range data, we focus on the reliability of the primitive representation in the process of region estimation. We introduce an optimal description of surface primitives, by wh
Mohanty, Sankhya; Hattel, Jesper Henri
2015-01-01
Selective laser melting is yet to become a standardized industrial manufacturing technique. The process continues to suffer from defects such as distortions, residual stresses, localized deformations and warpage, caused primarily by the localized heating, rapid cooling and high temperature gradients that occur during the process. While process monitoring and control of selective laser melting is an active area of research, establishing the reliability and robustness of the process still remains a challenge. In this paper, a methodology for generating reliable, optimized scanning paths and process parameters for selective laser melting of a standard sample is introduced. The processing of the sample is simulated by sequentially coupling a calibrated 3D pseudo-analytical thermal model with a 3D finite element mechanical model. The optimized processing parameters are subjected to a Monte Carlo...
Estimating Reliability of Disturbances in Satellite Time Series Data Based on Statistical Analysis
Zhou, Z.-G.; Tang, P.; Zhou, M.
2016-06-01
Normally, the status of land cover is inherently dynamic, changing continuously on a temporal scale. However, disturbances or abnormal changes of land cover — caused by events such as forest fire, flood, deforestation, and plant disease — occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances is important for land cover monitoring. Recently, many time-series-analysis methods have been developed for near real-time or online disturbance detection using satellite image time series. However, most present methods label detection results only as "Change/No change", while few focus on estimating the reliability (or confidence level) of the detected disturbances. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the time series data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on confidence intervals (CI) and confidence levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood that occurred near the border of Russia and China. Results demonstrated that the method can estimate the reliability of disturbances detected in satellite images with an estimation error of less than 5% and an overall accuracy of up to 90%.
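The confidence-level idea in step (3) can be illustrated generically (this is a sketch under a normal-error assumption, not the authors' BFAST-based implementation): given the spread of historical forecast residuals, a new observation's deviation from the forecast maps to a two-sided confidence level.

```python
import math

# Illustrative sketch: convert a forecast deviation into a two-sided
# confidence level, assuming residuals ~ N(0, residual_std**2).
def disturbance_confidence(deviation, residual_std):
    z = abs(deviation) / residual_std
    # P(|Z| <= z) for standard normal Z, via the error function.
    return math.erf(z / math.sqrt(2.0))

# Hypothetical numbers: deviation twice the residual spread (z = 2).
cl = disturbance_confidence(deviation=0.12, residual_std=0.06)
```

A deviation at z = 2 corresponds to roughly 95% confidence that the observation is a genuine disturbance rather than forecast noise.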
Parameter estimation in a spatial unit root autoregressive model
Baran, Sándor
2011-01-01
The spatial autoregressive model $X_{k,\ell}=\alpha X_{k-1,\ell}+\beta X_{k,\ell-1}+\gamma X_{k-1,\ell-1}+\epsilon_{k,\ell}$ is investigated in the unit root case, that is, when the parameters lie on the boundary of the domain of stability, which forms a tetrahedron with vertices $(1,1,-1)$, $(1,-1,1)$, $(-1,1,1)$ and $(-1,-1,-1)$. It is shown that the limiting distribution of the least squares estimator of the parameters is normal and the rate of convergence is $n$ when the parameters lie on the faces or edges of the tetrahedron, while at the vertices the rate is $n^{3/2}$.
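The least-squares estimator studied here can be sketched on a simulated field. The parameter values below are illustrative (chosen strictly inside the stability tetrahedron, so this shows the stable case, not the unit root boundary analysed in the paper):

```python
import random

# Simulate X[k][l] = a*X[k-1][l] + b*X[k][l-1] + g*X[k-1][l-1] + eps
# on a grid and recover (a, b, g) by least squares (illustrative).
random.seed(42)
ALPHA, BETA, GAMMA = 0.3, 0.2, -0.1   # assumed, inside stability region
K = L = 80

X = [[0.0] * (L + 1) for _ in range(K + 1)]   # zero boundary
for k in range(1, K + 1):
    for l in range(1, L + 1):
        X[k][l] = (ALPHA * X[k - 1][l] + BETA * X[k][l - 1]
                   + GAMMA * X[k - 1][l - 1] + random.gauss(0.0, 1.0))

# Accumulate normal equations M @ theta = v for the three lag regressors.
M = [[0.0] * 3 for _ in range(3)]
v = [0.0] * 3
for k in range(1, K + 1):
    for l in range(1, L + 1):
        regs = (X[k - 1][l], X[k][l - 1], X[k - 1][l - 1])
        for i in range(3):
            v[i] += regs[i] * X[k][l]
            for j in range(3):
                M[i][j] += regs[i] * regs[j]

def solve3(M, v):
    # Gauss-Jordan elimination with partial pivoting for a 3x3 system.
    A = [row[:] + [b] for row, b in zip(M, v)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(3):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[i][3] / A[i][i] for i in range(3)]

a_hat, b_hat, g_hat = solve3(M, v)
```

With 6400 interior grid points the estimates land close to the true values; the paper's contribution concerns how this estimator's rate of convergence changes when the parameters sit on the boundary.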
Genetic Algorithm-based Affine Parameter Estimation for Shape Recognition
Yuxing Mao
2014-06-01
Shape recognition is a classically difficult problem because of the affine transformation between two shapes. The current study proposes an affine parameter estimation method for shape recognition based on a genetic algorithm (GA). The contributions of this study are focused on the extraction of affine-invariant features, the individual encoding scheme, and the fitness function construction policy for a GA. First, the affine-invariant characteristics of the centroid distance ratios (CDRs) of any two opposite contour points to the barycentre are analysed. Using different intervals along the azimuth angle, different numbers of CDRs of two candidate shapes are computed as representations of the shapes. Then, the CDRs are selected based on predesigned affine parameters to construct the fitness function. After that, a GA is used to search for the affine parameters with optimal matching between candidate shapes, which serve as actual descriptions of the affine transformation between the shapes. Finally, the CDRs are resampled based on the estimated parameters to evaluate the similarity of the shapes for classification. The experimental results demonstrate the robust performance of the proposed method in shape recognition with translation, scaling, rotation and distortion.
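The GA search loop at the heart of such a method can be sketched generically. The snippet below is an illustrative toy, not the paper's CDR-based fitness: it recovers an assumed scale/rotation transform between two point sets, with the shapes, parameter ranges, and GA settings all invented for the example.

```python
import math
import random

# Toy GA sketch: estimate scale s and rotation t mapping SHAPE onto TARGET.
random.seed(7)
SHAPE = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.5), (0.5, -1.0)]
TRUE_S, TRUE_T = 1.5, 0.6            # "unknown" transform to recover

def transform(pts, s, t):
    return [(s * (x * math.cos(t) - y * math.sin(t)),
             s * (x * math.sin(t) + y * math.cos(t))) for x, y in pts]

TARGET = transform(SHAPE, TRUE_S, TRUE_T)

def fitness(ind):
    # Sum of squared point distances after transforming (lower is better).
    s, t = ind
    return sum((px - qx) ** 2 + (py - qy) ** 2
               for (px, py), (qx, qy) in zip(transform(SHAPE, s, t), TARGET))

pop = [(random.uniform(0.5, 3.0), random.uniform(0.0, math.pi))
       for _ in range(40)]
for gen in range(100):
    pop.sort(key=fitness)
    parents = pop[:10]               # truncation selection (elitist)
    children = []
    while len(children) < 30:
        (s1, t1), (s2, t2) = random.sample(parents, 2)
        # Blend crossover plus Gaussian mutation.
        s = 0.5 * (s1 + s2) + random.gauss(0.0, 0.05)
        t = 0.5 * (t1 + t2) + random.gauss(0.0, 0.05)
        children.append((s, t))
    pop = parents + children

best = min(pop, key=fitness)
```

The paper's method replaces this toy point-distance fitness with one built from centroid distance ratios, which is what makes the search affine-invariant.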
Accelerated gravitational wave parameter estimation with reduced order modeling.
Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel
2015-02-20
Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.
Maximum-likelihood fits to histograms for improved parameter estimation
Fowler, Joseph W
2013-01-01
Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
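The bias can be seen in closed form in the simplest case of a constant expected rate: minimizing Neyman's chi^2 (which weights each bin by 1/n_i) yields the harmonic mean of the counts, while the Poisson maximum likelihood yields the arithmetic mean. The counts below are made up for illustration.

```python
# Made-up counts from a detector with true rate ~10 events per bin.
counts = [8, 12, 9, 15, 6, 10]

# Poisson maximum likelihood for a constant rate mu:
# maximizing sum(n_i*log(mu) - mu) gives the arithmetic mean (unbiased).
mu_ml = sum(counts) / len(counts)

# Neyman chi^2 estimate: minimizing sum((n_i - mu)**2 / n_i)
# gives the harmonic mean, which is systematically biased low.
mu_chi2 = len(counts) / sum(1.0 / n for n in counts)
```

For these counts the chi^2 estimate falls noticeably below the maximum-likelihood value, which is the kind of downward bias the paper warns about for histogram fits with modest statistics.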
Estimation of multiexponential fluorescence decay parameters using compressive sensing.
Yang, Sejung; Lee, Joohyun; Lee, Youmin; Lee, Minyung; Lee, Byung-Uk
2015-09-01
Fluorescence lifetime imaging microscopy (FLIM) is a microscopic imaging technique to present an image of fluorophore lifetimes. It circumvents the problems of typical imaging methods such as intensity attenuation from depth since a lifetime is independent of the excitation intensity or fluorophore concentration. The lifetime is estimated from the time sequence of photon counts observed with signal-dependent noise, which has a Poisson distribution. Conventional methods usually estimate single or biexponential decay parameters. However, a lifetime component has a distribution or width, because the lifetime depends on macromolecular conformation or inhomogeneity. We present a novel algorithm based on a sparse representation which can estimate the distribution of lifetime. We verify the enhanced performance through simulations and experiments.
Learn-As-You-Go Acceleration of Cosmological Parameter Estimates
Aslanyan, Grigor; Price, Layne C
2015-01-01
Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of $\\Lambda$CDM posterior probabilities. The computation is significantly accelerated wit...
Chloramine demand estimation using surrogate chemical and microbiological parameters.
Moradi, Sina; Liu, Sanly; Chow, Christopher W K; van Leeuwen, John; Cook, David; Drikas, Mary; Amal, Rose
2017-07-01
A model is developed to enable estimation of chloramine demand in full-scale drinking water supplies based on chemical and microbiological factors that affect the chloramine decay rate, via a nonlinear regression analysis method. The model is based on the organic character (specific ultraviolet absorbance (SUVA)) of the water samples and a laboratory measure of the microbiological (Fm) decay of chloramine. The applicability of the model for estimation of chloramine residual (and hence chloramine demand) was tested on several waters from different water treatment plants in Australia through statistical comparison of the experimental and predicted data. Results showed that the model was able to simulate and estimate chloramine demand at various times in real drinking water systems. To capture the loss of chloramine over the wide variation of water quality used in this study, the model incorporates both fast and slow chloramine decay pathways. The significance of the estimated fast and slow decay rate constants as the kinetic parameters of the model for three water sources in Australia is discussed. It was found that, for the same water source, the kinetic parameters remain the same. This modelling approach has the potential to be used by water treatment operators as a decision support tool for managing chloramine disinfection. Copyright © 2017. Published by Elsevier B.V.
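A two-pathway decay structure of the kind described can be sketched as a pair of first-order terms. The rate constants, fast fraction, and dose below are placeholder assumptions for illustration, not the paper's fitted values:

```python
import math

# Illustrative two-pathway decay sketch: residual chloramine modelled
# as the sum of a fast and a slow first-order decay component.
def residual(t, c0, f_fast, k_fast, k_slow):
    return c0 * (f_fast * math.exp(-k_fast * t)
                 + (1.0 - f_fast) * math.exp(-k_slow * t))

def demand(t, c0, f_fast, k_fast, k_slow):
    # Chloramine demand = initial dose minus remaining residual.
    return c0 - residual(t, c0, f_fast, k_fast, k_slow)

c0 = 2.5                         # assumed initial dose, mg/L
r24 = residual(24.0, c0, 0.3, 0.5, 0.01)   # residual after 24 h
d24 = demand(24.0, c0, 0.3, 0.5, 0.01)     # demand after 24 h
```

In a fitted model the fast term dominates the first hours while the slow term governs long-term residual, which is why both pathways are needed to cover the full range of water qualities.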
Malone, Ailish
2012-02-01
The aims of this study were to validate a computerised method to detect muscle activity from surface electromyography (SEMG) signals in gait in patients with cervical spondylotic myelopathy (CSM), and to evaluate the test-retest reliability of the activation times designated by this method. SEMG signals were recorded from rectus femoris (RF), biceps femoris (BF), tibialis anterior (TA), and medial gastrocnemius (MG) during gait in 12 participants with CSM on two separate test days. Four computerised activity detection methods, based on the Teager-Kaiser Energy Operator (TKEO), were applied to a subset of signals and compared to visual interpretation of muscle activation. The most accurate method was then applied to all signals for evaluation of test-retest reliability. A detection method based on a combined slope and amplitude threshold showed the highest agreement (87.5%) with visual interpretation. With respect to reliability, the standard error of measurement (SEM) of the timing of RF, TA and MG between test days was 5.5% of stride duration or less, while the SEM of BF was 9.4%. The timing parameters of RF, TA and MG designated by this method were considered sufficiently reliable for use in clinical practice; however, the reliability of BF was questionable.
Basic MR sequence parameters systematically bias automated brain volume estimation
Haller, Sven [University of Geneva, Faculty of Medicine, Geneva (Switzerland); Affidea Centre de Diagnostique Radiologique de Carouge CDRC, Geneva (Switzerland); Falkovskiy, Pavel; Roche, Alexis; Marechal, Benedicte [Siemens Healthcare HC CEMEA SUI DI BM PI, Advanced Clinical Imaging Technology, Lausanne (Switzerland); University Hospital (CHUV), Department of Radiology, Lausanne (Switzerland); Meuli, Reto [University Hospital (CHUV), Department of Radiology, Lausanne (Switzerland); Thiran, Jean-Philippe [LTS5, Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland); Krueger, Gunnar [Siemens Medical Solutions USA, Inc., Boston, MA (United States); Lovblad, Karl-Olof [University of Geneva, Faculty of Medicine, Geneva (Switzerland); University Hospitals of Geneva, Geneva (Switzerland); Kober, Tobias [Siemens Healthcare HC CEMEA SUI DI BM PI, Advanced Clinical Imaging Technology, Lausanne (Switzerland); LTS5, Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland)
2016-11-15
Automated brain MRI morphometry, including hippocampal volumetry for Alzheimer disease, is increasingly recognized as a biomarker. Consequently, a rapidly increasing number of software tools have become available. We tested whether modifications of simple MR protocol parameters typically used in clinical routine systematically bias automated brain MRI segmentation results. The study was approved by the local ethical committee and included 20 consecutive patients (13 females, mean age 75.8 ± 13.8 years) undergoing clinical brain MRI at 1.5 T for workup of cognitive decline. We compared three 3D T1 magnetization prepared rapid gradient echo (MPRAGE) sequences with the following parameter settings: ADNI-2 1.2 mm iso-voxel, no image filtering; LOCAL- 1.0 mm iso-voxel, no image filtering; LOCAL+ 1.0 mm iso-voxel with image edge enhancement. Brain segmentation was performed by two different and established analysis tools, FreeSurfer and MorphoBox, using standard parameters. Spatial resolution (1.0 versus 1.2 mm iso-voxel) and modification in contrast resulted in relative estimated volume differences of up to 4.28 % (p < 0.001) in cortical gray matter and 4.16 % (p < 0.01) in the hippocampus. Image data filtering resulted in estimated volume differences of up to 5.48 % (p < 0.05) in cortical gray matter. A simple change of MR parameters, notably spatial resolution, contrast, and filtering, may systematically bias results of automated brain MRI morphometry by up to 4-5 %. This is in the same range as early disease-related brain volume alterations, for example, in Alzheimer disease. Automated brain segmentation software packages should therefore require strict MR parameter selection or include compensatory algorithms to avoid MR parameter-related bias of brain morphometry results.
Optimizing reliability, maintainability and testability parameters of equipment based on GSPN
Yongcheng Xu
2015-01-01
Reliability, maintainability and testability (RMT) are important properties of equipment, since they have an important influence on operational availability and life cycle costs (LCC). Therefore, weighting and optimizing the three properties are of great significance. A new approach for optimization of RMT parameters is proposed. First of all, the model for the equipment operation process is established based on the generalized stochastic Petri nets (GSPN) theory. Then, by solving the GSPN model, the quantitative relationship between operational availability and RMT parameters is obtained. Afterwards, taking history data of similar equipment and the operation process into consideration, a cost model of design, manufacture and maintenance is developed. Based on operational availability, the cost model and parameter ranges, an optimization model of RMT parameters is built. Finally, the effectiveness and practicability of this approach are validated through an example.
Tiberi, Lara; Costa, Giovanni
2017-04-01
The possibility of directly associating damage with ground motion parameters is always a great challenge, in particular for civil protection agencies. Indeed, a ground motion parameter, estimated in near real time, that can express the damage occurring after an earthquake is fundamental for arranging first assistance after an event. The aim of this work is to contribute to the estimation of the ground motion parameter that best describes the observed intensity immediately after an event. This can be done by calculating, for each ground motion parameter estimated in near real time, a regression law which correlates that parameter to the observed macroseismic intensity. The estimation is based on high-quality accelerometric data collected in the near field and filtered at different frequency steps. The regression laws are calculated using two different techniques: the nonlinear least-squares (NLLS) Marquardt-Levenberg algorithm and the orthogonal distance regression (ODR) methodology. The limits of the first methodology are the need for initial values of the parameters a and b (set to 1.0 in this study) and the constraint that the independent variable must be known with greater accuracy than the dependent variable. The second algorithm, by contrast, is based on errors measured perpendicular to the fitted line rather than vertically. Vertical errors account only for the dependent variable (the 'y' direction), whereas perpendicular errors account for both the dependent and independent variables. This also makes it possible to invert the relation directly, so the a and b values can be used to express the ground motion parameters as a function of I. For each law, the standard deviation and R2 value are estimated in order to test the quality and reliability of the relation. The Amatrice earthquake of 24 August 2016 is used as a case study to test the goodness of the calculated regression laws.
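The contrast between vertical-error and perpendicular-error fitting can be shown on a toy linear law (the data below are made up; for a line, the orthogonal fit has a closed form, so no iterative NLLS is needed in this sketch):

```python
import math

# Toy data for a linear law I = a + b*x, with scatter in both variables.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.2, 1.9, 3.2, 3.8, 5.1]

n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
syy = sum((y - ybar) ** 2 for y in ys)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))

# Ordinary least squares: minimizes vertical (y-direction) errors.
b_ols = sxy / sxx
a_ols = ybar - b_ols * xbar

# Orthogonal (total least-squares) fit: minimizes perpendicular
# distances, treating errors in x and y symmetrically.
b_odr = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
a_odr = ybar - b_odr * xbar
```

Because the orthogonal fit treats both variables symmetrically, the fitted relation can be inverted directly, x = (I - a)/b, without the attenuation that affects the vertical-error slope.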
The reliable solution and computation time of variable parameters logistic model
Wang, Pengfei; Pan, Xinnong
2017-04-01
The study investigates the reliable computation time (RCT, termed Tc) by applying a double-precision computation of a variable parameters logistic map (VPLM). Firstly, using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent non-stationary parameters VPLM and then calculate the mean Tc. The results indicate that, for each different initial value, the Tcs of the VPLM are generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probable distribution functions of Tc are also obtained, which can help us to identify the robustness of applying nonlinear time series theory to forecasting using the VPLM output. In addition, the Tc of the fixed-parameter experiments of the logistic map is obtained, and the results suggest that this Tc matches the theoretically predicted value.
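The finite-precision divergence underlying a reliable computation time can be demonstrated with the fixed-parameter logistic map (this sketch uses r = 4 and an invented tolerance; the study's VPLM varies the parameter in time):

```python
# Illustrative sketch: first iteration step at which two double-precision
# logistic-map trajectories, started a machine-epsilon apart, disagree.
def logistic_divergence_step(x0, perturbation=1e-15, r=4.0, tol=0.1):
    x, y = x0, x0 + perturbation
    for n in range(1, 1000):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        if abs(x - y) > tol:
            return n        # trajectories no longer agree to within tol
    return None

n_c = logistic_divergence_step(0.2)
```

With a Lyapunov exponent of ln 2 at r = 4, an initial separation of 1e-15 is expected to reach order 0.1 after roughly log2(1e14) ≈ 47 steps, so double-precision forecasts of this map are only trustworthy for a few dozen iterations.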
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
How Many Sleep Diary Entries Are Needed to Reliably Estimate Adolescent Sleep?
Short, Michelle A; Arora, Teresa; Gradisar, Michael; Taheri, Shahrad; Carskadon, Mary A
2017-03-01
To investigate (1) how many nights of sleep diary entries are required for reliable estimates of five sleep-related outcomes (bedtime, wake time, sleep onset latency [SOL], sleep duration, and wake after sleep onset [WASO]) and (2) the test-retest reliability of sleep diary estimates of school night sleep across 12 weeks. Data were drawn from four adolescent samples (Australia [n = 385], Qatar [n = 245], United Kingdom [n = 770], and United States [n = 366]), who provided 1766 eligible sleep diary weeks for reliability analyses. We performed reliability analyses for each cohort using complete data (7 days), one to five school nights, and one to two weekend nights. We also performed test-retest reliability analyses on 12-week sleep diary data available from a subgroup of 55 US adolescents. Intraclass correlation coefficients for bedtime, SOL, and sleep duration indicated good-to-excellent reliability from five weekday nights of sleep diary entries across all adolescent cohorts. Four school nights were sufficient for wake times in the Australian and UK samples, but not the US or Qatari samples. Only Australian adolescents showed good reliability for two weekend nights of bedtime reports; estimates of SOL were adequate for UK adolescents based on two weekend nights. WASO was not reliably estimated using 1 week of sleep diaries. We observed excellent test-retest reliability across 12 weeks of sleep diary data in a subsample of US adolescents. We recommend at least five weekday nights of sleep diary entries to be made when studying adolescent bedtimes, SOL, and sleep duration. Adolescent sleep patterns were stable across 12 consecutive school weeks.
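The intraclass correlation coefficients reported here can be sketched in their simplest one-way form, ICC(1) (the toy data below are invented, not the study's, and the study may have used a different ICC variant):

```python
# One-way ICC(1) sketch: rows are adolescents, columns are repeated
# sleep-duration reports in hours (toy data for illustration).
data = [[7.0, 7.5],
        [8.0, 8.5],
        [6.0, 6.5]]

n, k = len(data), len(data[0])
grand = sum(sum(row) for row in data) / (n * k)
means = [sum(row) / k for row in data]

# Between-subject and within-subject mean squares from one-way ANOVA.
ms_between = k * sum((m - grand) ** 2 for m in means) / (n - 1)
ms_within = sum((x - m) ** 2
                for row, m in zip(data, means) for x in row) / (n * (k - 1))

# ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW)
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Values near 1 indicate that between-person differences dominate night-to-night noise, which is what "good-to-excellent reliability" means for the diary estimates.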
Improving Distribution Resiliency with Microgrids and State and Parameter Estimation
Tuffner, Francis K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Williams, Tess L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Schneider, Kevin P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elizondo, Marcelo A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sun, Yannan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Chen-Ching [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Yin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gourisetti, Sri Nikhil Gup [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-09-30
Modern society relies on low-cost reliable electrical power, both to maintain industry and to provide basic social services to the populace. When major disturbances occur, such as Hurricane Katrina or Hurricane Sandy, the nation's electrical infrastructure can experience significant outages. To help prevent the spread of these outages, as well as to facilitate faster restoration after an outage, various approaches to improving the resiliency of the power system are needed. Two such approaches are breaking the system into smaller microgrid sections, and gaining improved insight into operations to detect failures or mis-operations before they become critical. By breaking the system into smaller microgrid islands, power can be maintained in smaller areas where distributed generation and energy storage resources are still available but bulk power generation is no longer connected. Additionally, microgrid systems can maintain service to local pockets of customers when there has been extensive damage to the local distribution system. However, microgrids are grid-connected the majority of the time, and implementing and operating a grid-connected microgrid is much different from islanded operation. This report discusses work conducted by the Pacific Northwest National Laboratory that developed improvements for simulation tools to capture the characteristics of microgrids and how they can be used to develop new operational strategies. These operational strategies reduce the cost of microgrid operation and increase the reliability and resilience of the nation's electricity infrastructure. In addition to the ability to break the system into microgrids, improved observability into the state of the distribution grid can make the power system more resilient. State estimation on the transmission system already provides great insight into grid operations and detecting abnormal conditions by leveraging existing measurements. These transmission-level approaches are expanded to using
A Data-Driven Reliability Estimation Approach for Phased-Mission Systems
Hua-Feng He
2014-01-01
We attempt to address the issues associated with reliability estimation for phased-mission systems (PMS) and present a novel data-driven approach to reliability estimation for PMS, using the condition monitoring information and degradation data of such systems under dynamic operating scenarios. In this sense, this paper differs from existing methods, which consider only the static scenario without using real-time information and aim to estimate the reliability for a population rather than for an individual. In the presented approach, to establish a linkage between the historical data and real-time information of an individual PMS, we adopt a stochastic filtering model to model the phase duration and obtain the updated estimation of the mission time by Bayesian law at each phase. Meanwhile, the lifetime of the PMS is estimated from degradation data, which are modeled by an adaptive Brownian motion. As such, the mission reliability can be obtained in real time through the estimated distribution of the mission time in conjunction with the estimated lifetime distribution. We demonstrate the usefulness of the developed approach via a numerical example.
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems
Banga Julio R
2006-11-01
Background: We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results: We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems, and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion: Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously
Estimating unknown parameters in haemophilia using expert judgement elicitation.
Fischer, K; Lewandowski, D; Janssen, M P
2013-09-01
The increasing attention to healthcare costs and treatment efficiency has led to an increasing demand for quantitative data concerning patient and treatment characteristics in haemophilia. However, most of these data are difficult to obtain. The aim of this study was to use expert judgement elicitation (EJE) to estimate currently unavailable key parameters for treatment models in severe haemophilia A. Using a formal expert elicitation procedure, 19 international experts provided information on (i) natural bleeding frequency according to age and onset of bleeding, (ii) treatment of bleeds, (iii) time needed to control bleeding after starting secondary prophylaxis, (iv) dose requirements for secondary prophylaxis according to onset of bleeding, and (v) life expectancy. For each parameter, experts provided their quantitative estimates (median, P10, P90), which were combined using a graphical method. In addition, information was obtained concerning key decision parameters of haemophilia treatment. There was most agreement between experts regarding bleeding frequencies for patients treated on demand with an average onset of joint bleeding (1.7 years): median 12 joint bleeds per year (95% confidence interval 0.9-36) for patients ≤ 18, and 11 (0.8-61) for adult patients. Less agreement was observed concerning the estimated effective dose for secondary prophylaxis in adults: median 2000 IU every other day. The majority (63%) of experts expected that a single minor joint bleed could cause irreversible damage, and would accept up to three minor joint bleeds or one trauma-related joint bleed annually on prophylaxis. Expert judgement elicitation allowed structured capturing of quantitative expert estimates. It generated novel data to be used in computer modelling, clinical care, and trial design. © 2013 John Wiley & Sons Ltd.
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
Estimating Between-Person and Within-Person Subscore Reliability with Profile Analysis.
Bulut, Okan; Davison, Mark L; Rodriguez, Michael C
2017-01-01
Subscores are of increasing interest in educational and psychological testing due to their diagnostic function for evaluating examinees' strengths and weaknesses within particular domains of knowledge. Previous studies about the utility of subscores have mostly focused on the overall reliability of individual subscores and ignored the fact that subscores should be distinct and have added value over the total score. This study introduces a profile reliability approach that partitions the overall subscore reliability into within-person and between-person subscore reliability. The estimation of between-person reliability and within-person reliability coefficients is demonstrated using subscores from number-correct scoring, unidimensional and multidimensional item response theory scoring, and augmented scoring approaches via a simulation study and a real data study. The effects of various testing conditions, such as subtest length, correlations among subscores, and the number of subtests, are examined. Results indicate that there is a substantial trade-off between within-person and between-person reliability of subscores. Profile reliability coefficients can be useful in determining the extent to which subscores provide distinct and reliable information under various testing conditions.
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories.
Parameter Estimation of Induction Motors Using Water Cycle Optimization
M. Yazdani-Asrami
2013-12-01
This paper presents the application of the recently introduced water cycle algorithm (WCA) to optimize the parameters of the exact and approximate equivalent-circuit models of an induction motor from nameplate data. Considering that induction motors are widely used in industrial applications, these parameters have a significant effect on the accuracy and efficiency of the motors and, ultimately, on overall system performance. Therefore, it is essential to develop algorithms for induction motor parameter estimation. The fundamental concepts and ideas underlying the proposed method are inspired by nature, based on observation of the water cycle process and of how rivers and streams flow to the sea in the real world. The objective function is defined as the minimization of the real values of the relative error between the measured and estimated torques of the machine at different slip points. The proposed WCA approach has been applied to two different sample motors. Results of the proposed method have been compared with those of other metaheuristic methods previously applied to the problem, which shows the feasibility and fast convergence of the proposed approach.
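The objective function described above (relative torque error summed over slip points) can be sketched as follows. The motor parameters, slip points, and supply values below are hypothetical illustrations, and SciPy's `differential_evolution` stands in for the water cycle algorithm, which has no standard library implementation:

```python
import numpy as np
from scipy.optimize import differential_evolution

def torque(params, s, V=400.0, f=50.0):
    # Torque from the approximate induction-motor equivalent circuit:
    # R1 stator resistance, R2 referred rotor resistance, X total
    # leakage reactance; all values here are illustrative.
    R1, R2, X = params
    ws = 2.0 * np.pi * f
    return 3.0 * V**2 * (R2 / s) / (ws * ((R1 + R2 / s) ** 2 + X**2))

slips = np.array([0.02, 0.05, 0.1, 0.5, 1.0])
true = np.array([0.4, 0.3, 2.5])        # hypothetical "measured" motor
T_meas = torque(true, slips)

def objective(params):
    # Sum of absolute relative errors between measured and model torque.
    return np.sum(np.abs((T_meas - torque(params, slips)) / T_meas))

# differential_evolution stands in here for the water cycle algorithm.
res = differential_evolution(objective,
                             bounds=[(0.01, 5), (0.01, 5), (0.1, 10)],
                             seed=0)
```

Any population-based metaheuristic (WCA, PSO, ACO) can be dropped in place of the optimizer; only the objective above is specific to the problem.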
Effect of noncircularity of experimental beam on CMB parameter estimation
Das, Santanu; Paulson, Sonu Tabitha
2015-01-01
Measurement of Cosmic Microwave Background (CMB) anisotropies has been playing a lead role in precision cosmology by providing some of the tightest constraints on cosmological models and parameters. However, precision can only be meaningful when all major systematic effects are taken into account. Non-circular beams in CMB experiments can cause large systematic deviations in the angular power spectrum, not only by modifying the measurement at a given multipole, but also by introducing coupling between different multipoles through a deterministic bias matrix. Here we add a mechanism for emulating the effect of a full bias matrix to the Planck likelihood code through the parameter estimation code SCoPE. We show that if the angular power spectrum was measured with a non-circular beam, the assumption of a circular Gaussian beam, or considering only the diagonal part of the bias matrix, can lead to large errors in parameter estimation. We demonstrate that, at least for elliptical Gaussian beams, use of scalar beam window fun...
Matched-filtering and parameter estimation of ringdown waveforms
Berti, Emanuele; Cardoso, Vitor; Cavaglia, Marco
2007-01-01
Using recent results from numerical relativity simulations of non-spinning binary black hole mergers we revisit the problem of detecting ringdown waveforms and of estimating the source parameters, considering both LISA and Earth-based interferometers. We find that Advanced LIGO and EGO could detect intermediate-mass black holes of mass up to about 1000 solar masses out to a luminosity distance of a few Gpc. For typical multipolar energy distributions, we show that the single-mode ringdown templates presently used for ringdown searches in the LIGO data stream can produce a significant event loss (> 10% for all detectors in a large interval of black hole masses) and very large parameter estimation errors on the black hole's mass and spin. We estimate that more than 10^6 templates would be needed for a single-stage multi-mode search. Therefore, we recommend a "two stage" search to save on computational costs: single-mode templates can be used for detection, but multi-mode templates or Prony methods should be use...
Temporal Parameters Estimation for Wheelchair Propulsion Using Wearable Sensors
Manoela Ojeda
2014-01-01
Due to lower limb paralysis, individuals with spinal cord injury (SCI) rely on their upper limbs for mobility. The prevalence of upper extremity pain and injury is high among this population. We evaluated the performance of three tri-axial accelerometers placed on the upper arm, the wrist, and under the wheelchair, to estimate temporal parameters of wheelchair propulsion. Twenty-six participants with SCI were asked to push their wheelchair equipped with a SMARTWheel. The estimated stroke number was compared with the criterion from video observations and the estimated push frequency was compared with the criterion from the SMARTWheel. Mean absolute errors (MAE) and mean absolute percentages of error (MAPE) were calculated. Intraclass correlation coefficients (ICC) and Bland-Altman plots were used to assess the agreement. Results showed reasonable accuracy, especially for the accelerometer placed on the upper arm, where the MAPE was 8.0% for stroke number and 12.9% for push frequency. The ICC was 0.994 for stroke number and 0.916 for push frequency. The wrist and seat accelerometers showed lower accuracy, with MAPEs for the stroke number of 10.8% and 13.4% and ICCs of 0.990 and 0.984, respectively. Results suggested that accelerometers could be an option for monitoring temporal parameters of wheelchair propulsion.
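The MAE and MAPE agreement metrics reported above are straightforward to compute; the stroke counts below are invented purely for illustration:

```python
import numpy as np

def mae(est, ref):
    # Mean absolute error between estimates and the criterion.
    return np.mean(np.abs(np.asarray(est, float) - np.asarray(ref, float)))

def mape(est, ref):
    # Mean absolute percentage error, relative to the criterion.
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    return 100.0 * np.mean(np.abs(est - ref) / ref)

# Hypothetical stroke counts: video observation (criterion) vs. an
# upper-arm accelerometer, one value per propulsion trial.
ref = [100, 120, 90, 110]
est = [96, 125, 85, 112]
print(round(mape(est, ref), 1))  # → 3.9
```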
PARAMETER ESTIMATION OF VALVE STICTION USING ANT COLONY OPTIMIZATION
S. Kalaivani
2012-07-01
In this paper, a procedure for quantifying valve stiction in control loops based on ant colony optimization is proposed. Pneumatic control valves are widely used in the process industry. A control valve contains non-linearities such as stiction, backlash, and deadband that in turn cause oscillations in the process output. Stiction is a long-standing problem and the most severe one in control valves. The measurement data from an oscillating control loop can thus be used as a diagnostic signal to provide an estimate of the stiction magnitude. Quantification of control valve stiction is still a challenging issue. Prior to stiction detection and quantification, it is necessary to choose a suitable model structure to describe control-valve stiction; to capture the stiction phenomenon, the Stenman model is used. Ant Colony Optimization (ACO), an intelligent swarm algorithm inspired by the natural trail-following behaviour of ants, has proven effective in various fields. The parameters of the Stenman model are estimated with ant colony optimization from the input-output data by minimizing the error between the actual stiction model output and the simulated stiction model output. Using ant colony optimization, a Stenman model with known nonlinear structure and unknown parameters can thus be estimated.
Influence Factors on the Value of Reliability Estimators in Marketing Research
2011-01-01
This paper is a literature review, with a conclusion that leaves open many doors for future research. The first part reviews a series of qualitative and quantitative research characteristics. The second part explains briefly the reliability and validity of instruments used in qualitative and quantitative marketing research. The third part of the paper reviews a series of articles on the estimators of reliability, on their power, and on their strengths and weaknesses. The conclusions of the...
Reliability estimation for multiunit nuclear and fossil-fired industrial energy systems
Sullivan, W. G.; Wilson, J. V.; Klepper, O. H.
1977-06-29
As petroleum-based fuels grow increasingly scarce and costly, nuclear energy may become an important alternative source of industrial energy. Initial applications would most likely include a mix of fossil-fired and nuclear sources of process energy. A means for determining the overall reliability of these mixed systems is a fundamental aspect of demonstrating their feasibility to potential industrial users. Reliability data from nuclear and fossil-fired plants are presented, and several methods of applying these data for calculating the reliability of reasonably complex industrial energy supply systems are given. Reliability estimates made under a number of simplifying assumptions indicate that multiple nuclear units or a combination of nuclear and fossil-fired plants could provide adequate reliability to meet industrial requirements for continuity of service.
Reliability-Based Weighting of Visual and Vestibular Cues in Displacement Estimation.
ter Horst, Arjan C; Koppen, Mathieu; Selen, Luc P J; Medendorp, W Pieter
2015-01-01
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
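Reliability-weighted cue combination of the kind tested here is usually modelled as inverse-variance weighting: the statistically optimal fused estimate weights each cue by 1/variance, and the fused variance is lower than that of either cue alone. A minimal sketch with hypothetical visual and vestibular displacement estimates:

```python
def fuse(est_a, var_a, est_b, var_b):
    # Minimum-variance (statistically optimal) combination of two cues:
    # each estimate is weighted by its reliability, i.e. inverse variance.
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

# Hypothetical displacement estimates (m): a reliable visual cue and a
# noisier vestibular cue. The fused estimate lands closer to the visual one.
d, var = fuse(2.0, 0.04, 2.6, 0.16)
```

Lowering visual coherence in the experiment corresponds to raising the visual variance here, which shifts the fused estimate toward the vestibular cue.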
Parameter estimation for stochastic hybrid model applied to urban traffic flow estimation
2015-01-01
This study proposes a novel data-based approach for estimating the parameters of a stochastic hybrid model describing the traffic flow in an urban traffic network with signalized intersections. The model represents the evolution of the traffic flow rate, measuring the number of vehicles passing a given location per time unit. This traffic flow rate is described using a mode-dependent first-order autoregressive (AR) stochastic process. The parameters of the AR process take different values dep...
Implementation and Analysis of Probabilistic Methods for Gate-Level Circuit Reliability Estimation
WANG Zhen; JIANG Jianhui; YANG Guang
2007-01-01
The development of VLSI technology has dramatically improved the performance of integrated circuits. However, it also brings new reliability challenges: integrated circuits have become more susceptible to soft errors. It is therefore imperative to study the reliability of circuits under soft errors. This paper implements three probabilistic methods (two-pass, error propagation probability, and probabilistic transfer matrix) for estimating gate-level circuit reliability on a PC. The functions and performance of these methods are compared in experiments using ISCAS85 and 74-series circuits.
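Of the three methods, the probabilistic transfer matrix (PTM) approach is the easiest to sketch. The example below, which assumes uniform and independent primary inputs and a hypothetical per-gate error probability, propagates the output distribution of a small three-NAND circuit; the propagation is exact here because the circuit is a tree:

```python
import numpy as np

def gate_ptm(truth_table, p):
    # Probabilistic transfer matrix of a one-output gate: row i is the
    # output distribution for input combination i; the gate produces the
    # correct value with probability 1 - p.
    m = np.zeros((len(truth_table), 2))
    for i, out in enumerate(truth_table):
        m[i, out] = 1.0 - p
        m[i, 1 - out] = p
    return m

NAND = [1, 1, 1, 0]     # outputs for inputs 00, 01, 10, 11
p = 0.05                # assumed per-gate soft-error probability

# Two NAND gates with uniform, independent primary inputs feed a third
# NAND. mean(axis=0) applies the uniform input distribution; the Kronecker
# product forms the joint distribution of the two (independent) outputs.
o1 = gate_ptm(NAND, p).mean(axis=0)
joint = np.kron(o1, o1)                 # order: 00, 01, 10, 11
out = joint @ gate_ptm(NAND, p)         # final output distribution
```

With p = 0 the circuit outputs 1 with probability 0.4375; the per-gate error probability shifts that figure, and comparing the two distributions quantifies the circuit's soft-error sensitivity.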
An Allocation Scheme for Estimating the Reliability of a Parallel-Series System
Zohra Benkamra
2012-01-01
We give a hybrid two-stage design which can be useful to estimate the reliability of a parallel-series system and, by duality, of a series-parallel system. When the components' reliabilities are unknown, one can estimate them by sample means of Bernoulli observations. Let T be the total number of observations allowed for the system. When T is fixed, we show that the variance of the system reliability estimate can be lowered by allocation of the sample size T at the component level. This leads to a discrete optimization problem which can be solved sequentially, assuming T is large enough. First-order asymptotic optimality is proved systematically and validated by Monte Carlo simulation.
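The plug-in estimator whose variance the allocation scheme minimizes can be sketched as follows; the block reliabilities, the budget T, and the even allocation are illustrative stand-ins, not the paper's optimal allocation:

```python
import numpy as np

rng = np.random.default_rng(0)

def system_reliability(blocks):
    # Parallel-series system: a series chain of parallel blocks, where
    # blocks[k] lists the component reliabilities of block k.
    return np.prod([1.0 - np.prod(1.0 - np.asarray(b)) for b in blocks])

def plug_in_estimate(blocks, alloc):
    # Replace each component reliability by the sample mean of its
    # allocated Bernoulli observations, then plug into the system formula.
    est = [[rng.binomial(n, p) / n for p, n in zip(b, ns)]
           for b, ns in zip(blocks, alloc)]
    return system_reliability(est)

blocks = [[0.9, 0.8], [0.7, 0.95]]    # two parallel blocks in series
even = [[100, 100], [100, 100]]       # T = 400 observations, spread evenly
reps = [plug_in_estimate(blocks, even) for _ in range(2000)]
```

Repeating the simulation with different allocations of the same T and comparing the empirical variance of `reps` illustrates the optimization problem the paper solves.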
Bonatto, Charles; Lima, Eliade F
2011-01-01
We present an approach that improves the search for reliable astrophysical parameters (e.g. age, mass, and distance) of differentially-reddened, pre-main-sequence-rich star clusters. It involves simulating conditions related to the early cluster phases, in particular the differential and foreground reddenings and the internal age spread. Given the loose constraints imposed by these factors, the derivation of parameters based only on photometry may be uncertain, especially for poorly-populated clusters. We consider a wide range of cluster (i) mass and (ii) age, and different values of (iii) distance modulus, (iv) differential and (v) foreground reddenings. Photometric errors and their relation with magnitude are also taken into account. We also investigate how the presence of unresolved binaries affects the derived parameters. For each set of (i)-(v) we build the corresponding model Hess diagram, and compute the root mean squared residual with respect to the observed...
Estimating parameters in stochastic systems: A variational Bayesian approach
Vrettas, Michail D.; Cornford, Dan; Opper, Manfred
2011-11-01
This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variation of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein-Uhlenbeck process, the exact likelihood of which can be computed analytically, the univariate and highly non-linear stochastic double well, and the multivariate chaotic stochastic Lorenz ’63 (3D) model. As a special case the algorithm is also applied to the 40-dimensional stochastic Lorenz ’96 system. In our investigation we compare this new approach with a variety of other well known methods, such as the hybrid Monte Carlo, the dual unscented Kalman filter and the full weak-constraint 4D-Var algorithm, and we analyse empirically their asymptotic behaviour as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) parts of the model evolution equations using our new methods.
Fenglei Qi
2016-01-01
Enzymatic hydrolysis is an integral step in the conversion of lignocellulosic biomass to ethanol. The conversion of cellulose to fermentable sugars in the presence of inhibitors is a complex kinetic problem. In this study, we describe a novel approach to estimating the kinetic parameters underlying this process. This study employs experimental data measuring substrate and enzyme loadings, sugar and acid inhibitions for the production of glucose. Multiple objectives to minimize the difference between model predictions and experimental observations are developed and optimized by adopting multi-objective particle swarm optimization method. Model reliability is assessed by exploring likelihood profile in each parameter space. Compared to previous studies, this approach improved the prediction of sugar yields by reducing the mean squared errors by 34% for glucose and 2.7% for cellobiose, suggesting improved agreement between model predictions and the experimental data. Furthermore, kinetic parameters such as K2IG2, K1IG, K2IG, K1IA, and K3IA are identified as contributors to the model non-identifiability and wide parameter confidence intervals. Model reliability analysis indicates possible ways to reduce model non-identifiability and tighten parameter confidence intervals. These results could help improve the design of lignocellulosic biorefineries by providing higher fidelity predictions of fermentable sugars under inhibitory conditions.
Error estimation and adaptivity for transport problems with uncertain parameters
Sahni, Onkar; Li, Jason; Oberai, Assad
2016-11-01
Stochastic partial differential equations (PDEs) with uncertain parameters and source terms arise in many transport problems. In this study, we develop and apply an adaptive approach based on the variational multiscale (VMS) formulation for discretizing stochastic PDEs. In this approach we employ finite elements in the physical domain and a generalized polynomial chaos spectral basis in the stochastic domain. We demonstrate our approach on non-trivial transport problems where the uncertain parameters are such that the advective and diffusive regimes are spanned in the stochastic domain. We show that the proposed method is effective as a local error estimator in quantifying the element-wise error and in driving adaptivity in the physical and stochastic domains. We will also indicate how this approach may be extended to the Navier-Stokes equations. NSF Award 1350454 (CAREER).
Pedotransfer functions estimating soil hydraulic properties using different soil parameters
Børgesen, Christen Duus; Iversen, Bo Vangsø; Jacobsen, Ole Hørbye;
2008-01-01
Estimates of soil hydraulic properties using pedotransfer functions (PTF) are useful in many studies such as hydrochemical modelling and soil mapping. The objective of this study was to calibrate and test parametric PTFs that predict soil water retention and unsaturated hydraulic conductivity...... parameters. The PTFs are based on neural networks and the Bootstrap method using different sets of predictors and predict the van Genuchten/Mualem parameters. A Danish soil data set (152 horizons) dominated by sandy and sandy loamy soils was used in the development of PTFs to predict the Mualem hydraulic...... of the hydraulic properties of the studied soils. We found that introducing measured water content as a predictor generally gave lower errors for water retention predictions and higher errors for conductivity predictions. The best of the developed PTFs for predicting hydraulic conductivity was tested against PTFs...
Acoustical estimation of parameters of porous road pavement
Valyaev, V. Yu.; Shanin, A. V.
2012-11-01
In the simplest case, porous road pavement of a known thickness is described by such parameters as porosity, tortuosity, and flow resistance. The problem of estimating these parameters is investigated in this paper using an acoustic signal reflected by the pavement. It is shown that the problem can be solved by an experiment conducted in the time domain (i.e., the pulse response of the medium is recorded). The incident sound wave is directed at a grazing angle to the interface between the pavement and the air to improve penetration into the porous medium. The procedure for computing the pulse response using the Morse-Ingard model is described in detail.
Estimation of the reconstruction parameters for Atom Probe Tomography
Gault, Baptiste; Stephenson, Leigh T; Moody, Michael P; Muddle, Barry C; Ringer, Simon P
2015-01-01
The application of wide field-of-view detection systems to atom probe experiments emphasizes the importance of careful parameter selection in the tomographic reconstruction of the analysed volume, as the sensitivity to errors rises steeply with increases in analysis dimensions. In this paper, a self-consistent method is presented for the systematic determination of the main reconstruction parameters. In the proposed approach, the compression factor and the field factor are determined using geometrical projections from the desorption images. A 3D Fourier transform is then applied to a series of reconstructions and, comparing to the known material crystallography, the efficiency of the detector is estimated. The final results demonstrate a significant improvement in the accuracy of the reconstructed volumes.
Synchronization and parameter estimations of an uncertain Rikitake system
Aguilar-Ibanez, Carlos, E-mail: caguilar@cic.ipn.m [CIC-IPN, Av. Juan de Dios Batiz s/n, Esq. Manuel Othon de M., Unidad Profesional Adolfo Lopez Mateos, Col. Nueva Industrial Vallejo, Del. Gustavo A. Madero, C.P. 07738, Mexico D.F. (Mexico); Martinez-Guerra, Rafael, E-mail: rguerra@ctrl.cinvestav.m [CINVESTAV-IPN, Departamento de Control Automatico, Av. Instituto Politecnico Nacional 2508, Col. San Pedro Zacatenco, Mexico, D. F., 07360 (Mexico); Aguilar-Lopez, Ricardo [CINVESTAV-IPN, Departamento de Biotecnologia y Bioingenieria (Mexico); Mata-Machuca, Juan L. [CINVESTAV-IPN, Departamento de Control Automatico, Av. Instituto Politecnico Nacional 2508, Col. San Pedro Zacatenco, Mexico, D. F., 07360 (Mexico)
2010-08-02
In this Letter we address the synchronization and parameter estimation of the uncertain Rikitake system, under the assumption that the state is only partially known. To this end we use the master/slave scheme in conjunction with the adaptive control technique. Our control approach consists of proposing a slave system which has to follow asymptotically the uncertain Rikitake system, referred to as the master system. The gains of the slave system are adjusted continually according to a convenient adaptation control law, until the measurable output errors converge to zero. The convergence analysis is carried out using Barbalat's Lemma. In this context, uncertainty means that although the system structure is known, only partial knowledge of the corresponding parameter values is available.
Estimating seismic demand parameters using the endurance time method
Ramin MADARSHAHIAN; Homayoon ESTEKANCHI; Akbar MAHVASHMOHAMMADI
2011-01-01
The endurance time (ET) method is a time history based dynamic analysis in which structures are subjected to gradually intensifying excitations and their performances are judged based on their responses at various excitation levels. Using this method, the computational effort required for estimating probable seismic demand parameters can be reduced by an order of magnitude. Calculation of the maximum displacement or target displacement is a basic requirement for estimating performance based on structural design. The purpose of this paper is to compare the results of the nonlinear ET method with the nonlinear static pushover (NSP) method of FEMA 356 by evaluating performances and target displacements of steel frames. This study will lead to a deeper insight into the capabilities and limitations of the ET method. The results are further compared with those of the standard nonlinear response history analysis. We conclude that results from the ET analysis are in proper agreement with those from standard procedures.
Estimation of Aircraft Nonlinear Unsteady Parameters From Wind Tunnel Data
Klein, Vladislav; Murphy, Patrick C.
1998-01-01
Aerodynamic equations were formulated for an aircraft in one-degree-of-freedom large amplitude motion about each of its body axes. The model formulation based on indicial functions separated the resulting aerodynamic forces and moments into static terms, purely rotary terms and unsteady terms. Model identification from experimental data combined stepwise regression and maximum likelihood estimation in a two-stage optimization algorithm that can identify the unsteady term and rotary term if necessary. The identification scheme was applied to oscillatory data in two examples. The model identified from experimental data fit the data well; however, some parameters were estimated with limited accuracy. The resulting model was a good predictor for oscillatory and ramp input data.
Estimation of growth parameters using a nonlinear mixed Gompertz model.
Wang, Z; Zuidhof, M J
2004-06-01
In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
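A fixed-effects version of the Gompertz fit can be sketched with SciPy's `curve_fit`; the parameterization and the weekly body-weight data below are illustrative assumptions (the paper's mixed model, with bird-level random effects and heterogeneous variance, would need a dedicated mixed-modelling package):

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, wm, b, c):
    # Gompertz growth curve: wm is the mature weight asymptote, b the
    # displacement along the age axis, c the rate of maturing.
    return wm * np.exp(-b * np.exp(-c * t))

# Hypothetical broiler body weights (g) at weekly ages (d), noiseless for
# clarity; real data would carry between- and within-bird variation.
age = np.array([0.0, 7, 14, 21, 28, 35, 42])
true = (2500.0, 4.0, 0.08)
bw = gompertz(age, *true)

popt, pcov = curve_fit(gompertz, age, bw, p0=(2000.0, 3.0, 0.05))
```

In the mixed formulation, wm (and possibly c) would vary by bird around these population values, which is what partitions the between- and within-bird variance.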
Area-to-point parameter estimation with geographically weighted regression
Murakami, Daisuke; Tsutsumi, Morito
2015-07-01
The modifiable areal unit problem (MAUP) is the phenomenon whereby the aggregation of spatial data units influences the results of spatial data analysis. Standard geographically weighted regression (GWR), which ignores aggregation mechanisms, cannot serve as an efficient countermeasure against the MAUP. Accordingly, this study proposes a type of GWR with aggregation mechanisms, termed area-to-point (ATP) GWR herein. ATP GWR, which is closely related to geostatistical approaches, estimates the disaggregate-level local trend parameters by using aggregated variables. We examine the effectiveness of ATP GWR in mitigating the MAUP through a simulation study and an empirical study. The simulation study indicates that the proposed method is robust to the MAUP when the spatial scales of aggregation are not too global compared with the scale of the underlying spatial variations. The empirical studies demonstrate that the method provides intuitively consistent estimates.
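Standard (non-ATP) GWR, the baseline discussed above, reduces at each target location to weighted least squares with a distance-decay kernel. A minimal sketch on synthetic data with a spatially varying slope; all names and values are illustrative:

```python
import numpy as np

def gwr_coefs(X, y, coords, target, bandwidth):
    # Local weighted least squares at one target location with a
    # Gaussian distance-decay kernel (standard GWR, not the ATP variant).
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))
slope = 1.0 + 0.2 * coords[:, 0]      # coefficient drifts west to east
x = rng.normal(size=n)
y = 2.0 + slope * x                   # noiseless for clarity
X = np.column_stack([np.ones(n), x])

beta_east = gwr_coefs(X, y, coords, np.array([5.0, 5.0]), bandwidth=1.0)
beta_west = gwr_coefs(X, y, coords, np.array([1.0, 5.0]), bandwidth=1.0)
```

ATP GWR replaces the point observations here with areal aggregates and adds the aggregation mechanism to the estimator, which is the paper's contribution.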
Optimization-based particle filter for state and parameter estimation
Li Fu; Qi Fei; Shi Guangming; Zhang Li
2009-01-01
In recent years, the theory of particle filter has been developed and widely used for state and parameter estimation in nonlinear/non-Gaussian systems. Choosing good importance density is a critical issue in particle filter design. In order to improve the approximation of posterior distribution, this paper provides an optimization-based algorithm (the steepest descent method) to generate the proposal distribution and then sample particles from the distribution. This algorithm is applied in 1-D case, and the simulation results show that the proposed particle filter performs better than the extended Kalman filter (EKF), the standard particle filter (PF), the extended Kalman particle filter (PF-EKF) and the unscented particle filter (UPF) both in efficiency and in estimation precision.
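For reference, the standard bootstrap particle filter that the proposed optimization-based proposal improves upon can be sketched in a few lines; the 1-D random-walk model and noise levels below are assumptions for illustration (the paper's steepest-descent proposal is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(y, n_particles=500, q=0.25, r=0.25):
    # Standard bootstrap particle filter for the 1-D model
    #   x_t = x_{t-1} + N(0, q),   y_t = x_t + N(0, r).
    x = rng.normal(0.0, 1.0, n_particles)     # initial particle cloud
    means = []
    for obs in y:
        x = x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        w = np.exp(-0.5 * (obs - x) ** 2 / r)             # likelihood weights
        w /= w.sum()
        means.append(np.sum(w * x))                       # posterior mean
        x = rng.choice(x, size=n_particles, p=w)          # resample
    return np.array(means)

truth = np.cumsum(rng.normal(0.0, 0.5, 50))   # latent random walk
y = truth + rng.normal(0.0, 0.5, 50)          # noisy observations
est = bootstrap_pf(y)
```

In the bootstrap filter the transition density serves as the importance density; the paper's idea is to sharpen that proposal with a steepest-descent step toward the posterior before sampling.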
Wang, Gang; Briskot, Till; Hahn, Tobias; Baumann, Pascal; Hubbuch, Jürgen
2017-03-03
Mechanistic modeling has been repeatedly successfully applied in process development and control of protein chromatography. For each combination of adsorbate and adsorbent, the mechanistic models have to be calibrated. Some of the model parameters, such as system characteristics, can be determined reliably by applying well-established experimental methods, whereas others cannot be measured directly. In common practice of protein chromatography modeling, these parameters are identified by applying time-consuming methods such as frontal analysis combined with gradient experiments, curve-fitting, or combined Yamamoto approach. For new components in the chromatographic system, these traditional calibration approaches require to be conducted repeatedly. In the presented work, a novel method for the calibration of mechanistic models based on artificial neural network (ANN) modeling was applied. An in silico screening of possible model parameter combinations was performed to generate learning material for the ANN model. Once the ANN model was trained to recognize chromatograms and to respond with the corresponding model parameter set, it was used to calibrate the mechanistic model from measured chromatograms. The ANN model's capability of parameter estimation was tested by predicting gradient elution chromatograms. The time-consuming model parameter estimation process itself could be reduced down to milliseconds. The functionality of the method was successfully demonstrated in a study with the calibration of the transport-dispersive model (TDM) and the stoichiometric displacement model (SDM) for a protein mixture.
Lunar ³He estimations and related parameter analyses
Anonymous
2010-01-01
As a potential nuclear fuel, ³He is significant both for the solution of the impending human energy crisis and for the conservation of the natural environment. Lunar regolith contains abundant and easily extracted ³He. Based on analyses of the factors affecting ³He abundance, we compare a few key assessment parameters and approaches used in estimating the ³He reserve in lunar regolith, together with some representative estimation results, and discuss the issues involved in ³He abundance variation and ³He reserve estimation. Our studies suggest that, over a range of at least several metres in depth, the ³He abundance in lunar regolith is homogeneously distributed and generally does not depend on depth; lunar regolith has long been in a saturation state of ³He trapped in minerals through chemical bonds, and the temperature fluctuation on the lunar surface exerts little influence on the lattice ³He abundance. Based on these conclusions and the newest lunar regolith depth data from the microwave brightness temperature retrieval of the Chang'E-1 Lunar Microwave Sounder, a new ³He reserve estimate is presented.
Behmanesh, Iman; Moaveni, Babak
2016-07-01
This paper presents a Hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the considered updating structural parameter with its mean and variance modeled as functions of temperature and excitation amplitude. The identified modal parameters over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies and those identified from measured data after deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and accounting only the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.
Reliability estimation for 18Ni steel under low cycle fatigue using probabilistic technique
Lee, Ouk Sub; Choi, Hye Bin; Kim, Dong Hyeok; Kim, Hong Min [Inha Univ., Incheon (Korea, Republic of)
2008-07-01
In this study, the fatigue life of 18Ni maraging steel under both low- and high-cycle conditions is estimated using FORM (First Order Reliability Method). Strain-based fatigue models, namely the Coffin-Manson fatigue theory and Morrow's mean stress method, are utilized. A limit state function incorporating these two models was established. A case study for a material with given material properties was carried out to demonstrate the application of the proposed reliability estimation process. The effect of the mean stress of the varying fatigue loading on the failure probability has also been investigated.
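The Coffin-Manson limit state used above is nonlinear, but the core of FORM is easiest to see in the linear-Gaussian special case, where the reliability index is exact. The following sketch (an illustration under assumed values, not the authors' code; all names and numbers are hypothetical) computes the reliability index and failure probability for the limit state g = R - S with independent Gaussian strength R and load S:

```python
import math

def form_linear_gaussian(mu_r, sigma_r, mu_s, sigma_s):
    """FORM for the linear limit state g = R - S with independent
    Gaussian strength R and load S. For this case the reliability
    index beta is exact and the failure probability is Phi(-beta)."""
    beta = (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))  # standard normal Phi(-beta)
    return beta, pf

# Illustrative strength/load statistics (hypothetical units):
beta, pf = form_linear_gaussian(mu_r=10.0, sigma_r=1.0, mu_s=6.0, sigma_s=1.0)
```

For nonlinear limit states such as the strain-based fatigue models, FORM additionally requires an iterative search for the most probable failure point before the same Phi(-beta) step applies.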
Estimation of Reliability and Cost Relationship for Architecture-based Software
Hui Guan; Wei-Ru Chen; Ning Huang; Hong-Ji Yang
2010-01-01
In this paper, we propose a new method to estimate the relationship between software reliability and software development cost, taking into account the complexity of developing the software system and the size of the software to be developed during the implementation phase of the software development life cycle. On the basis of the estimated relationship, a set of empirical data has been used to validate the correctness of the proposed model by comparing its results with those of other existing models. The outcome of this work shows that the proposed method is a relatively straightforward one for formulating the relationship between reliability and cost during the implementation phase.
ZHANG Hua; WANG Yun-jia; LI Yong-feng
2009-01-01
A new mathematical model to estimate the parameters of the probability-integral method for mining subsidence prediction is proposed. Based on least squares support vector machine (LS-SVM) theory, it is capable of improving the precision and reliability of mining subsidence prediction. Many of the geological and mining factors involved are related in a nonlinear way. The new model is based on statistical learning theory (SLT) and the empirical risk minimization (ERM) principle. Typical data collected from observation stations were used as the learning and training samples. The calculated results from the LS-SVM model were compared with the prediction results of a back propagation neural network (BPNN) model. The results show that the parameters were more precisely predicted by the LS-SVM model than by the BPNN model. The LS-SVM model was faster in computation and had better generalization performance. It provides a highly effective method for calculating the prediction parameters of the probability-integral method.
Wang, Jun; Zhou, Bihua; Zhou, Shudao
2016-01-01
This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation search ability. The proposed algorithm is then used to estimate the parameters of the Lorenz chaotic system and the Chen chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with those of the CS algorithm, a genetic algorithm, and a particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior.
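To illustrate the idea of estimating a system parameter by minimizing the discrepancy between simulated and observed trajectories, here is a toy cuckoo-search-style random search (a minimal stand-in, not the authors' ICS with orthogonal design and simulated annealing). The logistic map in its non-chaotic regime is used as the "system" so the cost surface stays smooth; all names and settings are illustrative:

```python
import random

def logistic_traj(r, x0=0.2, n=30):
    """Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def cost(r, observed):
    """Sum of squared errors between simulated and observed trajectories."""
    sim = logistic_traj(r, n=len(observed))
    return sum((a - b) ** 2 for a, b in zip(sim, observed))

def cuckoo_search(observed, n_nests=15, iters=200, pa=0.25, seed=1):
    rng = random.Random(seed)
    nests = [rng.uniform(2.5, 4.0) for _ in range(n_nests)]
    for _ in range(iters):
        best = min(nests, key=lambda r: cost(r, observed))
        # New candidate via a heavy-tailed (Levy-like) step around the best nest.
        step = rng.gauss(0, 0.1) / max(abs(rng.gauss(0, 1)), 1e-6)
        cand = min(max(best + 0.01 * step, 2.5), 4.0)
        worst_i = max(range(n_nests), key=lambda i: cost(nests[i], observed))
        if cost(cand, observed) < cost(nests[worst_i], observed):
            nests[worst_i] = cand
        # Abandon a fraction pa of nests (never the current best).
        for i in range(n_nests):
            if rng.random() < pa and nests[i] != best:
                nests[i] = rng.uniform(2.5, 4.0)
    return min(nests, key=lambda r: cost(r, observed))

true_r = 2.8
observed = logistic_traj(true_r)
est = cuckoo_search(observed)
```

The real ICS targets chaotic systems, where the cost surface is far spikier; the enhancements described in the abstract exist precisely to cope with that harder landscape.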
Parameter estimation and uncertainty for gravitational waves from binary black holes
Berry, Christopher; LIGO Scientific Collaboration; Virgo Collaboration
2016-03-01
Binary black holes are one of the most promising sources of gravitational waves that could be observed by Advanced LIGO. To accurately infer the parameters of an astrophysical signal, it is necessary to have a reliable model of the gravitational waveform. Uncertainty in the waveform leads to uncertainty in the measured parameters. For loud signals, this theoretical uncertainty could dominate statistical uncertainty and become the primary source of error in gravitational-wave astronomy. However, we expect the first candidate events to be closer to the detection threshold. We look at how parameter estimation would be influenced by the use of different waveform models for a binary black-hole signal near the detection threshold, and how this can be folded into a Bayesian analysis.
Luo, Yangjun; Wu, Xiaoxiang; Zhou, Mingdong
2015-01-01
Both structural sizes and dimensional tolerances strongly influence the manufacturing cost and the functional performance of a practical product. This paper presents an optimization method to simultaneously find the optimal combination of structural sizes and dimensional tolerances. Based on a probability-interval mixed reliability model, the imprecision of design parameters is modeled as interval uncertainties fluctuating within allowable tolerance bounds. The optimization model is defined as minimizing the total manufacturing cost under mixed reliability index constraints, which are further transformed into their equivalent formulations by using the performance measure approach. The optimization problem is then solved with sequential approximate programming. Meanwhile, a numerically stable algorithm based on the trust region method is proposed to efficiently update the target performance...
Iterative optimization algorithm with parameter estimation for the ambulance location problem.
Kim, Sun Hoon; Lee, Young Hoon
2016-12-01
The emergency vehicle location problem to determine the number of ambulance vehicles and their locations satisfying a required reliability level is investigated in this study. This is a complex nonlinear issue involving critical decision making that has inherent stochastic characteristics. This paper studies an iterative optimization algorithm with parameter estimation to solve the emergency vehicle location problem. In the suggested algorithm, a linear model determines the locations of ambulances, while a hypercube simulation is used to estimate and provide parameters regarding ambulance locations. First, we suggest an iterative hypercube optimization algorithm in which interaction parameters and rules for the hypercube and optimization are identified. The interaction rules employed in this study enable our algorithm to always find the locations of ambulances satisfying the reliability requirement. We also propose an iterative simulation optimization algorithm in which the hypercube method is replaced by a simulation, to achieve computational efficiency. The computational experiments show that the iterative simulation optimization algorithm performs equivalently to the iterative hypercube optimization. The suggested algorithms are found to outperform existing algorithms suggested in the literature.
Estimated Value of Service Reliability for Electric Utility Customers in the United States
Sullivan, M.J.; Mercurio, Matthew; Schellenberg, Josh
2009-06-01
Information on the value of reliable electricity service can be used to assess the economic efficiency of investments in generation, transmission and distribution systems, to strategically target investments to customer segments that receive the most benefit from system improvements, and to numerically quantify the risk associated with different operating, planning and investment strategies. This paper summarizes research designed to provide estimates of the value of service reliability for electricity customers in the US. These estimates were obtained by analyzing the results of 28 customer value-of-service reliability studies conducted by 10 major US electric utilities over the 16-year period from 1989 to 2005. Because these studies used nearly identical interruption cost estimation or willingness-to-pay/accept methods, it was possible to integrate their results into a single meta-database describing the value of electric service reliability observed in all of them. Once the datasets from the various studies were combined, a two-part regression model was used to estimate customer damage functions that can be generally applied to calculate customer interruption costs per event by season, time of day, day of week, and geographical region within the US for industrial, commercial, and residential customers. Estimated interruption costs for different types of customers and for interruptions of different durations are provided. Finally, additional research and development designed to expand the usefulness of this powerful database and analysis are suggested.
Sensitivity Analysis on the Reliability of an Offshore Winch Regarding Selected Gearbox Parameters
Lothar Wöll
2017-04-01
To match the high expectations and demands of customers for long-lasting machines, the development of reliable products is crucial. Furthermore, for reasons of competitiveness, it is necessary to know the future product lifetime as accurately as possible to avoid over-dimensioning. Additionally, a more detailed system understanding enables the designer to influence the life expectancy of the product without performing an extensive amount of expensive and time-consuming tests. In early development stages of new equipment, only very basic information about the future system design, like the ratio or the system structure, is available. Nevertheless, a reliable lifetime prediction of the system components, and subsequently of the system itself, is necessary to evaluate possible design alternatives and to identify critical components beforehand. Lifetime predictions, however, require many parameters, which are often not known in these early stages. Therefore, this paper performs a sensitivity analysis on the drivetrain of an offshore winch with active heave compensation for two typical load cases. The influences of the parameters gear center distance and ambient temperature are investigated by varying them within typical ranges and evaluating their quantitative effect on the lifetime.
Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang
2017-02-09
The Nakagami distribution is a useful approximation to the statistics of ultrasound backscattered signals for tissue characterization. Different estimators may affect the Nakagami parameter used to detect changes in backscattered statistics. In particular, the moment-based estimator (MBE) and the maximum likelihood estimator (MLE) are the two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimation. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters using the MBE, first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparison. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimation with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization.
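The moment-based estimator mentioned above has a compact closed form: the shape parameter m is the squared mean of the intensity divided by its variance, m = E[R^2]^2 / Var(R^2). A minimal sketch (illustrative, not the study's code), checked on a simulated Rayleigh envelope, for which the true Nakagami m is 1:

```python
import math
import random

def nakagami_mbe(envelope):
    """Moment-based (inverse normalized variance) estimator of the
    Nakagami shape parameter: m = E[R^2]^2 / Var(R^2)."""
    r2 = [x * x for x in envelope]
    n = len(r2)
    mean_r2 = sum(r2) / n
    var_r2 = sum((v - mean_r2) ** 2 for v in r2) / n
    return mean_r2 ** 2 / var_r2

# A Rayleigh envelope is a Nakagami distribution with m = 1:
rng = random.Random(0)
samples = [math.sqrt(rng.gauss(0, 1) ** 2 + rng.gauss(0, 1) ** 2)
           for _ in range(20000)]
m_hat = nakagami_mbe(samples)
```

The study's point is that this estimator, while simple, is less stable than the MLE2 and MLEgw approximations when the sample size is small.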
Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik
1995-01-01
Three identical microbiological experiments were carried out and analysed in order to examine the variability of the parameter estimates. The microbiological system consisted of a substrate (toluene) and a biomass (pure culture) mixed together in an aquifer medium. The degradation of the substrate...
Parameter estimation and hypothesis testing in linear models
Koch, Karl-Rudolf
1999-01-01
The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated to improve the text, to point out new developments, or to eliminate errors which became apparent. A few examples have also been added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility for errors remains with the author. I also want to express my thanks...
A robust methodology for modal parameters estimation applied to SHM
Cardoso, Rharã; Cury, Alexandre; Barbosa, Flávio
2017-10-01
The subject of structural health monitoring has been drawing more and more attention over the last years. Many vibration-based techniques aimed at detecting small structural changes or even damage have been developed or enhanced through successive studies. Lately, several studies have focused on the use of raw dynamic data to assess information about structural condition. Despite this trend and much skepticism, many methods still rely on modal parameters as fundamental data for damage detection. Therefore, it is of utmost importance that modal identification procedures are performed with a sufficient level of precision and automation. To fulfill these requirements, this paper presents a novel automated time-domain methodology to identify modal parameters based on a two-step clustering analysis. The first step consists of clustering mode estimates from parametric models of different orders, usually presented in stabilization diagrams. In an automated manner, the first clustering analysis indicates which estimates correspond to physical modes. To circumvent the detection of spurious modes or the loss of physical ones, a second clustering step is then performed, consisting of data mining the information gathered in the first step. To attest to the robustness and efficiency of the proposed methodology, numerically generated signals as well as experimental data obtained from a simply supported beam tested in the laboratory and from a railway bridge are utilized. The results appear to be more robust and accurate compared with those obtained from methods based on one-step clustering analysis.
Linear Estimation of Location and Scale Parameters Using Partial Maxima
Papadatos, Nickos
2010-01-01
Consider an i.i.d. sample X^*_1,X^*_2,...,X^*_n from a location-scale family, and assume that the only available observations consist of the partial maxima (or minima) sequence, X^*_{1:1},X^*_{2:2},...,X^*_{n:n}, where X^*_{j:j}=max{X^*_1,...,X^*_j}. This kind of truncation appears in several circumstances, including best performances in athletics events. In the case of partial maxima, the form of the BLUEs (best linear unbiased estimators) is quite similar to the form of the well-known Lloyd (1952, Least-squares estimation of location and scale parameters using order statistics, Biometrika, vol. 39, pp. 88-95) BLUEs, based on (the sufficient sample of) order statistics, but, in contrast to the classical case, their consistency is no longer obvious. The present paper is mainly concerned with the scale parameter, showing that the variance of the partial maxima BLUE is at most of order O(1/log n), for a wide class of distributions.
Estimation of Secondary Meteorological Parameters Using Mining Data Techniques
Rosabel Zerquera Díaz
2010-10-01
This work develops a Knowledge Discovery in Databases (KDD) process at the Higher Polytechnic Institute José Antonio Echeverría for the Environmental Research group, in collaboration with the Center of Information Management and Energy Development (CUBAENERGÍA), in order to obtain a data model to estimate the behavior of secondary weather parameters from surface data. It describes some aspects of data mining and its application in the meteorological field, and selects and describes the CRISP-DM methodology and the data analysis tool WEKA. Tasks used: attribute selection and regression; technique: a neural network of the multilayer perceptron type; algorithms: CfsSubsetEval, BestFirst and MultilayerPerceptron. Estimation models are obtained for the secondary meteorological parameters height of the convective mixed layer, height of the mechanical mixed layer and convective velocity scale, which are necessary for studying the dispersion patterns of pollutants in Cujae's area. The results set a precedent for future research and for the continuity of this work in its first stage.
Parameter estimation in space systems using recurrent neural networks
Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.
1991-01-01
The identification of time-varying parameters encountered in space systems is addressed using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and used for both off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network during two configurations corresponding to the early and late stages of the spacecraft's on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results, though further research is warranted on training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.
Periodic orbits of hybrid systems and parameter estimation via AD.
Guckenheimer, John. (Cornell University); Phipps, Eric Todd; Casey, Richard (INRIA Sophia-Antipolis)
2004-07-01
Rhythmic, periodic processes are ubiquitous in biological systems; for example, the heart beat, walking, circadian rhythms and the menstrual cycle. Modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging because: (1) (most) nonlinear differential equations can only be solved numerically; (2) accurate computation requires solving boundary value problems; (3) many problems and solutions are only piecewise smooth; (4) many problems require solving differential-algebraic equations; (5) sensitivity information for parameter dependence of solutions requires solving variational equations; and (6) truncation errors in numerical integration degrade the performance of optimization methods for parameter estimation. In addition, mathematical models of biological processes frequently contain many poorly-known parameters, and the problems associated with this impede the construction of detailed, high-fidelity models. Modelers are often faced with the difficult problem of using simulations of a nonlinear model, with complex dynamics and many parameters, to match experimental data. Improved computational tools for exploring parameter space and fitting models to data are clearly needed. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to accurately and efficiently evaluate derivatives for time integration, parameter sensitivities, root finding and optimization. The boundary value problem representing a periodic orbit in a hybrid system of differential-algebraic equations is discretized via multiple shooting using a high-degree Taylor series integration method [GM00, Phi03]. Numerical solutions to the shooting equations are then estimated by a Newton process, yielding an approximate periodic orbit. A metric is defined for computing the distance...
Distributed parameter estimation in wireless sensor networks using fused local observations
Fanaei, Mohammad; Valenti, Matthew C.; Schmid, Natalia A.; Alkhweldi, Marwan M.
2012-05-01
The goal of this paper is to reliably estimate a vector of unknown deterministic parameters associated with an underlying function at a fusion center of a wireless sensor network, based on its noisy samples made at distributed local sensors. A set of noisy samples of a deterministic function characterized by a finite set of unknown parameters to be estimated is observed by distributed sensors. The parameters to be estimated can be some attributes associated with the underlying function, such as its height, its center, its variances in different directions, or even the weights of its specific components over a predefined basis set. Each local sensor processes its observation and sends its processed sample to a fusion center through parallel impaired communication channels. Two local processing schemes, namely analog and digital, are considered. In the analog local processing scheme, each sensor transmits an amplified version of its local analog noisy observation to the fusion center, acting like a relay in a wireless network. In the digital local processing scheme, each sensor quantizes its noisy observation before transmitting it to the fusion center. A flat-fading channel model is considered between the local sensors and the fusion center. The fusion center combines all of the received locally-processed observations and estimates the vector of unknown parameters of the underlying function. Two different well-known estimation techniques, namely maximum-likelihood (ML), for both analog and digital local processing schemes, and expectation maximization (EM), for the digital local processing scheme, are considered at the fusion center. The performance of the proposed distributed parameter estimation system is investigated through simulation of practical scenarios for a sample underlying function.
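For the analog scheme with a scalar parameter, the fusion-center ML estimate reduces to an inverse-variance weighted least-squares combination of the received signals. A hedged sketch under assumed Gaussian observation and channel noises (the gains, variances, and function names are all illustrative, not taken from the paper):

```python
import random

def fuse_ml(received, gains, obs_var, ch_var):
    """ML estimate of a scalar parameter theta from analog-relayed
    readings y_k = g_k*(theta + v_k) + w_k with zero-mean Gaussian
    noises v_k, w_k: an inverse-variance weighted LS solution."""
    num = den = 0.0
    for y, g in zip(received, gains):
        var_k = g * g * obs_var + ch_var  # total noise variance at the fusion center
        num += g * y / var_k
        den += g * g / var_k
    return num / den

rng = random.Random(42)
theta = 5.0                       # unknown parameter to recover
gains = [1.0, 0.8, 1.2, 0.9]      # per-sensor amplification gains
obs_var, ch_var = 0.04, 0.01      # observation and channel noise variances
received = [g * (theta + rng.gauss(0, obs_var ** 0.5)) + rng.gauss(0, ch_var ** 0.5)
            for g in gains]
theta_hat = fuse_ml(received, gains, obs_var, ch_var)
```

The digital scheme replaces the raw observations with quantized values, which is why the paper brings in EM there instead of this closed form.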
Shenggang WANG; Hanlong LI; Chiwei LUNG
2003-01-01
The effects of the accuracy of the measured fractal dimension D and roughness exponent H are investigated in this paper, with a view to examining the reliability of D and H as dimensionless materials parameters of fracture surfaces. D and H differ from general physical quantities because they are dimensionless and often appear as exponents in a theoretical function or formula. In many cases, the error of the physical quantity related to D or H may far exceed 10% if D or H has an error of around 10%. The required accuracy of the fractal dimension and roughness exponent should therefore be higher, but it depends on the specific material, the associated physical quantity, and the scale of measurement.
Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models
Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea
2014-05-01
Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represent a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.
Bayesian Approach in Estimation of Scale Parameter of Nakagami Distribution
Azam Zaka
2014-08-01
The Nakagami distribution is a flexible lifetime distribution that may offer a good fit to some failure data sets. It has applications in the attenuation of wireless signals traversing multiple paths, deriving unit hydrographs in hydrology, medical imaging studies, etc. In this research, we obtain Bayesian estimators of the scale parameter of the Nakagami distribution. For the posterior distribution of this parameter, we consider Uniform, Inverse Exponential and Levy priors. The three loss functions taken up are the Squared Error Loss Function (SELF), the Quadratic Loss Function and the Precautionary Loss Function (PLF). The performance of an estimator is assessed on the basis of its relative posterior risk. Monte Carlo simulations are used to compare the performance of the estimators. It is found that the PLF produces the least posterior risk when the Uniform prior is used, while SELF is best when the Inverse Exponential and Levy priors are used.
Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions
Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.
2017-01-01
Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.
Parameter Estimation of Nonlinear Systems by Dynamic Cuckoo Search.
Liao, Qixiang; Zhou, Shudao; Shi, Hanqing; Shi, Weilai
2017-04-01
To address shortcomings of the traditional and improved cuckoo search (CS) algorithms, we propose a dynamic adaptive cuckoo search with crossover operator (DACS-CO) algorithm. Normally, the parameters of the CS algorithm are kept constant or adapted by an empirical equation, which may decrease the efficiency of the algorithm. To solve this problem, a feedback control scheme for the algorithm parameters is adopted in cuckoo search; Rechenberg's 1/5 criterion, combined with a learning strategy, is used to evaluate the evolution process. In addition, the basic CS algorithm has no information exchange between individuals. To promote search progress and overcome premature convergence, a multiple-point random crossover operator is merged into the CS algorithm to exchange information between individuals and improve the diversification and intensification of the population. The performance of the proposed hybrid algorithm is investigated on different nonlinear systems, with the numerical results demonstrating that the method can estimate parameters accurately and efficiently. Finally, we compare the results with the standard CS algorithm, the orthogonal learning cuckoo search algorithm (OLCS), an adaptive cuckoo search with simulated annealing operation (ACS-SA), a genetic algorithm (GA), a particle swarm optimization algorithm (PSO), and a genetic simulated annealing algorithm (GA-SA). Our simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
Estimating negative binomial parameters from occurrence data with detection times.
Hwang, Wen-Han; Huggins, Richard; Stoklosa, Jakub
2016-11-01
The negative binomial distribution is a common model for the analysis of count data in biology and ecology. In many applications, we may not observe the complete frequency count in a quadrat but only that a species occurred in the quadrat. If only occurrence data are available, then the two parameters of the negative binomial distribution, the aggregation index and the mean, are not identifiable. This can be overcome by data augmentation or by modeling the dependence between quadrat occupancies. Here, we propose to record the (first) detection time while collecting occurrence data in a quadrat. We show that under what we call proportionate sampling, where the time to survey a region is proportional to the area of the region, both negative binomial parameters are estimable. When the mean parameter is larger than two, our proposed approach is more efficient than the data augmentation method developed by Solow and Smith (Am. Nat. 176, 96-98), and in general is cheaper to conduct. We also investigate the effect of misidentification when collecting negative binomially distributed data, and conclude that, in general, the effect can simply be adjusted for provided that the mean and variance of the misidentification probabilities are known. The results are demonstrated in a simulation study and illustrated in several real examples.
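The identifiability problem with occurrence-only data can be made concrete. For a negative binomial count with mean m and aggregation (size) parameter k, the occupancy probability is P(N > 0) = 1 - (1 + m/k)^(-k), and distinct (k, m) pairs can yield the same occupancy. A small sketch using only this standard formula (not the paper's detection-time estimator):

```python
def occupancy_prob(mean, k):
    # P(count > 0) for a negative binomial with the given mean and
    # aggregation (size) parameter k.
    return 1.0 - (1.0 + mean / k) ** (-k)

def mean_for_occupancy(p_occ, k):
    # For any fixed k, invert the occupancy probability to recover a mean
    # that reproduces it -- so occurrence data alone cannot identify both
    # parameters at once.
    return k * ((1.0 - p_occ) ** (-1.0 / k) - 1.0)
```

For example, (k = 1, mean = 2) and (k = 0.5, mean = 4) both give an occupancy probability of 2/3, which is exactly why extra information such as first detection times is needed.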
On-line estimation of concentration parameters in fermentation processes
XIONG Zhi-hua; HUANG Guo-hong; SHAO Hui-he
2005-01-01
It has long been thought that bioprocesses, with their inherent measurement difficulties and complex dynamics, pose almost insurmountable problems to engineers. A novel software sensor is proposed to make more effective use of those measurements that are already available, enabling improvements in fermentation process control. The proposed method is based on mixtures of Gaussian processes (GP), with the expectation maximization (EM) algorithm employed for parameter estimation of the mixture of models. The mixture model can alleviate the computational complexity of GP and also accommodate changes in operating conditions in fermentation processes, i.e., it can determine which types of process knowledge are most relevant for local models at specific operating points of the process and then combine them into a global one. Demonstrated on the on-line estimation of yeast concentration in the fermentation industry as an example, it is shown that soft-sensor-based state estimation is a powerful technique for both enhancing the automatic control performance of biological systems and implementing on-line monitoring and optimization.
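The core idea — blending local models into one global soft sensor, weighted by the current operating point — can be illustrated with simple Gaussian gating. This is a simplified stand-in for the mixture-of-GP machinery, with made-up local models, not the paper's method:

```python
import math

def gate(x, centre, width):
    # Unnormalized Gaussian responsibility of a local model for the
    # current operating point x.
    return math.exp(-0.5 * ((x - centre) / width) ** 2)

def soft_sensor(x, local_models, gates):
    # Global estimate: each local model's prediction weighted by how
    # close the operating point is to that model's regime centre.
    weights = [gate(x, c, w) for c, w in gates]
    total = sum(weights)
    preds = [m(x) for m in local_models]
    return sum(w * p for w, p in zip(weights, preds)) / total

# Hypothetical local models for two operating regimes (e.g. low and
# high substrate concentration).
low_regime = lambda x: 2.0 * x          # growth-phase behaviour
high_regime = lambda x: 10.0 + 0.5 * x  # saturation-phase behaviour
```

Near each regime centre the corresponding local model dominates, while between regimes the prediction interpolates smoothly, mirroring how the mixture model combines local expertise into a global estimate.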
Donald D. Anderson
2012-01-01
Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semi-automated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and interrater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93–0.99) and good inter-rater reliability (ICC 0.84–0.97). This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.
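The Shrout-Fleiss intraclass correlation used above comes from a two-way ANOVA decomposition of a complete targets-by-raters table. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater) — one of the standard Shrout-Fleiss forms, with the specific variant used in the study assumed:

```python
def icc_2_1(data):
    # data: list of rows (targets), each a list of ratings, one per rater.
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    # Two-way ANOVA sums of squares: rows (targets), columns (raters).
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    # Mean squares.
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    # Shrout & Fleiss ICC(2,1): two-way random, absolute agreement.
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Perfect agreement between raters yields an ICC of 1, while rater-specific offsets or noise pull it toward 0, which is why separate day-to-day and inter-rater ICCs characterize the two reliability questions in the study.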