WorldWideScience

Sample records for multiple parameter estimation

  1. Multiple-hit parameter estimation in monolithic detectors.

    Science.gov (United States)

    Hunter, William C J; Barrett, Harrison H; Lewellen, Tom K; Miyaoka, Robert S

    2013-02-01

We examine a maximum-a-posteriori method for estimating the primary interaction position of gamma rays with multiple interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square cerium-doped lutetium oxyorthosilicate block. For this study, we use simulated data from SCOUT, a Monte-Carlo tool for photon tracking and modeling scintillation-camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a one-hit maximum-likelihood estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1%-12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator as compared with the 1-hit estimator; more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator; for PET, this equates to at least a 12% improvement in coincidence-detection efficiency with likelihood filtering applied.

  2. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.

  3. A quasi-sequential parameter estimation for nonlinear dynamic systems based on multiple data profiles

    International Nuclear Information System (INIS)

    Zhao, Chao; Vu, Quoc Dong; Li, Pu

    2013-01-01

A three-stage computation framework for solving parameter estimation problems for dynamic systems with multiple data profiles is developed. The dynamic parameter estimation problem is transformed into a nonlinear programming (NLP) problem by using collocation on finite elements. The model parameters to be estimated are treated in the upper stage by solving an NLP problem. The middle stage consists of multiple NLP problems nested in the upper stage, representing the data reconciliation step for each data profile. We use the quasi-sequential dynamic optimization approach to solve these problems. In the lower stage, the state variables and their gradients are evaluated through integrating the model equations. Since second-order derivatives are not required in the computation framework, the proposed method is efficient for solving nonlinear dynamic parameter estimation problems. The computational results obtained on a parameter estimation problem for two CSTR models demonstrate the effectiveness of the proposed approach.

  4. A quasi-sequential parameter estimation for nonlinear dynamic systems based on multiple data profiles

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Chao [FuZhou University, FuZhou (China); Vu, Quoc Dong; Li, Pu [Ilmenau University of Technology, Ilmenau (Germany)

    2013-02-15

A three-stage computation framework for solving parameter estimation problems for dynamic systems with multiple data profiles is developed. The dynamic parameter estimation problem is transformed into a nonlinear programming (NLP) problem by using collocation on finite elements. The model parameters to be estimated are treated in the upper stage by solving an NLP problem. The middle stage consists of multiple NLP problems nested in the upper stage, representing the data reconciliation step for each data profile. We use the quasi-sequential dynamic optimization approach to solve these problems. In the lower stage, the state variables and their gradients are evaluated through integrating the model equations. Since second-order derivatives are not required in the computation framework, the proposed method is efficient for solving nonlinear dynamic parameter estimation problems. The computational results obtained on a parameter estimation problem for two CSTR models demonstrate the effectiveness of the proposed approach.

  5. Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors

    Directory of Open Access Journals (Sweden)

    Le Zuo

    2018-04-01

This paper focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources, into separate groups and separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we propose to adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely, the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are executed iteratively and alternately to jointly determine the DOAs and sort the multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realizes an optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramér-Rao lower bound is derived for understanding the estimation accuracy and for performance comparison. The proposed method is verified by simulations.
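
    As an illustrative sketch only (not the record's phase-based DOA estimator): the E-step/M-step alternation described above can be shown on a toy problem where scalar measurements from two interleaved sources are softly assigned to sources (E-step) and each source's parameter is re-estimated by maximum likelihood (M-step). All function and variable names below are invented for the example.

```python
import numpy as np

def em_sort(measurements, n_sources=2, n_iter=50):
    """Toy EM signal sorting: soft-assign scalar measurements to sources,
    then re-estimate each source's parameters from its assignments."""
    x = np.asarray(measurements, dtype=float)
    mu = np.random.choice(x, n_sources, replace=False)   # initial source parameters
    sigma = np.full(n_sources, x.std() + 1e-9)
    pi = np.full(n_sources, 1.0 / n_sources)
    for _ in range(n_iter):
        # E-step: posterior probability that each measurement came from each source
        logp = -0.5 * ((x[:, None] - mu) / sigma) ** 2 - np.log(sigma) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: ML re-estimation of each source's mean, spread and weight
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
        pi = nk / nk.sum()
    return mu, resp.argmax(axis=1)   # source parameters and hard assignments

# Two interleaved sources observed through noisy scalar measurements
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-1.0, 0.2, 200), rng.normal(2.0, 0.3, 200)])
mu_hat, labels = em_sort(data)
print(mu_hat)
```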

  6. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    Science.gov (United States)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions, the prior and the posterior; the posterior distribution depends on the choice of prior. Jeffreys' prior is a non-informative prior distribution, used when no prior information about the parameters is available. The non-informative Jeffreys' prior is combined with the sample information to give the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with a non-informative Jeffreys' prior. The parameter estimates of β and Σ are obtained as the expected values of their marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals that are difficult to evaluate in closed form; they are therefore approximated by generating random samples according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
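
    A minimal sketch of such a Gibbs sampler, assuming the standard conditionals under an (approximately) non-informative Jeffreys prior: B given Σ is matrix-normal around the OLS fit and Σ given B is inverse-Wishart on the residual scatter. Degrees-of-freedom conventions vary with the exact prior, so the hyperparameters below are one common choice, not necessarily the paper's; the data are synthetic.

```python
import numpy as np
from scipy.stats import invwishart, matrix_normal

def gibbs_mv_regression(Y, X, n_draws=2000, seed=0):
    """Gibbs sampler for Y = X B + E, rows of E ~ N(0, Sigma)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    XtX_inv = np.linalg.inv(X.T @ X)
    B_hat = XtX_inv @ X.T @ Y                 # OLS estimate
    B = B_hat.copy()
    draws_B, draws_S = [], []
    for _ in range(n_draws):
        resid = Y - X @ B
        # Sigma | B, Y ~ inverse-Wishart(df = n, scale = residual scatter)
        Sigma = invwishart.rvs(df=n, scale=resid.T @ resid, random_state=rng)
        # B | Sigma, Y ~ matrix-normal(B_hat, (X'X)^-1, Sigma)
        B = matrix_normal.rvs(mean=B_hat, rowcov=XtX_inv, colcov=Sigma,
                              random_state=rng)
        draws_B.append(B)
        draws_S.append(Sigma)
    return np.array(draws_B), np.array(draws_S)

# Synthetic data: 2 responses, 3 predictors plus intercept
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 3))])
B_true = rng.normal(size=(4, 2))
Y = X @ B_true + rng.normal(scale=0.5, size=(200, 2))
B_draws, S_draws = gibbs_mv_regression(Y, X)
print(B_draws[500:].mean(axis=0))             # posterior mean estimate of B
```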

  7. The Multiple-Choice Model: Some Solutions for Estimation of Parameters in the Presence of Omitted Responses

    Science.gov (United States)

    Abad, Francisco J.; Olea, Julio; Ponsoda, Vicente

    2009-01-01

    This article deals with some of the problems that have hindered the application of Samejima's and Thissen and Steinberg's multiple-choice models: (a) parameter estimation difficulties owing to the large number of parameters involved, (b) parameter identifiability problems in the Thissen and Steinberg model, and (c) their treatment of omitted…

  8. Estimate the contribution of incubation parameters influence egg hatchability using multiple linear regression analysis.

    Science.gov (United States)

    Khalil, Mohamed H; Shebl, Mostafa K; Kosba, Mohamed A; El-Sabrout, Karim; Zaki, Nesma

    2016-08-01

This research was conducted to determine the parameters that most affect the hatchability of indigenous and improved local chickens' eggs. Five parameters were studied (fertility, early and late embryonic mortalities, shape index, egg weight, and egg weight loss) on four strains, namely Fayoumi, Alexandria, Matrouh, and Montazah. Multiple linear regression was performed on the studied parameters to determine the one most influencing hatchability. The results showed significant differences in commercial and scientific hatchability among strains. The Alexandria strain had the highest significant commercial hatchability (80.70%). Regarding the studied strains, highly significant differences in hatching chick weight among strains were observed. Using multiple linear regression analysis, fertility made the greatest percent contribution (71.31%) to hatchability, and the lowest percent contributions were made by shape index and egg weight loss. Prediction of hatchability using multiple regression analysis could be a good tool to improve hatchability percentage in chickens.
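
    A hedged sketch of the kind of multiple linear regression described above, using statsmodels. The column names, data and the simple contribution measure (normalized squared standardized coefficients) are illustrative assumptions, not the study's dataset or exact method.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative data; column names are placeholders, not the study's records.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "fertility": rng.uniform(80, 98, 120),
    "early_mortality": rng.uniform(0, 8, 120),
    "late_mortality": rng.uniform(0, 6, 120),
    "shape_index": rng.uniform(70, 80, 120),
    "egg_weight": rng.uniform(40, 60, 120),
    "egg_weight_loss": rng.uniform(8, 14, 120),
})
# Synthetic response dominated by fertility, mimicking the reported pattern.
df["hatchability"] = (0.9 * df["fertility"] - 1.5 * df["early_mortality"]
                      - 1.0 * df["late_mortality"] + rng.normal(0, 2, 120))

X = sm.add_constant(df.drop(columns="hatchability"))
fit = sm.OLS(df["hatchability"], X).fit()
print(fit.summary())                       # coefficients, t-tests, R-squared

# One simple "percent contribution" measure: squared standardized coefficients,
# normalized to 100% (only an analogue of the paper's contribution analysis).
z = (df - df.mean()) / df.std()
beta_std = sm.OLS(z["hatchability"], z.drop(columns="hatchability")).fit().params
print((beta_std**2 / (beta_std**2).sum() * 100).round(1))
```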

  9. A frequency domain global parameter estimation method for multiple reference frequency response measurements

    Science.gov (United States)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.

  10. Equilibrium expert: an add-in to Microsoft Excel for multiple binding equilibrium simulations and parameter estimations.

    Science.gov (United States)

    Raguin, Olivier; Gruaz-Guyon, Anne; Barbet, Jacques

    2002-11-01

An add-in to Microsoft Excel was developed to simulate multiple binding equilibria. A partition function, readily written even when the equilibrium is complex, describes the experimental system. It involves the concentrations of the different free molecular species and of the different complexes present in the experiment. As a result, the software is not restricted to a series of predefined experimental setups but can handle a large variety of problems involving up to nine independent molecular species. Binding parameters are estimated by nonlinear least-squares fitting of experimental measurements as supplied by the user. The fitting process allows user-defined weighting of the experimental data. The flexibility of the software and the way it may be used to describe common experimental situations and to deal with usual problems such as tracer reactivity or nonspecific binding is demonstrated by a few examples. The software is available free of charge upon request.
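
    The add-in itself is Excel-based; as a hedged illustration of the same underlying idea (nonlinear least-squares estimation of binding parameters from titration data), here is a scipy fit of a simple 1:1 binding isotherm. The model, Kd value and data points are made up for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def bound_fraction(L_total, Kd, Bmax):
    """Simple 1:1 binding isotherm (free ligand approximated by total ligand)."""
    return Bmax * L_total / (Kd + L_total)

# Synthetic titration data (arbitrary units); not from the paper.
L = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
rng = np.random.default_rng(0)
y = bound_fraction(L, Kd=5.0, Bmax=1.0) + rng.normal(0, 0.02, L.size)

# Nonlinear least-squares fit of the binding parameters.
popt, pcov = curve_fit(bound_fraction, L, y, p0=[1.0, 1.0])
perr = np.sqrt(np.diag(pcov))
print(f"Kd = {popt[0]:.2f} +/- {perr[0]:.2f}, Bmax = {popt[1]:.3f} +/- {perr[1]:.3f}")
```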

  11. Optomechanical parameter estimation

    International Nuclear Information System (INIS)

    Ang, Shan Zheng; Tsang, Mankei; Harris, Glen I; Bowen, Warwick P

    2013-01-01

    We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér–Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation–maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér–Rao bound most closely. Our analytic results are envisioned to be valuable to optomechanical experiment design, while the EM algorithm, with its ability to estimate most of the system parameters, is envisioned to be useful for optomechanical sensing, atomic magnetometry and fundamental tests of quantum mechanics. (paper)

  12. Ranking as parameter estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav; Guy, Tatiana Valentine

    2009-01-01

    Roč. 4, č. 2 (2009), s. 142-158 ISSN 1745-7645 R&D Projects: GA MŠk 2C06001; GA AV ČR 1ET100750401; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : ranking * Bayesian estimation * negotiation * modelling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/AS/karny- ranking as parameter estimation.pdf

  13. Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar

    Directory of Open Access Journals (Sweden)

    Shouguo Yang

    2015-12-01

A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters' outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix, and requiring only a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correctness and efficiency of the proposed method are verified by computer simulation results.

  14. Improved Estimates of Thermodynamic Parameters

    Science.gov (United States)

    Lawson, D. D.

    1982-01-01

    Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.

  15. A comparison on parameter-estimation methods in multiple regression analysis with existence of multicollinearity among independent variables

    Directory of Open Access Journals (Sweden)

    Hukharnsusatrue, A.

    2005-11-01

The objective of this research is to compare methods for estimating multiple regression coefficients in the presence of multicollinearity among the independent variables. The estimation methods are the Ordinary Least Squares method (OLS), the Restricted Least Squares method (RLS), the Restricted Ridge Regression method (RRR) and the Restricted Liu method (RL), when the restrictions are true and when they are not true. The study used the Monte Carlo simulation method; the experiment was repeated 1,000 times under each situation. The results are as follows. CASE 1: The restrictions are true. In all cases, the RRR and RL methods have a smaller Average Mean Square Error (AMSE) than the OLS and RLS methods, respectively. The RRR method provides the smallest AMSE when the level of correlation is high, and also provides the smallest AMSE for all levels of correlation and all sample sizes when the standard deviation is equal to 5. However, the RL method provides the smallest AMSE when the level of correlation is low or middle, except in the case of standard deviation equal to 3 with small sample sizes, where the RRR method provides the smallest AMSE. The AMSE varies with, from most to least, the level of correlation, the standard deviation and the number of independent variables, but inversely with sample size. CASE 2: The restrictions are not true. In all cases, the RRR method provides the smallest AMSE, except in the case of standard deviation equal to 1 and restriction error equal to 5%, where the OLS method provides the smallest AMSE when the level of correlation is low or medium and the sample size is large, but for small sample sizes the RL method provides the smallest AMSE. In addition, when the restriction error is increased, the OLS method provides the smallest AMSE for all levels of correlation and all sample sizes, except when the level of correlation is high and the sample size is small. Moreover, in the cases where the OLS method provides the smallest AMSE, the RLS method mostly has a smaller AMSE than
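
    A minimal Monte Carlo sketch of this kind of comparison, restricted to OLS versus ordinary ridge regression and scored by the average mean square error of the coefficient estimates. The restricted variants (RLS, RRR, RL) are not reproduced, and the correlation level, ridge penalty, sample size and replication count are arbitrary choices for illustration.

```python
import numpy as np

def simulate_amse(n=30, rho=0.95, ridge_k=1.0, n_rep=1000, seed=0):
    """Average MSE of coefficient estimates for OLS vs ridge regression
    with two highly correlated predictors (illustrative only)."""
    rng = np.random.default_rng(seed)
    beta = np.array([1.0, 2.0, 3.0])                 # intercept + 2 slopes
    cov = np.array([[1.0, rho], [rho, 1.0]])
    sse_ols = sse_ridge = 0.0
    for _ in range(n_rep):
        Z = rng.multivariate_normal([0, 0], cov, size=n)
        X = np.column_stack([np.ones(n), Z])
        y = X @ beta + rng.normal(0, 1, n)
        XtX, Xty = X.T @ X, X.T @ y
        b_ols = np.linalg.solve(XtX, Xty)
        b_ridge = np.linalg.solve(XtX + ridge_k * np.eye(3), Xty)
        sse_ols += np.sum((b_ols - beta) ** 2)
        sse_ridge += np.sum((b_ridge - beta) ** 2)
    return sse_ols / n_rep, sse_ridge / n_rep

print(simulate_amse())   # ridge typically wins when rho is close to 1
```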

  16. Estimating Risk Parameters

    OpenAIRE

    Aswath Damodaran

    1999-01-01

    Over the last three decades, the capital asset pricing model has occupied a central and often controversial place in most corporate finance analysts’ tool chests. The model requires three inputs to compute expected returns – a riskfree rate, a beta for an asset and an expected risk premium for the market portfolio (over and above the riskfree rate). Betas are estimated, by most practitioners, by regressing returns on an asset against a stock index, with the slope of the regression being the b...

  17. Multiple-trait estimates of genetic parameters for metabolic disease traits, fertility disorders, and their predictors in Canadian Holsteins.

    Science.gov (United States)

    Jamrozik, J; Koeck, A; Kistemaker, G J; Miglior, F

    2016-03-01

    Producer-recorded health data for metabolic disease traits and fertility disorders on 35,575 Canadian Holstein cows were jointly analyzed with selected indicator traits. Metabolic diseases included clinical ketosis (KET) and displaced abomasum (DA); fertility disorders were metritis (MET) and retained placenta (RP); and disease indicators were fat-to-protein ratio, milk β-hydroxybutyrate, and body condition score (BCS) in the first lactation. Traits in first and later (up to fifth) lactations were treated as correlated in the multiple-trait (13 traits in total) animal linear model. Bayesian methods with Gibbs sampling were implemented for the analysis. Estimates of heritability for disease incidence were low, up to 0.06 for DA in first lactation. Among disease traits, the environmental herd-year variance constituted 4% of the total variance for KET and less for other traits. First- and later-lactation disease traits were genetically correlated (from 0.66 to 0.72) across all traits, indicating different genetic backgrounds for first and later lactations. Genetic correlations between KET and DA were relatively strong and positive (up to 0.79) in both first- and later-lactation cows. Genetic correlations between fertility disorders were slightly lower. Metritis was strongly genetically correlated with both metabolic disease traits in the first lactation only. All other genetic correlations between metabolic and fertility diseases were statistically nonsignificant. First-lactation KET and MET were strongly positively correlated with later-lactation performance for these traits due to the environmental herd-year effect. Indicator traits were moderately genetically correlated (from 0.30 to 0.63 in absolute values) with both metabolic disease traits in the first lactation. Smaller and mostly nonsignificant genetic correlations were among indicators and metabolic diseases in later lactations. The only significant genetic correlations between indicators and fertility

  18. Estimating Dbh of Trees Employing Multiple Linear Regression of the best Lidar-Derived Parameter Combination Automated in Python in a Natural Broadleaf Forest in the Philippines

    Science.gov (United States)

    Ibanez, C. A. G.; Carcellar, B. G., III; Paringit, E. C.; Argamosa, R. J. L.; Faelga, R. A. G.; Posilero, M. A. V.; Zaragosa, G. P.; Dimayacyac, N. A.

    2016-06-01

Diameter-at-breast-height (DBH) estimation is a prerequisite in various allometric equations estimating important forestry indices like stem volume, basal area, biomass and carbon stock. LiDAR technology provides a means of directly obtaining different forest parameters, except DBH, from the behavior and characteristics of the point cloud, which are unique to different forest classes. An extensive tree inventory was done on a two-hectare established sample plot in Mt. Makiling, Laguna for a natural growth forest. Coordinates, height, and canopy cover were measured and species were identified for comparison with the LiDAR derivatives. Multiple linear regression was used to obtain LiDAR-derived DBH by integrating field-derived DBH and 27 LiDAR-derived parameters at 20 m, 10 m, and 5 m grid resolutions. To determine the best combination of parameters for DBH estimation, all possible combinations of parameters were generated and the search was automated using Python scripts, with additional regression-related libraries such as Numpy, Scipy, and Scikit-learn. The combination that yields the highest r-squared (coefficient of determination) and the lowest AIC (Akaike's Information Criterion) and BIC (Bayesian Information Criterion) was determined to be the best equation. The best equation uses 11 parameters at the 10 m grid size, with an r-squared of 0.604, an AIC of 154.04 and a BIC of 175.08. The combination of parameters may differ among forest classes, which calls for further studies. Additional statistical tests, such as the Kaiser-Meyer-Olkin (KMO) coefficient and Bartlett's Test of Sphericity (BTS), can be supplemented to help determine the correlation among parameters.
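
    A hedged sketch of the exhaustive best-subset search described above. The abstract names numpy, scipy and scikit-learn; here statsmodels is used instead for convenience because it reports AIC/BIC directly (an assumption, not the study's exact toolchain), and the predictor names and data are placeholders rather than the 27 LiDAR metrics.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

def best_subset(y, X, max_size=None):
    """Exhaustive search over predictor combinations; keeps the combination
    with the lowest AIC, then BIC, then highest R-squared as tie-breakers."""
    best = None
    cols = list(X.columns)
    max_size = max_size or len(cols)
    for k in range(1, max_size + 1):
        for combo in itertools.combinations(cols, k):
            fit = sm.OLS(y, sm.add_constant(X[list(combo)])).fit()
            score = (fit.aic, fit.bic, -fit.rsquared)
            if best is None or score < best[0]:
                best = (score, combo, fit)
    return best

# Placeholder LiDAR-derived predictors (not the study's 27 metrics)
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(100, 6)),
                 columns=[f"lidar_metric_{i}" for i in range(6)])
y = 2.0 * X["lidar_metric_0"] - 1.0 * X["lidar_metric_3"] + rng.normal(0, 0.5, 100)
score, combo, fit = best_subset(y, X, max_size=4)
print(combo, round(fit.rsquared, 3), round(fit.aic, 1), round(fit.bic, 1))
```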

  19. Parameter estimation in plasmonic QED

    Science.gov (United States)

    Jahromi, H. Rangani

    2018-03-01

We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease of the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), and therefore the vanishing of the QFI, which measures the precision of the estimation, is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. The one-qubit estimation has also been analysed in detail; in particular, we show that using a two-qubit probe, at any arbitrary time, considerably enhances the precision of estimation in comparison with one-qubit estimation.

  20. Load Estimation from Modal Parameters

    DEFF Research Database (Denmark)

    Aenlle, Manuel López; Brincker, Rune; Fernández, Pelayo Fernández

    2007-01-01

    In Natural Input Modal Analysis the modal parameters are estimated just from the responses while the loading is not recorded. However, engineers are sometimes interested in knowing some features of the loading acting on a structure. In this paper, a procedure to determine the loading from a FRF m...

  1. Parameter estimation and inverse problems

    CERN Document Server

    Aster, Richard C; Thurber, Clifford H

    2005-01-01

Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web for facilitating use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...

  2. Inflation and cosmological parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, J.

    2007-05-15

    In this work, we focus on two aspects of cosmological data analysis: inference of parameter values and the search for new effects in the inflationary sector. Constraints on cosmological parameters are commonly derived under the assumption of a minimal model. We point out that this procedure systematically underestimates errors and possibly biases estimates, due to overly restrictive assumptions. In a more conservative approach, we analyse cosmological data using a more general eleven-parameter model. We find that regions of the parameter space that were previously thought ruled out are still compatible with the data; the bounds on individual parameters are relaxed by up to a factor of two, compared to the results for the minimal six-parameter model. Moreover, we analyse a class of inflation models, in which the slow roll conditions are briefly violated, due to a step in the potential. We show that the presence of a step generically leads to an oscillating spectrum and perform a fit to CMB and galaxy clustering data. We do not find conclusive evidence for a step in the potential and derive strong bounds on quantities that parameterise the step. (orig.)

  3. Applied parameter estimation for chemical engineers

    CERN Document Server

    Englezos, Peter

    2000-01-01

    Formulation of the parameter estimation problem; computation of parameters in linear models-linear regression; Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynam

  4. Data Handling and Parameter Estimation

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist

    2016-01-01

Modelling is one of the key tools at the disposal of modern wastewater treatment professionals, researchers and engineers. It enables them to study and understand complex phenomena underlying the physical, chemical and biological performance of wastewater treatment plants at different temporal ... engineers, and professionals. However, it is also expected that they will be useful both for graduate teaching as well as a stepping stone for academic researchers who wish to expand their theoretical interest in the subject. For the models selected to interpret the experimental data, this chapter uses available models from the literature that are mostly based on the Activated Sludge Model (ASM) framework and their appropriate extensions (Henze et al., 2000). The chapter presents an overview of the most commonly used methods in the estimation of parameters from experimental batch data, namely: (i) data handling and validation, (ii) ...

  5. Bayesian estimation applied to multiple species

    International Nuclear Information System (INIS)

    Kunz, Martin; Bassett, Bruce A.; Hlozek, Renee A.

    2007-01-01

    Observed data are often contaminated by undiscovered interlopers, leading to biased parameter estimation. Here we present BEAMS (Bayesian estimation applied to multiple species) which significantly improves on the standard maximum likelihood approach in the case where the probability for each data point being ''pure'' is known. We discuss the application of BEAMS to future type-Ia supernovae (SNIa) surveys, such as LSST, which are projected to deliver over a million supernovae light curves without spectra. The multiband light curves for each candidate will provide a probability of being Ia (pure) but the full sample will be significantly contaminated with other types of supernovae and transients. Given a sample of N supernovae with mean probability, , of being Ia, BEAMS delivers parameter constraints equal to N spectroscopically confirmed SNIa. In addition BEAMS can be simultaneously used to tease apart different families of data and to recover properties of the underlying distributions of those families (e.g. the type-Ibc and II distributions). Hence BEAMS provides a unified classification and parameter estimation methodology which may be useful in a diverse range of problems such as photometric redshift estimation or, indeed, any parameter estimation problem where contamination is an issue

  6. Parameter Estimation in Continuous Time Domain

    Directory of Open Access Journals (Sweden)

    Gabriela M. ATANASIU

    2016-12-01

This paper aims to present the application of a continuous-time parameter estimation method for estimating the structural parameters of a real bridge structure. To illustrate the method, two case studies of a bridge pile located in an area of high seismic risk are considered, for which the structural parameters for mass, damping and stiffness are estimated. The estimation process is followed by validation of the analytical results and comparison with the measurement data. Further benefits and applications of the continuous-time parameter estimation method in civil engineering are presented in the final part of this paper.

  7. Statistics of Parameter Estimates: A Concrete Example

    KAUST Repository

    Aguilar, Oscar; Allmaras, Moritz; Bangerth, Wolfgang; Tenorio, Luis

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise

  8. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Maity, Arnab; Carroll, Raymond J.

    2013-01-01

    PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus

  9. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

Deformable templates have been intensively studied in image analysis through the last decade, but despite their significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...

  10. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    muhammad zahid rashid

    2011-04-01

The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), the relative least squares method (RELS), the ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values of the parameters and different sample sizes.
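
    A small simulation sketch comparing two of the estimators named above (moment and maximum-likelihood estimators) for the two-parameter exponential, scored by MSE. The formulas used are the standard ones (MLE: location = sample minimum, scale = mean minus minimum; ME: from sample mean and standard deviation); the paper's modified variants and regression-based estimators are not reproduced.

```python
import numpy as np

def compare_exponential_estimators(loc=2.0, scale=1.5, n=30, n_rep=5000, seed=0):
    """MSE comparison of moment vs maximum-likelihood estimators for the
    two-parameter (location, scale) exponential distribution."""
    rng = np.random.default_rng(seed)
    est = {"ME": [], "MLE": []}
    for _ in range(n_rep):
        x = loc + rng.exponential(scale, size=n)
        # Moment estimators: E[X] = loc + scale, Var[X] = scale**2
        s_me = x.std(ddof=1)
        est["ME"].append((x.mean() - s_me, s_me))
        # Maximum-likelihood estimators: loc_hat = min(x), scale_hat = mean(x) - min(x)
        est["MLE"].append((x.min(), x.mean() - x.min()))
    for name, values in est.items():
        err = np.array(values) - np.array([loc, scale])
        print(name, "MSE(loc, scale) =", np.round((err ** 2).mean(axis=0), 4))

compare_exponential_estimators()
```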

  11. Multiplicity distributions in impact parameter space

    International Nuclear Information System (INIS)

    Wakano, Masami

    1976-01-01

    A definition for the average multiplicity of pions as a function of momentum transfer and total energy in the high energy proton-proton collisions is proposed by using the n-pion production differential cross section with the given momentum transfer from a proton to other final products and the given energy of the latter. Contributions from nondiffractive and diffractive processes are formulated in a multi-Regge model. We define a relationship between impact parameter and momentum transfer in the sense of classical theory for inelastic processes and we obtain the average multiplicity of pions as a function of impact parameter and total energy from the corresponding quantity afore-mentioned. By comparing this quantity with the square root of the opaqueness at given impact parameter, we conclude that the overlap of localized constituents is important in determining the opaqueness at given impact parameter in a collision of two hadrons. (auth.)

  12. Cosmological parameter estimation using Particle Swarm Optimization

    Science.gov (United States)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models, which demand greater number of cosmological parameters than the standard model of cosmology uses, and make the problem of parameter estimation challenging. It is a common practice to employ Bayesian formalism for parameter estimation for which, in general, likelihood surface is probed. For the standard cosmological model with six parameters, likelihood surface is quite smooth and does not have local maxima, and sampling based methods like Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we have demonstrated application of another method inspired from artificial intelligence, called Particle Swarm Optimization (PSO) for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
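
    A minimal particle swarm optimizer in the spirit described above, minimizing a stand-in chi-square surface in place of a CMB likelihood. The inertia and acceleration constants are typical textbook values and the two-parameter objective is invented for illustration; this is not the paper's implementation.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=30, n_iter=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5):
    """Basic PSO: each particle tracks its personal best, and the swarm
    shares a global best that steers the velocity updates."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([objective(p) for p in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Stand-in "likelihood surface": chi-square of two parameters around (0.3, 70.0)
chi2 = lambda p: ((p[0] - 0.3) / 0.02) ** 2 + ((p[1] - 70.0) / 2.0) ** 2
best, best_val = pso_minimize(chi2, bounds=[(0.0, 1.0), (50.0, 90.0)])
print(best, best_val)
```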

  13. Cosmological parameter estimation using Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Prasad, J; Souradeep, T

    2014-01-01

    Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models, which demand greater number of cosmological parameters than the standard model of cosmology uses, and make the problem of parameter estimation challenging. It is a common practice to employ Bayesian formalism for parameter estimation for which, in general, likelihood surface is probed. For the standard cosmological model with six parameters, likelihood surface is quite smooth and does not have local maxima, and sampling based methods like Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we have demonstrated application of another method inspired from artificial intelligence, called Particle Swarm Optimization (PSO) for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite

  14. Parameter Estimation of Partial Differential Equation Models.

    Science.gov (United States)

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown, and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.

  15. Application of spreadsheet to estimate infiltration parameters

    Directory of Open Access Journals (Sweden)

    Mohammad Zakwan

    2016-09-01

Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for the estimation of effective rainfall, groundwater recharge, and the design of irrigation systems. Numerous infiltration models are in use for the estimation of infiltration rates. The conventional graphical approach for estimation of infiltration parameters often fails to estimate the infiltration parameters precisely. The generalised reduced gradient (GRG) solver is reported to be a powerful tool for estimating parameters of nonlinear equations and it has, therefore, been implemented to estimate the infiltration parameters in the present paper. Field data of infiltration rate available in the literature for sandy loam soils of Umuahia, Nigeria were used to evaluate the performance of the GRG solver. A comparative study of the graphical method and the GRG solver shows that the performance of the GRG solver is better than that of the conventional graphical method for estimation of infiltration rates. Further, the performance of the Kostiakov model has been found to be better than the Horton and Philip models in most of the cases, based on both approaches of parameter estimation.
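
    The paper uses the GRG solver in a spreadsheet; as a hedged alternative illustration of the same fitting task, the Kostiakov parameters can be estimated by nonlinear least squares in scipy. The cumulative form of the model and the data values below are assumptions for the example, not the Umuahia field measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def kostiakov(t, a, b):
    """Kostiakov cumulative infiltration model: F(t) = a * t**b."""
    return a * np.power(t, b)

# Made-up cumulative infiltration observations (time in min, depth in cm).
t = np.array([5, 10, 20, 30, 45, 60, 90, 120], dtype=float)
F = np.array([2.1, 3.4, 5.3, 6.8, 8.7, 10.2, 12.8, 15.0])

popt, pcov = curve_fit(kostiakov, t, F, p0=[1.0, 0.5])
a_hat, b_hat = popt
rmse = np.sqrt(np.mean((F - kostiakov(t, *popt)) ** 2))
print(f"a = {a_hat:.3f}, b = {b_hat:.3f}, RMSE = {rmse:.3f}")
```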

  16. Parameter Estimation of Nonlinear Models in Forestry.

    OpenAIRE

    Fekedulegn, Desta; Mac Siúrtáin, Máirtín Pádraig; Colbert, Jim J.

    1999-01-01

Partial derivatives of the negative exponential, monomolecular, Mitscherlich, Gompertz, logistic, Chapman-Richards, von Bertalanffy, Weibull and Richards nonlinear growth models are presented. The application of these partial derivatives in estimating the model parameters is illustrated. The parameters are estimated using the Marquardt iterative method of nonlinear regression relating top height to age of Norway spruce (Picea abies L.) from the Bowmont Norway Spruce Thinnin...
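
    A sketch of fitting one of the listed models (Chapman-Richards) with the Levenberg-Marquardt algorithm via scipy.optimize.curve_fit, which here uses numerically approximated derivatives rather than the analytic partials discussed in the paper. The height-age data and starting values are invented, not the Bowmont thinning data.

```python
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(age, a, b, c):
    """Chapman-Richards growth model: H = a * (1 - exp(-b * age))**c."""
    return a * (1.0 - np.exp(-b * age)) ** c

# Invented top height (m) vs age (years) observations.
age = np.array([10, 15, 20, 25, 30, 40, 50, 60], dtype=float)
height = np.array([4.5, 8.0, 11.5, 14.5, 17.0, 21.0, 23.5, 25.0])

# method="lm" selects the Levenberg-Marquardt algorithm.
popt, pcov = curve_fit(chapman_richards, age, height, p0=[30.0, 0.05, 1.5],
                       method="lm", maxfev=10000)
print("a, b, c =", np.round(popt, 3))
```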

  17. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  18. Reionization history and CMB parameter estimation

    International Nuclear Information System (INIS)

    Dizgah, Azadeh Moradinezhad; Kinney, William H.; Gnedin, Nickolay Y.

    2013-01-01

    We study how uncertainty in the reionization history of the universe affects estimates of other cosmological parameters from the Cosmic Microwave Background. We analyze WMAP7 data and synthetic Planck-quality data generated using a realistic scenario for the reionization history of the universe obtained from high-resolution numerical simulation. We perform parameter estimation using a simple sudden reionization approximation, and using the Principal Component Analysis (PCA) technique proposed by Mortonson and Hu. We reach two main conclusions: (1) Adopting a simple sudden reionization model does not introduce measurable bias into values for other parameters, indicating that detailed modeling of reionization is not necessary for the purpose of parameter estimation from future CMB data sets such as Planck. (2) PCA analysis does not allow accurate reconstruction of the actual reionization history of the universe in a realistic case

  19. Reionization history and CMB parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Dizgah, Azadeh Moradinezhad; Gnedin, Nickolay Y.; Kinney, William H.

    2013-05-01

    We study how uncertainty in the reionization history of the universe affects estimates of other cosmological parameters from the Cosmic Microwave Background. We analyze WMAP7 data and synthetic Planck-quality data generated using a realistic scenario for the reionization history of the universe obtained from high-resolution numerical simulation. We perform parameter estimation using a simple sudden reionization approximation, and using the Principal Component Analysis (PCA) technique proposed by Mortonson and Hu. We reach two main conclusions: (1) Adopting a simple sudden reionization model does not introduce measurable bias into values for other parameters, indicating that detailed modeling of reionization is not necessary for the purpose of parameter estimation from future CMB data sets such as Planck. (2) PCA analysis does not allow accurate reconstruction of the actual reionization history of the universe in a realistic case.

  20. Statistics of Parameter Estimates: A Concrete Example

    KAUST Repository

    Aguilar, Oscar

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise levels, models, or prior knowledge. But what can we say about the validity of such estimates, and the influence of these assumptions? This paper is concerned with methods to address these questions, and for didactic purposes it is written in the context of a concrete nonlinear parameter estimation problem. We will use the results of a physical experiment conducted by Allmaras et al. at Texas A&M University [M. Allmaras et al., SIAM Rev., 55 (2013), pp. 149-167] to illustrate the importance of validation procedures for statistical parameter estimation. We describe statistical methods and data analysis tools to check the choices of likelihood and prior distributions, and provide examples of how to compare Bayesian results with those obtained by non-Bayesian methods based on different types of assumptions. We explain how different statistical methods can be used in complementary ways to improve the understanding of parameter estimates and their uncertainties.

  1. Impacts of Different Types of Measurements on Estimating Unsaturated Flow Parameters

    Science.gov (United States)

    Shi, L.

    2015-12-01

This study evaluates the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
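
    A minimal sketch of an EnKF analysis step for parameter estimation: an ensemble of parameter vectors is corrected using the cross-covariance between parameters and predicted observations. The forward model, noise level and the repeated assimilation loop are placeholders for illustration, not the study's unsaturated-flow model.

```python
import numpy as np

def enkf_parameter_update(params, predicted_obs, obs, obs_err_std, rng):
    """One EnKF analysis step for parameter estimation.
    params:        (n_ens, n_par) ensemble of parameter vectors
    predicted_obs: (n_ens, n_obs) model predictions for each ensemble member
    obs:           (n_obs,) observed data (e.g. heads, water contents, GW levels)
    """
    n_ens = params.shape[0]
    A = params - params.mean(axis=0)
    Y = predicted_obs - predicted_obs.mean(axis=0)
    C_py = A.T @ Y / (n_ens - 1)                       # parameter-observation covariance
    C_yy = Y.T @ Y / (n_ens - 1) + np.diag(np.full(obs.size, obs_err_std ** 2))
    K = C_py @ np.linalg.inv(C_yy)                     # Kalman gain
    perturbed = obs + rng.normal(0, obs_err_std, size=predicted_obs.shape)
    return params + (perturbed - predicted_obs) @ K.T

# Toy example: recover two "hydraulic" parameters of a linear forward model.
rng = np.random.default_rng(0)
true_p = np.array([0.8, -0.3])
H = rng.normal(size=(5, 2))                            # placeholder forward operator
obs = H @ true_p + rng.normal(0, 0.05, 5)
ens = rng.normal(0, 1, size=(100, 2))                  # prior parameter ensemble
for _ in range(5):                                     # repeat assimilation (illustrative only)
    ens = enkf_parameter_update(ens, ens @ H.T, obs, 0.05, rng)
print(ens.mean(axis=0))
```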

  2. Closed-Loop Surface Related Multiple Estimation

    NARCIS (Netherlands)

    Lopez Angarita, G.A.

    2016-01-01

    Surface-related multiple elimination (SRME) is one of the most commonly used methods for suppressing surface multiples. However, in order to obtain an accurate surface multiple estimation, dense source and receiver sampling is required. The traditional approach to this problem is performing data

  3. Parameter estimation in X-ray astronomy

    International Nuclear Information System (INIS)

    Lampton, M.; Margon, B.; Bowyer, S.

    1976-01-01

The problems of model classification and parameter estimation are examined, with the objective of establishing the statistical reliability of inferences drawn from X-ray observations. For testing the validity of classes of models, the procedure based on minimizing the χ² statistic is recommended; it provides a rejection criterion at any desired significance level. Once a class of models has been accepted, a related procedure based on the increase of χ² gives a confidence region for the values of the model's adjustable parameters. The procedure allows the confidence level to be chosen exactly, even for highly nonlinear models. Numerical experiments confirm the validity of the prescribed technique. The χ²_min+1 error estimation method is evaluated and found unsuitable when several parameter ranges are to be derived, because it substantially underestimates their joint errors. The ratio-of-variances method, while formally correct, gives parameter confidence regions which are more variable than necessary
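
    A sketch of the two-step procedure described above: minimize χ² to fit the model, then scan one parameter and find where χ² increases by the chosen threshold (Δχ² = 1 for a single interesting parameter, roughly 68%). The power-law "spectrum", binned data and error bars are made up for the example.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

# Made-up binned "spectrum": Gaussian-error counts, power-law model.
energy = np.linspace(1.0, 10.0, 15)
rng = np.random.default_rng(0)
true_norm, true_index = 100.0, -1.7
sigma = 5.0
counts = true_norm * energy ** true_index + rng.normal(0, sigma, energy.size)

def chi2(params):
    norm, index = params
    model = norm * energy ** index
    return np.sum(((counts - model) / sigma) ** 2)

# Step 1: fit by minimizing chi-square.
fit = minimize(chi2, x0=[50.0, -1.0])
chi2_min = fit.fun

# Step 2: confidence interval for the index from the increase of chi-square,
# re-minimizing over the other parameter at each trial value (profile chi-square).
def profile_chi2(index):
    return minimize_scalar(lambda n: chi2([n, index]), bracket=(1.0, 200.0)).fun

grid = np.linspace(fit.x[1] - 0.3, fit.x[1] + 0.3, 121)
delta = np.array([profile_chi2(i) for i in grid]) - chi2_min
inside = grid[delta <= 1.0]            # delta-chi2 = 1 -> ~68% for one parameter
print(f"index = {fit.x[1]:.3f}, 68% interval ~ [{inside.min():.3f}, {inside.max():.3f}]")
```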

  4. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so called top-1 lists). This model accommodates the well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
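
    A minimal maximum-likelihood fit for the Bradley-Terry model, the pair-comparison special case of the Thurstone/Luce family discussed above, with one strength fixed to zero for identifiability. The comparison counts are synthetic; this is a generic sketch, not the paper's estimators or rank-breaking analysis.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta_free, wins):
    """Bradley-Terry negative log-likelihood; theta[0] is fixed to 0.
    wins[i, j] = number of times item i beat item j."""
    theta = np.concatenate([[0.0], theta_free])
    diff = theta[:, None] - theta[None, :]
    # log P(i beats j) = (theta_i - theta_j) - log(1 + exp(theta_i - theta_j))
    log_p = diff - np.logaddexp(0.0, diff)
    return -np.sum(wins * log_p)

# Synthetic pair comparisons from known strengths.
rng = np.random.default_rng(0)
true_theta = np.array([0.0, 0.5, 1.0, 2.0])
n_items = true_theta.size
wins = np.zeros((n_items, n_items))
for i in range(n_items):
    for j in range(i + 1, n_items):
        p_ij = 1.0 / (1.0 + np.exp(true_theta[j] - true_theta[i]))
        w = rng.binomial(50, p_ij)               # 50 comparisons per pair
        wins[i, j], wins[j, i] = w, 50 - w

fit = minimize(neg_log_lik, x0=np.zeros(n_items - 1), args=(wins,))
print(np.concatenate([[0.0], fit.x]).round(2))   # ML strength estimates
```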

  5. Multi-Parameter Estimation for Orthorhombic Media

    KAUST Repository

    Masmoudi, Nabil; Alkhalifah, Tariq Ali

    2015-01-01

Building reliable anisotropy models is crucial in seismic modeling, imaging and full waveform inversion. However, estimating anisotropy parameters is often hampered by the trade-off between inhomogeneity and anisotropy. For instance, one way to estimate the anisotropy parameters is to relate them analytically to traveltimes, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2 and a parameter Δγ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. This approach has two main advantages: on the one hand, it provides a computationally efficient tool to solve the orthorhombic eikonal equation; on the other hand, it provides a mechanism to scan for the best fitting anisotropy parameters without the need for repetitive modeling of traveltimes, because the coefficients of the traveltime expansion are independent of the perturbed parameters. Furthermore, the coefficients of the traveltime expansion provide insights on the sensitivity of the traveltime with respect to the perturbed parameters. We show the accuracy of the traveltime approximations as well as an approach for multi-parameter scanning in orthorhombic media.

  6. Multi-Parameter Estimation for Orthorhombic Media

    KAUST Repository

    Masmoudi, Nabil

    2015-08-19

Building reliable anisotropy models is crucial in seismic modeling, imaging and full waveform inversion. However, estimating anisotropy parameters is often hampered by the trade-off between inhomogeneity and anisotropy. For instance, one way to estimate the anisotropy parameters is to relate them analytically to traveltimes, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2 and a parameter Δγ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. This approach has two main advantages: on the one hand, it provides a computationally efficient tool to solve the orthorhombic eikonal equation; on the other hand, it provides a mechanism to scan for the best fitting anisotropy parameters without the need for repetitive modeling of traveltimes, because the coefficients of the traveltime expansion are independent of the perturbed parameters. Furthermore, the coefficients of the traveltime expansion provide insights on the sensitivity of the traveltime with respect to the perturbed parameters. We show the accuracy of the traveltime approximations as well as an approach for multi-parameter scanning in orthorhombic media.

  7. Bayesian estimation of Weibull distribution parameters

    International Nuclear Information System (INIS)

    Bacha, M.; Celeux, G.; Idee, E.; Lannoy, A.; Vasseur, D.

    1994-11-01

In this paper, we present the SEM (Stochastic Expectation Maximization) and WLB-SIR (Weighted Likelihood Bootstrap - Sampling Importance Re-sampling) methods, which are used to estimate Weibull distribution parameters when data are heavily censored. The second method is based on Bayesian inference and allows available prior information on the parameters to be taken into account. An application of this method, with real data provided by nuclear power plant operation feedback analysis, has been realized. (authors). 8 refs., 2 figs., 2 tabs

  8. Iterative importance sampling algorithms for parameter estimation

    OpenAIRE

    Morzfeld, Matthias; Day, Marcus S.; Grout, Ray W.; Pau, George Shu Heng; Finsterle, Stefan A.; Bell, John B.

    2016-01-01

    In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov Chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is ...
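
    A minimal, non-iterative importance-sampling sketch for a one-parameter posterior is given below; the iterative refinement of the proposal discussed in the record is omitted, and the model and data are illustrative only.

```python
import numpy as np
from scipy import stats

# Sketch of self-normalized importance sampling: draw from a proposal, weight
# by prior x likelihood / proposal, and form a weighted posterior mean.
rng = np.random.default_rng(2)
data = rng.normal(1.5, 1.0, 50)              # noisy observations, unknown mean

prior = stats.norm(0.0, 3.0)                 # prior over the unknown mean
proposal = stats.norm(np.mean(data), 1.0)    # proposal centered on a cheap estimate

theta = proposal.rvs(size=5_000, random_state=rng)
log_lik = np.array([np.sum(stats.norm(t, 1.0).logpdf(data)) for t in theta])
log_w = log_lik + prior.logpdf(theta) - proposal.logpdf(theta)
w = np.exp(log_w - log_w.max())              # stabilize before normalizing
w /= w.sum()

posterior_mean = np.sum(w * theta)
ess = 1.0 / np.sum(w**2)                     # effective sample size diagnostic
print("posterior mean:", posterior_mean, "ESS:", ess)
```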

  9. Bayesian parameter estimation in probabilistic risk assessment

    International Nuclear Information System (INIS)

    Siu, Nathan O.; Kelly, Dana L.

    1998-01-01

    Bayesian statistical methods are widely used in probabilistic risk assessment (PRA) because of their ability to provide useful estimates of model parameters when data are sparse and because the subjective probability framework, from which these methods are derived, is a natural framework to address the decision problems motivating PRA. This paper presents a tutorial on Bayesian parameter estimation especially relevant to PRA. It summarizes the philosophy behind these methods, approaches for constructing likelihood functions and prior distributions, some simple but realistic examples, and a variety of cautions and lessons regarding practical applications. References are also provided for more in-depth coverage of various topics
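
    A classic conjugate example in this spirit — a Poisson failure count with a gamma prior on the failure rate — is sketched below with illustrative numbers; it is not taken from the paper.

```python
from scipy import stats

# Minimal sketch of a conjugate Bayesian update of the kind common in
# PRA-style problems: gamma prior on a failure rate, Poisson count data.
alpha0, beta0 = 0.5, 100.0        # gamma prior: shape, rate (per operating hour)
failures, exposure_hours = 2, 15_000.0

alpha_post = alpha0 + failures
beta_post = beta0 + exposure_hours

posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
print("posterior mean rate:", posterior.mean())
print("90% credible interval:", posterior.ppf([0.05, 0.95]))
```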

  10. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With the increasing capacity of computational power, several complex optimization algorithms have emerged, but none of them yields a unique, definitively best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured, as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on the parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on the parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector: the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used in this study) for each parameter vector. Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.

  11. MCMC for parameters estimation by bayesian approach

    International Nuclear Information System (INIS)

    Ait Saadi, H.; Ykhlef, F.; Guessoum, A.

    2011-01-01

    This article discusses parameter estimation for dynamic systems by a Bayesian approach associated with Markov Chain Monte Carlo methods (MCMC). MCMC methods are powerful for approximating complex integrals, simulating joint distributions, and estimating marginal posterior distributions or posterior means. The Metropolis-Hastings algorithm has been widely used in Bayesian inference to approximate posterior densities. Calibrating the proposal distribution is one of the main issues of MCMC simulation in order to accelerate the convergence.
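
    A minimal random-walk Metropolis-Hastings sketch on a toy one-dimensional target is given below; the proposal scale is the quantity whose calibration the record refers to.

```python
import numpy as np

# Random-walk Metropolis-Hastings on a toy unnormalized target density
# (a mixture of two Gaussians). Tuning proposal_scale controls the
# acceptance rate and hence the speed of convergence.
rng = np.random.default_rng(3)

def log_target(x):
    return np.logaddexp(-0.5 * (x - 2.0)**2, -0.5 * (x + 2.0)**2)

def metropolis_hastings(n_samples, proposal_scale=1.0, x0=0.0):
    samples = np.empty(n_samples)
    x, lp = x0, log_target(x0)
    accepted = 0
    for i in range(n_samples):
        x_new = x + proposal_scale * rng.normal()
        lp_new = log_target(x_new)
        if np.log(rng.uniform()) < lp_new - lp:      # accept/reject step
            x, lp = x_new, lp_new
            accepted += 1
        samples[i] = x
    return samples, accepted / n_samples

chain, acc_rate = metropolis_hastings(50_000, proposal_scale=2.5)
print("acceptance rate:", acc_rate, "posterior mean:", chain.mean())
```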

  12. Precision Parameter Estimation and Machine Learning

    Science.gov (United States)

    Wandelt, Benjamin D.

    2008-12-01

    I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe) that can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy efficiently combines the power of distributed computing with machine learning and Markov-Chain Monte Carlo techniques to explore a likelihood function, posterior distribution or χ2-surface. It is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo; and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to be able to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.

  13. Measuring, calculating and estimating PEP's parasitic mode loss parameters

    International Nuclear Information System (INIS)

    Weaver, J.N.

    1981-01-01

    This note discusses various ways the parasitic mode losses from a bunched beam to a vacuum chamber can be measured, calculated or estimated. A listing of the parameter, k, for the various PEP ring components is included. A number of formulas for calculating multiple and single pass losses are discussed and evaluated for several cases. 25 refs., 1 fig., 1 tab

  14. Some Properties of Multiple Parameters Linear Programming

    Directory of Open Access Journals (Sweden)

    Maoqin Li

    2010-01-01

    Full Text Available We consider a linear programming problem in which the right-hand side vector depends on multiple parameters. We study the properties of the optimal value function and the critical regions based on the concept of the optimal partition. We show that the domain of the optimal value function f can be decomposed into finitely many subsets with disjoint relative interiors, which is different from the result based on the concept of the optimal basis. Moreover, any directional derivative of f at any point can be computed by solving a linear programming problem when only an optimal solution is available at that point.

  15. Some Properties of Multiple Parameters Linear Programming

    Directory of Open Access Journals (Sweden)

    Yan Hong

    2010-01-01

    Full Text Available Abstract We consider a linear programming problem in which the right-hand side vector depends on multiple parameters. We study the properties of the optimal value function and the critical regions based on the concept of the optimal partition. We show that the domain of the optimal value function f can be decomposed into finitely many subsets with disjoint relative interiors, which is different from the result based on the concept of the optimal basis. Moreover, any directional derivative of f at any point can be computed by solving a linear programming problem when only an optimal solution is available at that point.

  16. Parameter estimation for an expanding universe

    Directory of Open Access Journals (Sweden)

    Jieci Wang

    2015-03-01

    Full Text Available We study parameter estimation for excitations of Dirac fields in the expanding Robertson–Walker universe. We employ quantum metrology techniques to demonstrate the possibility of high-precision estimation of the volume rate of the expanding universe. We show that the optimal precision of the estimation depends sensitively on the dimensionless mass m˜ and dimensionless momentum k˜ of the Dirac particles. The optimal precision for the rate estimation peaks at some finite dimensionless mass m˜ and momentum k˜. We find that the precision of the estimation can be improved by choosing the probe state as an eigenvector of the Hamiltonian. This occurs because the largest quantum Fisher information is obtained by performing projective measurements implemented by the projectors onto the eigenvectors of specific probe states.

  17. Nonparametric estimation of location and scale parameters

    KAUST Repository

    Potgieter, C.J.; Lombard, F.

    2012-01-01

    Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal

  18. Sensor Placement for Modal Parameter Subset Estimation

    DEFF Research Database (Denmark)

    Ulriksen, Martin Dalgaard; Bernal, Dionisio; Damkilde, Lars

    2016-01-01

    The present paper proposes an approach for deciding on sensor placements in the context of modal parameter estimation from vibration measurements. The approach is based on placing sensors, of which the amount is determined a priori, such that the minimum Fisher information that the frequency resp...

  19. Postprocessing MPEG based on estimated quantization parameters

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2009-01-01

    the case where the coded stream is not accessible, or from an architectural point of view not desirable to use, and instead estimate some of the MPEG stream parameters based on the decoded sequence. The I-frames are detected and the quantization parameters are estimated from the coded stream and used...... in the postprocessing. We focus on deringing and present a scheme which aims at suppressing ringing artifacts, while maintaining the sharpness of the texture. The goal is to improve the visual quality, so perceptual blur and ringing metrics are used in addition to PSNR evaluation. The performance of the new `pure......' postprocessing compares favorably to a reference postprocessing filter which has access to the quantization parameters not only for I-frames but also on P and B-frames....

  20. Estimating physiological skin parameters from hyperspectral signatures

    Science.gov (United States)

    Vyas, Saurabh; Banerjee, Amit; Burlina, Philippe

    2013-05-01

    We describe an approach for estimating human skin parameters, such as melanosome concentration, collagen concentration, oxygen saturation, and blood volume, using hyperspectral radiometric measurements (signatures) obtained from in vivo skin. We use a computational model based on Kubelka-Munk theory and the Fresnel equations. This model forward maps the skin parameters to a corresponding multiband reflectance spectra. Machine-learning-based regression is used to generate the inverse map, and hence estimate skin parameters from hyperspectral signatures. We test our methods using synthetic and in vivo skin signatures obtained in the visible through the short wave infrared domains from 24 patients of both genders and Caucasian, Asian, and African American ethnicities. Performance validation shows promising results: good agreement with the ground truth and well-established physiological precepts. These methods have potential use in the characterization of skin abnormalities and in minimally-invasive prescreening of malignant skin cancers.

  1. Parameter estimation in stochastic differential equations

    CERN Document Server

    Bishwal, Jaya P N

    2008-01-01

    Parameter estimation in stochastic differential equations and stochastic partial differential equations is the science, art and technology of modelling complex phenomena and making beautiful decisions. The subject has attracted researchers from several areas of mathematics and other related fields like economics and finance. This volume presents the estimation of the unknown parameters in the corresponding continuous models based on continuous and discrete observations and examines extensively maximum likelihood, minimum contrast and Bayesian methods. The study of refined asymptotic properties of several estimators when the observation time length is large and the observation time interval is small is useful because of the current availability of high-frequency data. Space-time white-noise-driven models, useful for spatial data, and more sophisticated non-Markovian and non-semimartingale models, like fractional diffusions that model long-memory phenomena, are also examined in this volume.

  2. Low Complexity Parameter Estimation For Off-the-Grid Targets

    KAUST Repository

    Jardak, Seifallah

    2015-10-05

    In multiple-input multiple-output radar, to estimate the reflection coefficient, spatial location, and Doppler shift of a target, a derived cost function is usually evaluated and optimized over a grid of points. The performance of such algorithms is directly affected by the size of the grid: increasing the number of points will enhance the resolution of the algorithm but exponentially increase its complexity. In this work, to estimate the parameters of a target, a reduced-complexity super-resolution algorithm is proposed. For off-the-grid targets, it uses a low-order two-dimensional fast Fourier transform to determine a suboptimal solution and then an iterative algorithm to jointly estimate the spatial location and Doppler shift. Simulation results show that the mean square estimation error of the proposed estimators achieves the Cramér-Rao lower bound. © 2015 IEEE.

  3. Parameter Estimation in Stochastic Grey-Box Models

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2004-01-01

    An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended...... Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool...... and proves to have better performance both in terms of quality of estimates for nonlinear systems with significant diffusion and in terms of reproducibility. In particular, the new tool provides more accurate and more consistent estimates of the parameters of the diffusion term....

  4. Nonparametric estimation of location and scale parameters

    KAUST Repository

    Potgieter, C.J.

    2012-12-01

    Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal assumptions regarding the form of the distribution functions of X and Y. We discuss an approach to the estimation problem that is based on asymptotic likelihood considerations. Our results enable us to provide a methodology that can be implemented easily and which yields estimators that are often near optimal when compared to fully parametric methods. We evaluate the performance of the estimators in a series of Monte Carlo simulations. © 2012 Elsevier B.V. All rights reserved.
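
    A crude quantile-matching sketch of location-scale estimation is shown below; it is not the asymptotic-likelihood estimator of the paper, and the data are synthetic.

```python
import numpy as np

# If Y ~ mu + sigma*X, the quantiles of Y are approximately an affine function
# of the quantiles of X, so a least-squares line through paired empirical
# quantiles gives rough estimates of sigma (slope) and mu (intercept).
rng = np.random.default_rng(4)
x = rng.standard_t(df=5, size=500)                 # unknown common shape
y = 2.0 + 1.5 * rng.standard_t(df=5, size=700)

p = np.linspace(0.05, 0.95, 19)
qx, qy = np.quantile(x, p), np.quantile(y, p)
sigma_hat, mu_hat = np.polyfit(qx, qy, 1)          # slope, intercept
print(f"mu ~ {mu_hat:.2f}, sigma ~ {sigma_hat:.2f}")
```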

  5. Estimating RASATI scores using acoustical parameters

    International Nuclear Information System (INIS)

    Agüero, P D; Tulli, J C; Moscardi, G; Gonzalez, E L; Uriz, A J

    2011-01-01

    Acoustical analysis of speech using computers has reached an important level of development in recent years. The subjective evaluation of a clinician is complemented with an objective measure of relevant parameters of the voice. Praat, MDVP (Multi Dimensional Voice Program) and SAV (Software for Voice Analysis) are some examples of software for speech analysis. This paper describes an approach to estimate the subjective characteristics of the RASATI scale given objective acoustical parameters. Two approaches were used: linear regression with non-negativity constraints, and neural networks. The experiments show that such an approach gives correct evaluations with ±1 error in 80% of the cases.
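
    A minimal sketch of the first approach (linear regression with non-negativity constraints) using scipy's NNLS solver follows; the feature matrix, weights and ratings are synthetic stand-ins, not RASATI data.

```python
import numpy as np
from scipy.optimize import nnls

# Non-negative least squares: predict a subjective rating from acoustic
# features with weights constrained to be non-negative (synthetic data).
rng = np.random.default_rng(5)
n_voices, n_features = 60, 6
X = rng.uniform(0.0, 1.0, (n_voices, n_features))    # e.g. jitter, shimmer, HNR, ...
w_true = np.array([1.2, 0.0, 0.8, 0.0, 0.5, 0.0])
ratings = X @ w_true + rng.normal(0, 0.1, n_voices)  # RASATI-like scores (synthetic)

w_hat, residual = nnls(X, ratings)
pred = X @ w_hat
within_one = np.mean(np.abs(np.round(pred) - np.round(ratings)) <= 1)
print("estimated non-negative weights:", np.round(w_hat, 2))
print("fraction of predictions within ±1:", within_one)
```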

  6. Cosmological parameter estimation using particle swarm optimization

    Science.gov (United States)

    Prasad, Jayanti; Souradeep, Tarun

    2012-06-01

    Constraining theoretical models, which are represented by a set of parameters, using observational data is an important exercise in cosmology. In a Bayesian framework this is done by finding the probability distribution of the parameters which best fits the observational data, using sampling-based methods like Markov chain Monte Carlo (MCMC). It has been argued that MCMC may not be the best option in certain problems in which the target function (likelihood) has local maxima or very high dimensionality. Apart from this, there may be cases in which we are mainly interested in finding the point in parameter space at which the probability distribution has its largest value. In this situation the problem of parameter estimation becomes an optimization problem. In the present work we show that particle swarm optimization (PSO), which is an artificial-intelligence-inspired, population-based search procedure, can also be used for cosmological parameter estimation. Using PSO we were able to recover the best-fit Λ cold dark matter (LCDM) model parameters from the WMAP seven-year data without using any prior guess value or any other property of the probability distribution of the parameters, like the standard deviation, as is common in MCMC. We also report the results of an exercise in which we consider a binned primordial power spectrum (to increase the dimensionality of the problem) and find that a power spectrum with features gives a lower chi-square than the standard power law. Since PSO does not sample the likelihood surface in a fair way, we follow a fitting procedure to find the spread of the likelihood function around the best-fit point.
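
    A bare-bones PSO sketch minimizing a toy chi-square-like surface over two parameters is given below; it is not the cosmological setup of the paper.

```python
import numpy as np

# Minimal particle swarm optimization on a toy 2-parameter objective with a
# minimum at (0.3, 0.7); w, c1, c2 are the usual inertia/acceleration constants.
rng = np.random.default_rng(6)

def chi2(theta):
    return np.sum((theta - np.array([0.3, 0.7]))**2 / np.array([0.01, 0.04]), axis=-1)

n_particles, n_iter, dim = 30, 200, 2
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(0.0, 1.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), chi2(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iter):
    r1, r2 = rng.uniform(size=(2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = chi2(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best-fit parameters:", gbest)
```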

  7. Optimal design criteria - prediction vs. parameter estimation

    Science.gov (United States)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, the computation of the kriging variance, and even more so of the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that in practice we cannot really find the G-optimal design with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, yet the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.

  8. Variational estimates of point-kinetics parameters

    International Nuclear Information System (INIS)

    Favorite, J.A.; Stacey, W.M. Jr.

    1995-01-01

    Variational estimates of the effect of flux shifts on the integral reactivity parameter of the point-kinetics equations and on regional power fractions were calculated for a variety of localized perturbations in two light water reactor (LWR) model problems representing a small, tightly coupled core and a large, loosely coupled core. For the small core, the flux shifts resulting from even relatively large localized reactivity changes (∼600 pcm) were small, and the standard point-kinetics approximation estimates of reactivity were in error by only ∼10% or less, while the variational estimates were accurate to within ∼1%. For the larger core, significant (>50%) flux shifts occurred in response to local perturbations, leading to errors of the same magnitude in the standard point-kinetics approximation of the reactivity worth. For positive reactivity, the error in the variational estimate of reactivity was only a few percent in the larger core, and the resulting transient power prediction was 1 to 2 orders of magnitude more accurate than with the standard point-kinetics approximation. For a large, local negative reactivity insertion resulting in a large flux shift, the accuracy of the variational estimate broke down. The variational estimate of the effect of flux shifts on reactivity in point-kinetics calculations of transients in LWR cores was found to generally result in greatly improved accuracy, relative to the standard point-kinetics approximation, the exception being for large negative reactivity insertions with large flux shifts in large, loosely coupled cores

  9. PARAMETER ESTIMATION IN BREAD BAKING MODEL

    Directory of Open Access Journals (Sweden)

    Hadiyanto Hadiyanto

    2012-05-01

    Full Text Available Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure, i.e. first the heat and mass transfer related parameters, then the parameters related to product transformations and finally the product quality parameters. There was a fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for the surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted well the behavior under dynamic convective operation and under combined convective and microwave operation. It was expected that the suitability between model and baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels. Abstract: PARAMETER ESTIMATION IN A BREAD BAKING MODEL. Bread product quality depends strongly on the baking process used. A model developed with qualitative and quantitative methods was calibrated with experiments at a temperature of 200°C and in combination with microwave at 100 W. The model parameters were estimated in a stepwise procedure, i.e. first the parameters of the heat and mass transfer model, the parameters of the transformation model, and

  10. Parameter estimation in tree graph metabolic networks

    Directory of Open Access Journals (Sweden)

    Laura Astola

    2016-09-01

    Full Text Available We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis–Menten kinetics. In reality, the catalytic rates, which are affected among other factors by kinetic constants and enzyme concentrations, change in time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to usually applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time-series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.

  11. Parameter estimation in tree graph metabolic networks.

    Science.gov (United States)

    Astola, Laura; Stigter, Hans; Gomez Roldan, Maria Victoria; van Eeuwijk, Fred; Hall, Robert D; Groenenboom, Marian; Molenaar, Jaap J

    2016-01-01

    We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis-Menten kinetics. In reality, the catalytic rates, which are affected among other factors by kinetic constants and enzyme concentrations, change in time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to usually applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time-series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.

  12. Parameter estimation for lithium ion batteries

    Science.gov (United States)

    Santhanagopalan, Shriram

    With an increase in the demand for lithium-based batteries at the rate of about 7% per year, the amount of effort put into improving the performance of these batteries from both experimental and theoretical perspectives is increasing. There exist a number of mathematical models, ranging from simple empirical models to complicated physics-based models, to describe the processes leading to failure of these cells. The literature is also rife with experimental studies that characterize the various properties of the system in an attempt to improve the performance of lithium ion cells. However, very little has been done to quantify the experimental observations and relate these results to the existing mathematical models. In fact, the best of the physics-based models in the literature show as much as 20% discrepancy when compared to experimental data. The reasons for such a big difference include, but are not limited to, numerical complexities involved in extracting parameters from experimental data and inconsistencies in interpreting directly measured values for the parameters. In this work, an attempt has been made to implement simplified models to extract parameter values that accurately characterize the performance of lithium ion cells. The validity of these models under a variety of experimental conditions is verified using a model discrimination procedure. Transport and kinetic properties are estimated using a non-linear estimation procedure. The initial state of charge inside each electrode is also maintained as an unknown parameter, since this value plays a significant role in accurately matching experimental charge/discharge curves with model predictions and is not readily known from experimental data. The second part of the dissertation focuses on parameters that change rapidly with time. For example, in the case of lithium ion batteries used in Hybrid Electric Vehicle (HEV) applications, the prediction of the State of Charge (SOC) of the cell under a variety of

  13. Propagation channel characterization, parameter estimation, and modeling for wireless communications

    CERN Document Server

    Yin, Xuefeng

    2016-01-01

    Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...

  14. Composite likelihood estimation of demographic parameters

    Directory of Open Access Journals (Sweden)

    Garrigan Daniel

    2009-11-01

    Full Text Available Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in the two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequence data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable
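
    The structural idea of a composite likelihood — summing marginal log-likelihoods over genomic regions treated as independent — can be sketched as follows for a toy one-parameter model; the binomial marginals and data are illustrative, not the demographic model of the paper.

```python
import numpy as np
from scipy import stats

# Composite log-likelihood: sum of per-region marginal log-likelihoods, here
# with a single shared derived-allele frequency p and binomial marginals.
rng = np.random.default_rng(7)
n_regions, n_chrom = 500, 40
p_true = 0.2
derived_counts = rng.binomial(n_chrom, p_true, n_regions)   # one summary per region

def composite_loglik(p):
    return np.sum(stats.binom.logpmf(derived_counts, n_chrom, p))

grid = np.linspace(0.01, 0.99, 197)
p_hat = grid[np.argmax([composite_loglik(p) for p in grid])]
print("composite-likelihood estimate of p:", p_hat)
```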

  15. Preliminary Estimation of Kappa Parameter in Croatia

    Science.gov (United States)

    Stanko, Davor; Markušić, Snježana; Ivančić, Ines; Mario, Gazdek; Gülerce, Zeynep

    2017-12-01

    The spectral parameter kappa (κ) is used to describe the decay of spectral amplitude (“crash syndrome”) at high frequencies. The purpose of this research is to estimate the spectral parameter kappa for the first time in Croatia based on small and moderate earthquakes. Recordings of local earthquakes with magnitudes higher than 3, epicentre distances less than 150 km, and focal depths less than 30 km from seismological stations in Croatia are used. The value of kappa was estimated from the acceleration amplitude spectrum of shear waves, from the slope of the high-frequency part where the spectrum starts to decay rapidly to a noise floor. Kappa models as a function of site and distance were derived from a standard linear regression of the kappa-distance dependence. Site kappa was determined from the extrapolation of the regression line to zero distance. The preliminary results of site kappa across Croatia are promising. In this research, these results are compared with local site condition parameters for each station, e.g. the shear-wave velocity in the upper 30 m from geophysical measurements, and with existing global shear-wave velocity versus site kappa values. The spatial distribution of individual kappas is compared with the azimuthal distribution of earthquake epicentres. These results are significant for a couple of reasons: they extend the knowledge of the attenuation of near-surface crustal layers of the Dinarides and provide additional information on local earthquake parameters for updating seismic hazard maps of the studied area. Site kappa can be used in the re-creation and re-calibration of the attenuation of peak horizontal and/or vertical acceleration in the Dinarides area, since information on the local site conditions was not included in previous studies.
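
    A minimal sketch of the usual kappa measurement — fit a line to the log amplitude spectrum over a high-frequency band and take κ = -slope/π — follows, using a synthetic spectrum rather than Croatian records.

```python
import numpy as np

# Synthetic S-wave acceleration spectrum: white noise shaped by
# exp(-pi*kappa*f). Kappa is recovered from the slope of log-amplitude
# versus frequency over a chosen high-frequency band [f1, f2].
rng = np.random.default_rng(8)
fs, n = 200.0, 4096                          # sampling rate (Hz), samples
kappa_true = 0.04
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
spec = np.abs(np.fft.rfft(rng.normal(size=n))) * np.exp(-np.pi * kappa_true * freqs)

f1, f2 = 10.0, 40.0                          # band where the log-amplitude decay is linear
band = (freqs >= f1) & (freqs <= f2)
slope, intercept = np.polyfit(freqs[band], np.log(spec[band]), 1)
kappa_hat = -slope / np.pi
print(f"estimated kappa: {kappa_hat:.3f} s")
```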

  16. Parameter estimation techniques for LTP system identification

    Science.gov (United States)

    Nofrarias Serra, Miquel

    LISA Pathfinder (LPF) is the precursor mission of LISA (Laser Interferometer Space Antenna) and the first step towards gravitational wave detection in space. The main instrument onboard the mission is the LTP (LISA Technology Package), whose scientific goal is to test LISA's drag-free control loop by reaching a differential acceleration noise level between two masses in geodesic motion of 3 × 10⁻¹⁴ m s⁻²/√Hz in the millihertz band. The mission is not only challenging in terms of technology readiness but also in terms of data analysis. As with any gravitational wave detector, attaining the instrument performance goals will require an extensive noise hunting campaign to measure all contributions with high accuracy. But, in contrast to on-ground experiments, LTP characterisation will only be possible by setting parameters via telecommands and getting a selected amount of information through the available telemetry downlink. These two conditions, high accuracy and high reliability, are the main restrictions that the LTP data analysis must overcome. A dedicated object-oriented Matlab toolbox (LTPDA) has been set up by the LTP analysis team for this purpose. Among the different toolbox methods, an essential part for the mission are the parameter estimation tools that will be used for system identification during operations: Linear Least Squares, Non-linear Least Squares and Markov Chain Monte Carlo methods have been implemented as LTPDA methods. The data analysis team has been testing those methods with a series of mock data exercises with the following objectives: to cross-check parameter estimation methods and compare the achievable accuracy for each of them, and to develop the best strategies to describe the physics underlying a complex controlled experiment such as the LTP. In this contribution we describe how these methods were tested with simulated LTP-like data to recover the parameters of the model and we report on the latest results of these mock data exercises.

  17. Statistical distributions applications and parameter estimates

    CERN Document Server

    Thomopoulos, Nick T

    2017-01-01

    This book gives a description of the group of statistical distributions that have ample application to studies in statistics and probability.  Understanding statistical distributions is fundamental for researchers in almost all disciplines.  The informed researcher will select the statistical distribution that best fits the data in the study at hand.  Some of the distributions are well known to the general researcher and are in use in a wide variety of ways.  Other useful distributions are less understood and are not in common use.  The book describes when and how to apply each of the distributions in research studies, with a goal to identify the distribution that best applies to the study.  The distributions are for continuous, discrete, and bivariate random variables.  In most studies, the parameter values are not known a priori, and sample data is needed to estimate parameter values.  In other scenarios, no sample data is available, and the researcher seeks some insight that allows the estimate of ...

  18. Statistical estimation of nuclear reactor dynamic parameters

    International Nuclear Information System (INIS)

    Cummins, J.D.

    1962-02-01

    This report discusses the study of noise in nuclear reactors and associated power plant. The report is divided into three distinct parts. In the first part, parameters which influence the dynamic behaviour of some reactors will be specified and their effect on dynamic performance described. Methods of estimating dynamic parameters using statistical signals will be described in detail, together with descriptions of the usefulness of the results, the accuracy and related topics. Some experiments which have been, and which might be, performed on nuclear reactors will be described. In the second part of the report a digital computer programme will be described. The computer programme derives the correlation functions and the spectra of signals. The programme will compute the frequency response, both gain and phase, for physical items of plant for which simultaneous recordings of input and output signal variations have been made. Estimates of the accuracy of the correlation functions and the spectra may be computed using the programme, and the amplitude distribution of signals may also be computed. The programme is written in autocode for the Ferranti Mercury computer. In the third part of the report a practical example of the use of the method and the digital programme is presented. In order to eliminate difficulties of interpretation, a very simple plant model was chosen, i.e. a simple first-order lag. Several interesting properties of statistical signals were measured and will be discussed. (author)
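
    A modern equivalent of the frequency-response computation described in the second part — gain and phase from simultaneous input/output recordings via cross- and auto-spectra, H(f) = Pxy(f)/Pxx(f) — can be sketched as follows; the first-order-lag plant and noise levels are synthetic.

```python
import numpy as np
from scipy import signal

# Estimate a plant frequency response from simultaneous input/output records
# using Welch-type spectral estimates. The "plant" is a discrete first-order
# lag driven by noise, echoing the simple example in the report.
rng = np.random.default_rng(9)
fs, n = 100.0, 100_000
tau = 0.5                                    # lag time constant (s)
alpha = np.exp(-1.0 / (tau * fs))            # discrete-time pole of the lag
u = rng.normal(size=n)                                   # noise input
y = signal.lfilter([1.0 - alpha], [1.0, -alpha], u) + 0.01 * rng.normal(size=n)

f, Pxy = signal.csd(u, y, fs=fs, nperseg=4096)
_, Pxx = signal.welch(u, fs=fs, nperseg=4096)
H = Pxy / Pxx                                # empirical frequency response
gain_db = 20.0 * np.log10(np.abs(H))
phase_deg = np.degrees(np.angle(H))
print(f[:4], gain_db[:4], phase_deg[:4])
```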

  19. Parameter Estimation of Spacecraft Fuel Slosh Model

    Science.gov (United States)

    Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam Charles

    2004-01-01

    Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long-standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble), which can cause devastating control issues. The rate at which nutation develops (defined by the Nutation Time Constant (NTC)) can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Purely analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in the experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refining and understanding the effects of these parameters allows for a more accurate simulation of fuel slosh. The current research will focus on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of the Nutation Time Constant obtained through simulation.

  20. Estimation of parameter sensitivities for stochastic reaction networks

    KAUST Repository

    Gupta, Ankit

    2016-01-07

    Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a continuous-time Markov chain whose states represent the molecular counts of various species. For such models, effects of parameter uncertainty are often quantified by estimating the infinitesimal sensitivities of some observables with respect to model parameters. The aim of this talk is to present a holistic approach towards this problem of estimating parameter sensitivities for stochastic reaction networks. Our approach is based on a generic formula which allows us to construct efficient estimators for parameter sensitivity using simulations of the underlying model. We will discuss how novel simulation techniques, such as tau-leaping approximations, multi-level methods etc. can be easily integrated with our approach and how one can deal with stiff reaction networks where reactions span multiple time-scales. We will demonstrate the efficiency and applicability of our approach using many examples from the biological literature.

  1. CosmoSIS: A System for MC Parameter Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Zuntz, Joe [Manchester U.; Paterno, Marc [Fermilab; Jennings, Elise [Chicago U., EFI; Rudd, Douglas [U. Chicago; Manzotti, Alessandro [Chicago U., Astron. Astrophys. Ctr.; Dodelson, Scott [Chicago U., Astron. Astrophys. Ctr.; Bridle, Sarah [Manchester U.; Sehrish, Saba [Fermilab; Kowalkowski, James [Fermilab

    2015-01-01

    Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including camb, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis.

  2. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary
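
    The Latin Hypercube Sampling step that GRAPE uses to place its parallel filter models can be sketched with scipy's quasi-Monte Carlo module; the parameter bounds and dimensions below are illustrative, not from the dissertation.

```python
import numpy as np
from scipy.stats import qmc

# Latin Hypercube Sampling of a 3-dimensional parameter space: the number of
# samples does not have to grow with the number of parameters.
sampler = qmc.LatinHypercube(d=3, seed=0)
unit_samples = sampler.random(n=20)                  # 20 models in [0, 1)^3

lower = np.array([0.1, 50.0, 0.001])                 # e.g. damping, stiffness, bias (illustrative)
upper = np.array([0.9, 500.0, 0.010])
param_samples = qmc.scale(unit_samples, lower, upper)

for k, theta in enumerate(param_samples[:5]):
    print(f"model {k}: parameters = {np.round(theta, 4)}")
```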

  3. Estimating scaled treatment effects with multiple outcomes.

    Science.gov (United States)

    Kennedy, Edward H; Kangovi, Shreya; Mitra, Nandita

    2017-01-01

    In classical study designs, the aim is often to learn about the effects of a treatment or intervention on a single outcome; in many modern studies, however, data on multiple outcomes are collected and it is of interest to explore effects on multiple outcomes simultaneously. Such designs can be particularly useful in patient-centered research, where different outcomes might be more or less important to different patients. In this paper, we propose scaled effect measures (via potential outcomes) that translate effects on multiple outcomes to a common scale, using mean-variance and median-interquartile range based standardizations. We present efficient, nonparametric, doubly robust methods for estimating these scaled effects (and weighted average summary measures), and for testing the null hypothesis that treatment affects all outcomes equally. We also discuss methods for exploring how treatment effects depend on covariates (i.e., effect modification). In addition to describing efficiency theory for our estimands and the asymptotic behavior of our estimators, we illustrate the methods in a simulation study and a data analysis. Importantly, and in contrast to much of the literature concerning effects on multiple outcomes, our methods are nonparametric and can be used not only in randomized trials to yield increased efficiency, but also in observational studies with high-dimensional covariates to reduce confounding bias.
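
    A minimal sketch of the scaling idea alone — mean-variance and median-IQR standardization of per-outcome effects on synthetic data — is given below; it omits the doubly robust, nonparametric estimators that are the paper's contribution.

```python
import numpy as np

# Put effects on several outcomes onto a common scale: mean difference divided
# by the outcome SD, and median difference divided by the IQR (synthetic data).
rng = np.random.default_rng(13)
n = 300
treat = rng.binomial(1, 0.5, n)
outcomes = {
    "blood_pressure": 120 - 4.0 * treat + rng.normal(0, 12, n),
    "quality_of_life": 50 + 3.0 * treat + rng.normal(0, 8, n),
}

for name, y in outcomes.items():
    diff_mean = y[treat == 1].mean() - y[treat == 0].mean()
    diff_median = np.median(y[treat == 1]) - np.median(y[treat == 0])
    sd = y.std(ddof=1)
    iqr = np.subtract(*np.percentile(y, [75, 25]))
    print(f"{name}: mean/SD effect = {diff_mean / sd:+.2f}, "
          f"median/IQR effect = {diff_median / iqr:+.2f}")
```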

  4. Parameter estimation in fractional diffusion models

    CERN Document Server

    Kubilius, Kęstutis; Ralchenko, Kostiantyn

    2017-01-01

    This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is “white,” i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependences, i.e., the correlations of random noise in these processes are non-zero, and slowly or rapidly decrease with time. In particular, models of financial markets demonstrate various kinds of memory and usually this memory is modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides s...

  5. Pollen parameters estimates of genetic variability among newly ...

    African Journals Online (AJOL)

    Pollen parameters estimates of genetic variability among newly selected Nigerian roselle (Hibiscus sabdariffa L.) genotypes. ... Estimates of some pollen parameters were used to assess the genetic diversity among ...

  6. Estimation of light transport parameters in biological media using ...

    Indian Academy of Sciences (India)

    Estimation of light transport parameters in biological media using coherent backscattering ... backscattered light for estimating the light transport parameters of biological media has been investigated. ... Pramana – Journal of Physics.

  7. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  8. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    Science.gov (United States)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: in the first stage the infiltration parameters are obtained, and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during the subsequent generations of the genetic algorithm required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, simple in concept and also has the potential for field application.
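
    A toy penalty-function version of the combined estimation — infiltration index and unit hydrograph ordinates fitted together, with the unit-volume constraint enforced through a penalty term — is sketched below. It uses differential evolution in place of a genetic algorithm, and the rainfall, runoff and hydrograph values are synthetic.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Joint estimation of an infiltration index (phi) and unit hydrograph (UH)
# ordinates by minimizing the runoff misfit plus a penalty on violation of
# the unit-volume constraint sum(UH) = 1. All data are synthetic.
rng = np.random.default_rng(11)
rain = np.array([0.0, 12.0, 25.0, 18.0, 6.0, 0.0])        # mm per interval
phi_true = 4.0                                             # mm per interval
uh_true = np.array([0.1, 0.35, 0.3, 0.15, 0.1])            # ordinates, sum to 1
excess_true = np.maximum(rain - phi_true, 0.0)
runoff = np.convolve(excess_true, uh_true) + rng.normal(0, 0.2, len(rain) + len(uh_true) - 1)

def objective(theta, penalty=1e3):
    phi, uh = theta[0], np.abs(theta[1:])                  # non-negative ordinates
    excess = np.maximum(rain - phi, 0.0)
    sse = np.sum((runoff - np.convolve(excess, uh))**2)
    return sse + penalty * (np.sum(uh) - 1.0)**2           # unit-volume penalty

bounds = [(0.0, 10.0)] + [(0.0, 1.0)] * len(uh_true)
res = differential_evolution(objective, bounds, seed=0, tol=1e-8)
print("phi estimate:", res.x[0])
print("UH ordinates:", np.round(np.abs(res.x[1:]), 3))
```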

  9. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of such models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  10. Statistical methods of parameter estimation for deterministically chaotic time series

    Science.gov (United States)

    Pisarenko, V. F.; Sornette, D.

    2004-03-01

    We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to a deterministically chaotic low-dimensional dynamic system (the logistic map) containing observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to, but simpler and with smaller bias than, the “multiple shooting” method previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Besides, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
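
    For a feel of the joint estimation of the structural parameter and the initial value, a brute-force least-squares sketch over a short noisy segment of the logistic map is given below; it is only in the spirit of, not identical to, the piece-wise ML method.

```python
import numpy as np

# Least-squares estimation of the logistic-map parameter r together with the
# initial value x1 from a short, noisy segment, by grid search. Segments are
# kept short because memory of x1 is lost exponentially fast.
rng = np.random.default_rng(10)

def logistic_orbit(r, x1, n):
    x = np.empty(n)
    x[0] = x1
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

r_true, x1_true, n_obs = 3.8, 0.3, 20
y = logistic_orbit(r_true, x1_true, n_obs) + rng.normal(0, 0.01, n_obs)

r_grid = np.linspace(3.6, 4.0, 201)
x1_grid = np.linspace(0.05, 0.95, 181)
best = min(((np.sum((y - logistic_orbit(r, x1, n_obs))**2), r, x1)
            for r in r_grid for x1 in x1_grid))
print("SSE, estimated r, estimated x1:", best)
```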

  11. Application of spreadsheet to estimate infiltration parameters

    OpenAIRE

    Zakwan, Mohammad; Muzzammil, Mohammad; Alam, Javed

    2016-01-01

    Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for the estimation of effective rainfall and groundwater recharge and for the design of irrigation systems. Numerous infiltration models are in use for the estimation of infiltration rates. The conventional graphical approach ...

  12. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of the R project, followed by some experiments using a Monte Carlo method.

  13. Probability of detection as a function of multiple influencing parameters

    Energy Technology Data Exchange (ETDEWEB)

    Pavlovic, Mato

    2014-10-15

    Non-destructive testing is subject to measurement uncertainties. In safety-critical applications, the reliability assessment of its capability to detect flaws is therefore necessary. In most applications, the flaw size is the single most important parameter that influences the probability of detection (POD) of the flaw. That is why the POD is typically calculated and expressed as a function of the flaw size. The capability of the inspection system to detect flaws is established by comparing the size of a reliably detected flaw with the size of the flaw that is critical for structural integrity. Applications where several factors have an important influence on the POD are investigated in this dissertation. To devise a reliable estimate of the NDT system capability it is necessary to express the POD as a function of all these factors. A multi-parameter POD model is developed. It enables POD to be calculated and expressed as a function of several influencing parameters. The model was tested on data from the ultrasonic inspection of copper and cast iron components with artificial flaws. Also, a technique to present POD data spatially, called the volume POD, is developed. The fusion of the POD data coming from multiple inspections of the same component with different sensors is performed to obtain the overall POD of the inspection system.

  14. Probability of detection as a function of multiple influencing parameters

    International Nuclear Information System (INIS)

    Pavlovic, Mato

    2014-01-01

    Non-destructive testing is subject to measurement uncertainties. In safety-critical applications, the reliability assessment of its capability to detect flaws is therefore necessary. In most applications, the flaw size is the single most important parameter that influences the probability of detection (POD) of the flaw. That is why the POD is typically calculated and expressed as a function of the flaw size. The capability of the inspection system to detect flaws is established by comparing the size of a reliably detected flaw with the size of the flaw that is critical for structural integrity. Applications where several factors have an important influence on the POD are investigated in this dissertation. To devise a reliable estimate of the NDT system capability it is necessary to express the POD as a function of all these factors. A multi-parameter POD model is developed. It enables POD to be calculated and expressed as a function of several influencing parameters. The model was tested on data from the ultrasonic inspection of copper and cast iron components with artificial flaws. Also, a technique to present POD data spatially, called the volume POD, is developed. The fusion of the POD data coming from multiple inspections of the same component with different sensors is performed to obtain the overall POD of the inspection system.

  15. Estimates for the parameters of the heavy quark expansion

    Energy Technology Data Exchange (ETDEWEB)

    Heinonen, Johannes; Mannel, Thomas [Universitaet Siegen (Germany)

    2015-07-01

    We give improved estimates for the non-perturbative parameters appearing in the heavy quark expansion for inclusive decays. While the parameters appearing in low orders of this expansion can be extracted from data, the number of parameters in higher orders proliferates strongly, making a determination of these parameters from data impossible. Thus, one has to rely on theoretical estimates which may be obtained from an insertion of intermediate states. We refine this method and attempt to estimate the uncertainties of this approach.

  16. Low Complexity Parameter Estimation For Off-the-Grid Targets

    KAUST Repository

    Jardak, Seifallah; Ahmed, Sajid; Alouini, Mohamed-Slim

    2015-01-01

    In multiple-input multiple-output radar, to estimate the reflection coefficient, spatial location, and Doppler shift of a target, a derived cost function is usually evaluated and optimized over a grid of points. The performance of such algorithms

  17. Colocated MIMO Radar: Beamforming, Waveform design, and Target Parameter Estimation

    KAUST Repository

    Jardak, Seifallah

    2014-04-01

    Thanks to its improved capabilities, the Multiple Input Multiple Output (MIMO) radar is attracting the attention of researchers and practitioners alike. Because it transmits orthogonal or partially correlated waveforms, this emerging technology outperforms the phased-array radar by providing better parametric identifiability, achieving higher spatial resolution, and enabling the design of complex beampatterns. To avoid jamming and enhance the signal-to-noise ratio, it is often desirable to maximize the transmitted power in a given region of interest and minimize it elsewhere. This problem is known as transmit beampattern design and is usually tackled as a two-step process: a transmit covariance matrix is first designed by solving a convex optimization problem and is then used to generate practical waveforms. In this work, we propose simple novel methods to generate correlated waveforms using finite-alphabet constant- and non-constant-envelope symbols. To generate finite-alphabet waveforms, the proposed method maps easily generated Gaussian random variables onto the phase-shift-keying, pulse-amplitude, and quadrature-amplitude modulation schemes. For this mapping, the probability density function of the Gaussian random variables is divided into M regions, where M is the alphabet size of the corresponding modulation scheme. By exploiting the mapping function, the relationship between the cross-correlations of the Gaussian and finite-alphabet symbols is derived. The second part of this thesis covers target parameter estimation. To determine the reflection coefficient, spatial location, and Doppler shift of a target, maximum likelihood estimation yields the best performance; however, it requires a two-dimensional search, so its computational complexity is prohibitively high. We therefore propose a reduced-complexity algorithm with optimum performance, which uses the two-dimensional fast Fourier transform to jointly estimate the spatial location
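
    A simplified sketch of the Gaussian-to-finite-alphabet mapping idea described above: the standard Gaussian density is partitioned into M equiprobable regions and each region is mapped to one PSK symbol, so correlated Gaussian samples yield correlated constant-envelope waveforms. The covariance matrix, alphabet size, and sample count are assumptions, not values from the thesis.

```python
import numpy as np
from scipy.stats import norm

# Partition the standard Gaussian PDF into M equiprobable regions and map each
# region to one M-PSK symbol; correlated Gaussian inputs give correlated waveforms.
rng = np.random.default_rng(1)
M = 4                                          # QPSK alphabet size (assumed)

# Correlated Gaussian samples for two transmit antennas (assumed covariance).
cov = np.array([[1.0, 0.6],
                [0.6, 1.0]])
g = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)

# Region boundaries: Gaussian quantiles splitting the density into M equal-probability bins.
edges = norm.ppf(np.linspace(0.0, 1.0, M + 1))
symbols = np.exp(1j * 2 * np.pi * np.arange(M) / M)    # unit-modulus PSK constellation

idx = np.clip(np.digitize(g, edges) - 1, 0, M - 1)     # region index per sample
waveforms = symbols[idx]                               # finite-alphabet, constant envelope

# Empirical cross-correlation of the mapped waveforms (related to the Gaussian correlation).
r = np.mean(waveforms[:, 0] * np.conj(waveforms[:, 1]))
print("cross-correlation of PSK waveforms:", r)
```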

  18. Multi-objective optimization in quantum parameter estimation

    Science.gov (United States)

    Gong, BeiLi; Cui, Wei

    2018-04-01

    We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.

  19. Estimation of Poisson-Dirichlet Parameters with Monotone Missing Data

    Directory of Open Access Journals (Sweden)

    Xueqin Zhou

    2017-01-01

    This article considers the estimation of the unknown numerical parameters and the density of the base measure in a Poisson-Dirichlet process prior with grouped monotone missing data. The numerical parameters are estimated by the method of maximum likelihood and the density function is estimated by a kernel method. A set of simulations was conducted, which shows that the estimates perform well.

  20. Parameter estimation and testing of hypotheses

    International Nuclear Information System (INIS)

    Fruhwirth, R.

    1996-01-01

    This lecture presents the basic mathematical ideas underlying the concept of a random variable and the construction and analysis of estimators and test statistics. The material presented is based mainly on four books given in the references: the general exposition of estimators and test statistics follows Kendall and Stuart, which is a comprehensive review of the field; the book by Eadie et al. contains selected topics of particular interest to experimental physicists and a host of illuminating examples from experimental high-energy physics; for the presentation of numerical procedures, the Press et al. and the Thisted books have been used. The last section deals with estimation in dynamic systems. In most books the Kalman filter is presented in a Bayesian framework, often obscured by cumbrous notation. In this lecture, the link to classical least-squares estimators and regression models is stressed with the aim of facilitating access to this less familiar topic. References are given for specific applications to track and vertex fitting and for extended exposition of these topics. In the appendix, the link between Bayesian decision rules and feed-forward neural networks is presented. (J.S.). 10 refs., 5 figs., 1 appendix

  1. Parameter estimation in tree graph metabolic networks

    NARCIS (Netherlands)

    Astola, Laura; Stigter, Hans; Gomez Roldan, Maria Victoria; Eeuwijk, van Fred; Hall, Robert D.; Groenenboom, Marian; Molenaar, Jaap J.

    2016-01-01

    We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme

  2. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  3. Neglect Of Parameter Estimation Uncertainty Can Significantly Overestimate Structural Reliability

    Directory of Open Access Journals (Sweden)

    Rózsás Árpád

    2015-12-01

    Parameter estimation uncertainty is often neglected in reliability studies, i.e. point estimates of distribution parameters are used for representative fractiles, and in probabilistic models. A numerical example examines the effect of this uncertainty on structural reliability using Bayesian statistics. The study reveals that the neglect of parameter estimation uncertainty might lead to an order of magnitude underestimation of failure probability.

  4. Estimation of delays and other parameters in nonlinear functional differential equations

    Science.gov (United States)

    Banks, H. T.; Lamm, P. K. D.

    1983-01-01

    A spline-based approximation scheme for nonlinear nonautonomous delay differential equations is discussed. Convergence results (using dissipative type estimates on the underlying nonlinear operators) are given in the context of parameter estimation problems which include estimation of multiple delays and initial data as well as the usual coefficient-type parameters. A brief summary of some of the related numerical findings is also given.

  5. minimum variance estimation of yield parameters of rubber tree

    African Journals Online (AJOL)

    2013-03-01

    It is our opinion that the Kalman filter is a robust estimator of the ... Keywords: Kalman filter, parameter estimation, rubber clones, Chow failure test, autocorrelation, STAMP, data ...

  6. Estimation of a collision impact parameter

    International Nuclear Information System (INIS)

    Shmatov, S.V.; Zarubin, P.I.

    2001-01-01

    We demonstrate that the nuclear collision geometry (i.e. impact parameter) can be determined in an event-by-event analysis by measuring the transverse energy flow in the pseudorapidity region 3≤|η|≤5 with a minimal dependence on collision dynamics details at the LHC energy scale. Using the HIJING model we have illustrated our calculation by a simulation of events of nucleus-nucleus interactions at the c.m.s. energy from 1 up to 5.5 TeV per nucleon and various types of nuclei

  7. Novel Method for 5G Systems NLOS Channels Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Vladeta Milenkovic

    2017-01-01

    For the development of new 5G systems to operate in mm bands, there is a need for accurate radio propagation modelling at these bands. In this paper a novel approach for NLOS channel parameter estimation is presented. Estimation is performed based on the LCR performance measure, which enables propagation parameters to be estimated in real time and avoids the weaknesses of ML and moment-method estimation approaches.

  8. Multiphase flow parameter estimation based on laser scattering

    Science.gov (United States)

    Vendruscolo, Tiago P.; Fischer, Robert; Martelli, Cicero; Rodrigues, Rômulo L. P.; Morales, Rigoberto E. M.; da Silva, Marco J.

    2015-07-01

    The flow of multiple constituents inside a pipe or vessel, known as multiphase flow, is commonly found in many industry branches. The measurement of the individual flow rates in such flow is still a challenge, which usually requires a combination of several sensor types. However, in many applications, especially in industrial process control, it is not necessary to know the absolute flow rate of the respective phases, but rather to continuously monitor flow conditions in order to quickly detect deviations from the desired parameters. Here we show how a simple and low-cost sensor design can achieve this, by using machine-learning techniques to distinguish the characteristic patterns of oblique laser light scattered at the phase interfaces. The sensor is capable of estimating individual phase fluxes (as well as their changes) in multiphase flows and may be applied to safety applications due to its quick response time.

  9. Multiphase flow parameter estimation based on laser scattering

    International Nuclear Information System (INIS)

    Vendruscolo, Tiago P; Fischer, Robert; Martelli, Cicero; Da Silva, Marco J; Rodrigues, Rômulo L P; Morales, Rigoberto E M

    2015-01-01

    The flow of multiple constituents inside a pipe or vessel, known as multiphase flow, is commonly found in many industry branches. The measurement of the individual flow rates in such flow is still a challenge, which usually requires a combination of several sensor types. However, in many applications, especially in industrial process control, it is not necessary to know the absolute flow rate of the respective phases, but rather to continuously monitor flow conditions in order to quickly detect deviations from the desired parameters. Here we show how a simple and low-cost sensor design can achieve this, by using machine-learning techniques to distinguish the characteristic patterns of oblique laser light scattered at the phase interfaces. The sensor is capable of estimating individual phase fluxes (as well as their changes) in multiphase flows and may be applied to safety applications due to its quick response time. (paper)

  10. Parameter Estimation for Improving Association Indicators in Binary Logistic Regression

    Directory of Open Access Journals (Sweden)

    Mahdi Bashiri

    2012-02-01

    The aim of this paper is the estimation of binary logistic regression parameters by maximizing the log-likelihood function, together with improved association indicators. The parameter estimation steps are explained, measures of association are introduced, and their calculation is analyzed. Moreover, new related indicators based on membership degree level are proposed. Association measures reflect the number of success responses relative to failures in a given number of independent Bernoulli experiments. In parameter estimation, the values of existing indicators are not sensitive to the parameter values, whereas the proposed indicators are sensitive to the estimated parameters during the iterative procedure. The innovation of this study is therefore a new association indicator for binary logistic regression that is more sensitive to the estimated parameters while maximizing the log-likelihood in the iterative procedure.
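
    For context, the iterative maximum-likelihood step that such association indicators accompany is the standard Newton-Raphson (IRLS) update for binary logistic regression. The sketch below uses simulated data and generic dimensions; it is not tied to the specific indicators proposed in the paper.

```python
import numpy as np

# Minimal sketch of the iterative ML estimation for binary logistic regression
# via Newton-Raphson / IRLS on simulated data (illustrative dimensions).
rng = np.random.default_rng(7)

n, p = 500, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
beta_true = np.array([-0.5, 1.2, -0.8])
prob = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = rng.binomial(1, prob)

beta = np.zeros(p)
for _ in range(25):                              # Newton-Raphson iterations
    mu = 1.0 / (1.0 + np.exp(-X @ beta))         # fitted success probabilities
    W = mu * (1.0 - mu)                          # IRLS weights
    grad = X.T @ (y - mu)                        # gradient of the log-likelihood
    H = X.T @ (X * W[:, None])                   # negative Hessian
    beta = beta + np.linalg.solve(H, grad)

print("ML estimates:", beta, "true:", beta_true)
```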

  11. Estimation of gloss from rough surface parameters

    Science.gov (United States)

    Simonsen, Ingve; Larsen, Åge G.; Andreassen, Erik; Ommundsen, Espen; Nord-Varhaug, Katrin

    2005-12-01

    Gloss is a quantity used in the optical industry to quantify and categorize materials according to how well they scatter light specularly. With the aid of phase perturbation theory, we derive an approximate expression for this quantity for a one-dimensional randomly rough surface. It is demonstrated that gloss depends in an exponential way on two dimensionless quantities that are associated with the surface randomness: the root-mean-square roughness times the perpendicular momentum transfer for the specular direction, and a correlation function dependent factor times a lateral momentum variable associated with the collection angle. Rigorous Monte Carlo simulations are used to assess the quality of this approximation, and good agreement is observed over large regions of parameter space.

  12. Estimation of Parameters of CCF with Staggered Testing

    International Nuclear Information System (INIS)

    Kim, Myung-Ki; Hong, Sung-Yull

    2006-01-01

    Common cause failures are extremely important in reliability analysis and can be a dominant risk contributor in a highly reliable system such as a nuclear power plant. Of particular concern is common cause failure (CCF) that degrades the redundancy or diversity implemented to improve the reliability of systems. Most analyses of the parameters of CCF models, such as the beta factor model, the alpha factor model, and the MGL (Multiple Greek Letters) model, deal with a system under a non-staggered testing strategy. In non-staggered testing, all components are tested at the same time (or at least in the same shift); in staggered testing, if the first component fails, all the other components are tested immediately, and if it succeeds, nothing more is done until the next scheduled testing time. Both strategies are applied in nuclear power plants. The strategy, however, is not described explicitly in the technical specifications, but implicitly in the periodic test procedure. For example, some redundant components particularly important to safety are tested with a staggered testing strategy, while others are tested with a non-staggered strategy. This paper presents parameter estimators for CCF models such as the beta factor, MGL, and alpha factor models under a staggered testing strategy. In addition, a new CCF model, the rho factor model, is proposed and its parameter is derived under a staggered testing strategy

  13. A new Bayesian recursive technique for parameter estimation

    Science.gov (United States)

    Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

    2006-08-01

    The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.

  14. Control and Estimation of Distributed Parameter Systems

    CERN Document Server

    Kappel, F; Kunisch, K

    1998-01-01

    Consisting of 23 refereed contributions, this volume offers a broad and diverse view of current research in control and estimation of partial differential equations. Topics addressed include, but are not limited to - control and stability of hyperbolic systems related to elasticity, linear and nonlinear; - control and identification of nonlinear parabolic systems; - exact and approximate controllability, and observability; - Pontryagin's maximum principle and dynamic programming in PDE; and - numerics pertinent to optimal and suboptimal control problems. This volume is primarily geared toward control theorists seeking information on the latest developments in their area of expertise. It may also serve as a stimulating reader to any researcher who wants to gain an impression of activities at the forefront of a vigorously expanding area in applied mathematics.

  15. Gravity Field Parameter Estimation Using QR Factorization

    Science.gov (United States)

    Klokocnik, J.; Wagner, C. A.; McAdoo, D.; Kostelecky, J.; Bezdek, A.; Novak, P.; Gruber, C.; Marty, J.; Bruinsma, S. L.; Gratton, S.; Balmino, G.; Baboulin, M.

    2007-12-01

    This study compares the accuracy of the estimated geopotential coefficients when QR factorization is used instead of the classical method applied at our institute, namely the generation of normal equations that are solved by means of Cholesky decomposition. The objective is to evaluate the gain in numerical precision, which is obtained at considerable extra cost in terms of computer resources. Therefore, a significant increase in precision must be realized in order to justify the additional cost. Numerical simulations were done in order to examine the performance of both solution methods. Reference gravity gradients were simulated, using the EIGEN-GL04C gravity field model to degree and order 300, every 3 seconds along a near-circular, polar orbit at 250 km altitude. The simulation spanned a total of 60 days. A polar orbit was selected in this simulation in order to avoid the 'polar gap' problem, which causes inaccurate estimation of the low-order spherical harmonic coefficients. Regularization is required in that case (e.g., the GOCE mission), which is not the subject of the present study. The simulated gravity gradients, to which white noise was added, were then processed with the GINS software package, applying EIGEN-CG03 as the background gravity field model, followed either by the usual normal equation computation or using the QR approach for incremental linear least squares. The accuracy assessment of the gravity field recovery consists in computing the median error degree-variance spectra, accumulated geoid errors, geoid errors due to individual coefficients, and geoid errors calculated on a global grid. The performance, in terms of memory usage, required disk space, and CPU time, of the QR versus the normal equation approach is also evaluated.
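
    The numerical contrast between the two solution strategies can be illustrated on a generic ill-conditioned least-squares problem. The sketch below is independent of the GINS software and of gravity-gradient data; it simply compares normal equations solved by Cholesky decomposition against a direct QR solve on a synthetic design matrix with an assumed condition number.

```python
import numpy as np

# Compare normal equations + Cholesky vs. QR on an ill-conditioned least-squares problem.
rng = np.random.default_rng(3)
m, n = 500, 30
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -7, n)                      # singular values -> cond(A) ~ 1e7 (assumed)
A = U @ np.diag(s) @ V.T
x_true = rng.standard_normal(n)
b = A @ x_true

# (i) Normal equations + Cholesky: the condition number of A^T A is cond(A)^2.
N = A.T @ A
c = A.T @ b
L = np.linalg.cholesky(N)
x_chol = np.linalg.solve(L.T, np.linalg.solve(L, c))

# (ii) QR factorization applied directly to A: avoids squaring the condition number.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print("cond(A):", np.linalg.cond(A))
print("relative error, normal equations:", np.linalg.norm(x_chol - x_true) / np.linalg.norm(x_true))
print("relative error, QR:              ", np.linalg.norm(x_qr - x_true) / np.linalg.norm(x_true))
```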

  16. Online State Space Model Parameter Estimation in Synchronous Machines

    Directory of Open Access Journals (Sweden)

    Z. Gallehdari

    2014-06-01

    The suggested approach is evaluated for a sample synchronous machine model. Estimated parameters are tested for different inputs at different operating conditions. The effect of noise is also considered in this study. Simulation results show that the proposed approach provides good accuracy for parameter estimation.

  17. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    Science.gov (United States)

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…

  18. Estimation of ground water hydraulic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Hvilshoej, Soeren

    1998-11-01

    The main objective was to assess field methods to determine ground water hydraulic parameters and to develop and apply new analysis methods to selected field techniques. A field site in Vejen, Denmark, which has previously been intensively investigated with a large number of mini slug tests and tracer tests, was chosen for experimental application and evaluation. Particular interest was in analysing partially penetrating pumping tests and a recently proposed single-well dipole test. Three wells were constructed in which partially penetrating pumping tests and multi-level single-well dipole tests were performed. In addition, multi-level slug tests, flow meter tests, gamma-logs, and geologic characterisation of soil samples were carried out. In addition to the three Vejen analyses, data from previously published partially penetrating pumping tests were analysed assuming homogeneous anisotropic aquifer conditions. In the present study methods were developed to analyse partially penetrating pumping tests and multi-level single-well dipole tests based on an inverse numerical model. The obtained horizontal hydraulic conductivities from the partially penetrating pumping tests were in accordance with measurements obtained from multi-level slug tests and mini slug tests. Accordance was also achieved between the anisotropy ratios determined from partially penetrating pumping tests and multi-level single-well dipole tests. It was demonstrated that the partially penetrating pumping test analysed by an inverse numerical model is a very valuable technique that may provide hydraulic information on the storage terms and the vertical distribution of the horizontal and vertical hydraulic conductivity under both confined and unconfined aquifer conditions. (EG) 138 refs.

  19. Bayesian Parameter Estimation for Heavy-Duty Vehicles

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Eric; Konan, Arnaud; Duran, Adam

    2017-03-28

    Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history will give a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
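
    A hedged sketch of the Monte Carlo acceptance idea described above, assuming a simple road-load model F = m*a + 0.5*rho*CdA*v^2 + Crr*m*g and synthetic drive-cycle data; the numbers, priors, and proposal steps are illustrative, not the authors' implementation.

```python
import numpy as np

# Metropolis-style sampling: propose parameter sets, evaluate the road-load model against
# "measured" tractive force, and accept with the probability ratio to the current state.
rng = np.random.default_rng(4)
g, rho = 9.81, 1.2

def road_load(theta, v, a):
    mass, cda, crr = theta
    return mass * a + 0.5 * rho * cda * v**2 + crr * mass * g

# Synthetic drive-cycle data with known "true" parameters (illustrative heavy-duty values).
v = 5 + 10 * np.abs(np.sin(np.linspace(0, 6, 400)))
a = np.gradient(v, 1.0)
theta_true = np.array([15000.0, 6.0, 0.007])           # mass [kg], CdA [m^2], Crr [-]
F_meas = road_load(theta_true, v, a) + 200.0 * rng.standard_normal(v.size)

def log_lik(theta):
    resid = F_meas - road_load(theta, v, a)
    return -0.5 * np.sum((resid / 200.0) ** 2)

theta = np.array([14000.0, 5.0, 0.01])                 # initial guess
step = np.array([200.0, 0.1, 0.0005])                  # proposal standard deviations
chain, ll = [], log_lik(theta)
for _ in range(20000):
    prop = theta + step * rng.standard_normal(3)
    ll_prop = log_lik(prop)
    if np.log(rng.random()) < ll_prop - ll:             # Metropolis acceptance ratio
        theta, ll = prop, ll_prop
    chain.append(theta.copy())

chain = np.array(chain[5000:])                          # discard burn-in
print("posterior means:", chain.mean(axis=0))
print("true values:    ", theta_true)
```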

  20. Parameter and State Estimator for State Space Models

    Directory of Open Access Journals (Sweden)

    Ruifeng Ding

    2014-01-01

    This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least-squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
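
    The state-elimination idea can be illustrated on an assumed first-order example (not the paper's system): eliminating the state from x_{k+1} = a*x_k + b*u_k, y_k = x_k gives the input-output relation y_{k+1} = a*y_k + b*u_k, whose coefficients follow from ordinary least squares; the state is then recovered from the output directly.

```python
import numpy as np

# Least-squares identification of an assumed first-order input-output model.
rng = np.random.default_rng(9)

a_true, b_true, n = 0.9, 0.5, 300
u = rng.standard_normal(n)
y = np.zeros(n)
for k in range(n - 1):
    y[k + 1] = a_true * y[k] + b_true * u[k] + 0.01 * rng.standard_normal()

# Regression y_{k+1} = [y_k, u_k] @ [a, b]
Phi = np.column_stack([y[:-1], u[:-1]])
theta = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
print("estimated a, b:", theta, "true:", a_true, b_true)
```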

  1. Parameter estimation and prediction of nonlinear biological systems: some examples

    NARCIS (Netherlands)

    Doeswijk, T.G.; Keesman, K.J.

    2006-01-01

    Rearranging and reparameterizing a discrete-time nonlinear model with polynomial quotient structure in input, output and parameters (xk = f(Z, p)) leads to a model linear in its (new) parameters. As a result, the parameter estimation problem becomes a so-called errors-in-variables problem for which

  2. A Novel Nonlinear Parameter Estimation Method of Soft Tissues

    Directory of Open Access Journals (Sweden)

    Qianqian Tong

    2017-12-01

    The elastic parameters of soft tissues are important for medical diagnosis and virtual surgery simulation. In this study, we propose a novel nonlinear parameter estimation method for soft tissues. Firstly, an in-house data acquisition platform was used to obtain external forces and their corresponding deformation values. To provide highly precise data for estimating nonlinear parameters, the measured forces were corrected using the constructed weighted combination forecasting model based on a support vector machine (WCFM_SVM). Secondly, a tetrahedral finite element parameter estimation model was established to describe the physical characteristics of soft tissues, using the substitution parameters of Young’s modulus and Poisson’s ratio to avoid solving complicated nonlinear problems. To improve the robustness of our model and avoid poor local minima, the initial parameters solved by a linear finite element model were introduced into the parameter estimation model. Finally, a self-adapting Levenberg–Marquardt (LM) algorithm was presented, which is capable of adaptively adjusting iterative parameters to solve the established parameter estimation model. The maximum absolute error of our WCFM_SVM model was less than 0.03 N, resulting in more accurate forces in comparison with other correction models tested. The maximum absolute error between the calculated and measured nodal displacements was less than 1.5 mm, demonstrating that our nonlinear parameters are precise.

  3. Automatic smoothing parameter selection in GAMLSS with an application to centile estimation.

    Science.gov (United States)

    Rigby, Robert A; Stasinopoulos, Dimitrios M

    2014-08-01

    A method for automatic selection of the smoothing parameters in a generalised additive model for location, scale and shape (GAMLSS) model is introduced. The method uses a P-spline representation of the smoothing terms to express them as random effect terms with an internal (or local) maximum likelihood estimation on the predictor scale of each distribution parameter to estimate its smoothing parameters. This provides a fast method for estimating multiple smoothing parameters. The method is applied to centile estimation where all four parameters of a distribution for the response variable are modelled as smooth functions of a transformed explanatory variable x. This allows smooth modelling of the location, scale, skewness and kurtosis parameters of the response variable distribution as functions of x.

  4. Robust Parameter and Signal Estimation in Induction Motors

    DEFF Research Database (Denmark)

    Børsting, H.

    This thesis deals with theories and methods for robust parameter and signal estimation in induction motors. The project originates in industrial interests concerning sensor-less control of electrical drives. During the work, some general problems concerning estimation of signals and parameters in nonlinear systems have been exposed. The main objectives of this project are: (1) analysis and application of theories and methods for robust estimation of parameters in a model structure obtained from knowledge of the physics of the induction motor; (2) analysis and application of theories and methods for robust estimation of the rotor speed and driving torque of the induction motor based only on measurements of stator voltages and currents. Only continuous-time models have been used, which means that physically related signals and parameters are estimated directly and not indirectly by some discrete

  5. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

    The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were recorded at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed in PSIM software. From that simulation, variables were recorded to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, but the estimated model offers higher flexibility than the model programmed in PSIM software.

  6. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.

  7. How to fool cosmic microwave background parameter estimation

    International Nuclear Information System (INIS)

    Kinney, William H.

    2001-01-01

    With the release of the data from the Boomerang and MAXIMA-1 balloon flights, estimates of cosmological parameters based on the cosmic microwave background (CMB) have reached unprecedented precision. In this paper I show that it is possible for these estimates to be substantially biased by features in the primordial density power spectrum. I construct primordial power spectra which mimic to within cosmic variance errors the effect of changing parameters such as the baryon density and neutrino mass, meaning that even an ideal measurement would be unable to resolve the degeneracy. Complementary measurements are necessary to resolve this ambiguity in parameter estimation efforts based on CMB temperature fluctuations alone

  8. State Estimation-based Transmission line parameter identification

    Directory of Open Access Journals (Sweden)

    Fredy Andrés Olarte Dussán

    2010-01-01

    This article presents two state-estimation-based algorithms for identifying transmission line parameters. The identification technique used simultaneous state-parameter estimation on an artificial power system composed of several copies of the same transmission line, using measurements at different points in time. The first algorithm used active and reactive power measurements at both ends of the line. The second method used synchronised phasor voltage and current measurements at both ends. The algorithms were tested in simulated conditions on the 30-node IEEE test system. All line parameters for this system were estimated with errors below 1%.

  9. Using linear time-invariant system theory to estimate kinetic parameters directly from projection measurements

    International Nuclear Information System (INIS)

    Zeng, G.L.; Gullberg, G.T.

    1995-01-01

    It is common practice to estimate kinetic parameters from dynamically acquired tomographic data by first reconstructing a dynamic sequence of three-dimensional images and then fitting the parameters to time activity curves generated from the time-varying reconstructed images. However, in SPECT, the pharmaceutical distribution can change during the acquisition of a complete tomographic data set, which can bias the estimated kinetic parameters. It is hypothesized that more accurate estimates of the kinetic parameters can be obtained by fitting to the projection measurements instead of the reconstructed time sequence. Estimation from projections requires knowledge of the relationship between the tissue regions of interest or voxels with particular kinetic parameters and the projection measurements, which results in a complicated nonlinear estimation problem with a series of exponential factors with multiplicative coefficients. A technique is presented in this paper where the exponential decay parameters are estimated separately using linear time-invariant system theory. Once the exponential factors are known, the coefficients of the exponentials can be estimated using linear estimation techniques. Computer simulations demonstrate that estimation of the kinetic parameters directly from the projections is more accurate than the estimation from the reconstructed images
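
    The separation of exponential decay rates (found from linear time-invariant, Prony-type relations) from their multiplicative coefficients (found by linear least squares) can be sketched generically as below; this is a simplified two-exponential example with assumed rates and sampling, not the authors' SPECT projection model.

```python
import numpy as np

# For uniformly sampled y(t) = c1*exp(-k1 t) + c2*exp(-k2 t), linear-prediction equations
# give the decay rates without a nonlinear search; the coefficients then enter linearly.
rng = np.random.default_rng(5)

dt, n = 0.5, 40
t = dt * np.arange(n)
k_true, c_true = np.array([0.2, 1.0]), np.array([3.0, 2.0])
y = c_true @ np.exp(-np.outer(k_true, t)) + 1e-4 * rng.standard_normal(n)

# Step 1: linear prediction y[m] = a1*y[m-1] + a2*y[m-2]; solve for (a1, a2) by LS.
Y = np.column_stack([y[1:-1], y[:-2]])
a = np.linalg.lstsq(Y, y[2:], rcond=None)[0]

# The roots of z^2 - a1*z - a2 are exp(-k*dt); the decay rates follow from their logs.
z = np.roots([1.0, -a[0], -a[1]])
k_hat = np.sort(-np.log(np.abs(z)) / dt)

# Step 2: with the rates fixed, the coefficients are a linear least-squares problem.
E = np.exp(-np.outer(k_hat, t)).T
c_hat = np.linalg.lstsq(E, y, rcond=None)[0]

print("estimated decay rates:", k_hat, "true:", k_true)
print("estimated coefficients:", c_hat, "true:", c_true)
```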

  10. A variational approach to parameter estimation in ordinary differential equations

    Directory of Open Access Journals (Sweden)

    Kaschek Daniel

    2012-08-01

    Background: Ordinary differential equations are widely used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. Results: The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. Conclusions: The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.

  11. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    Science.gov (United States)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from the forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting-up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of Ordinary Differential Equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to Partial Differential Equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it for estimating parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting-up of initial and boundary conditions explicitly, which is often difficult in real world application of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
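
    A stripped-down version of the gradient-matching idea, using a smoothing spline instead of the Gaussian-process machinery of GPODE/AGM, and an assumed logistic-growth ODE rather than the Richards equation: the data are smoothed, the smooth fit is differentiated, and the parameters are chosen so the model right-hand side matches the interpolated derivative, so the ODE is never solved.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import least_squares

# Gradient matching on the assumed ODE dy/dt = r*y*(1 - y/K) with synthetic noisy data.
rng = np.random.default_rng(8)

r_true, K_true, y0 = 0.8, 10.0, 0.5
t = np.linspace(0, 10, 60)
y_exact = K_true / (1 + (K_true / y0 - 1) * np.exp(-r_true * t))
y_obs = y_exact + 0.2 * rng.standard_normal(t.size)

spline = UnivariateSpline(t, y_obs, s=len(t) * 0.2**2)   # smoothing spline fit
y_s, dy_s = spline(t), spline.derivative()(t)

def residual(theta):
    r, K = theta
    return dy_s - r * y_s * (1.0 - y_s / K)              # mismatch of the ODE right-hand side

fit = least_squares(residual, x0=[0.5, 8.0])
print("estimated r, K:", fit.x, "true:", r_true, K_true)
```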

  12. A variational approach to parameter estimation in ordinary differential equations.

    Science.gov (United States)

    Kaschek, Daniel; Timmer, Jens

    2012-08-14

    Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.

  13. Kinetic parameter estimation from attenuated SPECT projection measurements

    International Nuclear Information System (INIS)

    Reutter, B.W.; Gullberg, G.T.

    1998-01-01

    Conventional analysis of dynamically acquired nuclear medicine data involves fitting kinetic models to time-activity curves generated from regions of interest defined on a temporal sequence of reconstructed images. However, images reconstructed from the inconsistent projections of a time-varying distribution of radiopharmaceutical acquired by a rotating SPECT system can contain artifacts that lead to biases in the estimated kinetic parameters. To overcome this problem the authors investigated the estimation of kinetic parameters directly from projection data by modeling the data acquisition process. To accomplish this it was necessary to parametrize the spatial and temporal distribution of the radiopharmaceutical within the SPECT field of view. In a simulated transverse slice, kinetic parameters were estimated for simple one compartment models for three myocardial regions of interest, as well as for the liver. Myocardial uptake and washout parameters estimated by conventional analysis of noiseless simulated data had biases ranging between 1--63%. Parameters estimated directly from the noiseless projection data were unbiased as expected, since the model used for fitting was faithful to the simulation. Predicted uncertainties (standard deviations) of the parameters obtained for 500,000 detected events ranged between 2--31% for the myocardial uptake parameters and 2--23% for the myocardial washout parameters

  14. Models for estimating photosynthesis parameters from in situ production profiles

    Science.gov (United States)

    Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

    2017-12-01

    The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of
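
    As a small, hedged illustration of the parameter-recovery step, one commonly used photosynthesis-irradiance form, P(I) = Pm * (1 - exp(-alpha * I / Pm)), can be fitted to synthetic production data to recover the initial slope alpha and the assimilation number Pm. The functional form, units, and data below are assumptions for illustration; the paper works with full daily production profiles and a suite of such functions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit an assumed photosynthesis-irradiance curve to synthetic production measurements.
rng = np.random.default_rng(6)

def pi_curve(I, alpha, Pm):
    return Pm * (1.0 - np.exp(-alpha * I / Pm))

alpha_true, Pm_true = 0.08, 5.0        # illustrative parameter values
I = np.linspace(0, 400, 40)            # irradiance levels (arbitrary units)
P = pi_curve(I, alpha_true, Pm_true) + 0.1 * rng.standard_normal(I.size)

popt, pcov = curve_fit(pi_curve, I, P, p0=[0.05, 3.0])
print("alpha, Pm estimates:", popt)
print("one-sigma uncertainties:", np.sqrt(np.diag(pcov)))
```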

  15. Periodic orbits of hybrid systems and parameter estimation via AD

    International Nuclear Information System (INIS)

    Guckenheimer, John; Phipps, Eric Todd; Casey, Richard

    2004-01-01

    Rhythmic, periodic processes are ubiquitous in biological systems; for example, the heartbeat, walking, circadian rhythms and the menstrual cycle. Modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging because: (1) (most) nonlinear differential equations can only be solved numerically; (2) accurate computation requires solving boundary value problems; (3) many problems and solutions are only piecewise smooth; (4) many problems require solving differential-algebraic equations; (5) sensitivity information for parameter dependence of solutions requires solving variational equations; and (6) truncation errors in numerical integration degrade performance of optimization methods for parameter estimation. In addition, mathematical models of biological processes frequently contain many poorly known parameters, and the problems associated with this impede the construction of detailed, high-fidelity models. Modelers are often faced with the difficult problem of using simulations of a nonlinear model, with complex dynamics and many parameters, to match experimental data. Improved computational tools for exploring parameter space and fitting models to data are clearly needed. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to accurately and efficiently evaluate derivatives for time integration, parameter sensitivities, root finding and optimization. The boundary value problem representing a periodic orbit in a hybrid system of differential algebraic equations is discretized via multiple-shooting using a high-degree Taylor series integration method (GM00, Phi03). Numerical solutions to the shooting equations are then estimated by a Newton process yielding an approximate periodic orbit. A metric is defined for computing the distance

  16. REML estimates of genetic parameters of sexual dimorphism for ...

    Indian Academy of Sciences (India)

    Administrator

    Full and half sibs were distinguished, in contrast to usual isofemale studies in which animals ... Thus, the aim of this study was to estimate genetic parameters of sexual dimorphism in isofemale lines using ... Muscovy ducks.

  17. The Problems of Multiple Feedback Estimation.

    Science.gov (United States)

    Bulcock, Jeffrey W.

    The use of two-stage least squares (2SLS) for the estimation of feedback linkages is inappropriate for nonorthogonal data sets because 2SLS is extremely sensitive to multicollinearity. It is argued that what is needed is use of a different estimating criterion than the least squares criterion. Theoretically the variance normalization criterion has…

  18. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability, and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm to a different computational resource. The key feature of the implementation is a relational database that allows the user to swap candidate solutions among the working nodes during the computation. The comparison of the distributed implementation with the parallel one showed that the presented approach enables faster and better parameter estimation of systems biology models.

  19. Kinetic parameter estimation from SPECT cone-beam projection measurements

    International Nuclear Information System (INIS)

    Huesman, Ronald H.; Reutter, Bryan W.; Zeng, G. Larry; Gullberg, Grant T.

    1998-01-01

    Kinetic parameters are commonly estimated from dynamically acquired nuclear medicine data by first reconstructing a dynamic sequence of images and subsequently fitting the parameters to time-activity curves generated from regions of interest overlaid upon the image sequence. Biased estimates can result from images reconstructed using inconsistent projections of a time-varying distribution of radiopharmaceutical acquired by a rotating SPECT system. If the SPECT data are acquired using cone-beam collimators wherein the gantry rotates so that the focal point of the collimators always remains in a plane, additional biases can arise from images reconstructed using insufficient, as well as truncated, projection samples. To overcome these problems we have investigated the estimation of kinetic parameters directly from SPECT cone-beam projection data by modelling the data acquisition process. To accomplish this it was necessary to parametrize the spatial and temporal distribution of the radiopharmaceutical within the SPECT field of view. In a simulated chest image volume, kinetic parameters were estimated for simple one-compartment models for four myocardial regions of interest. Myocardial uptake and washout parameters estimated by conventional analysis of noiseless simulated cone-beam data had biases ranging between 3-26% and 0-28%, respectively. Parameters estimated directly from the noiseless projection data were unbiased as expected, since the model used for fitting was faithful to the simulation. Statistical uncertainties of parameter estimates for 10 000 000 events ranged between 0.2-9% for the uptake parameters and between 0.3-6% for the washout parameters. (author)

  20. Kalman filter data assimilation: targeting observations and parameter estimation.

    Science.gov (United States)

    Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

    2014-06-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.

  1. Kalman filter data assimilation: Targeting observations and parameter estimation

    International Nuclear Information System (INIS)

    Bellsky, Thomas; Kostelich, Eric J.; Mahalov, Alex

    2014-01-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation

  2. Kalman filter estimation of RLC parameters for UMP transmission line

    Directory of Open Access Journals (Sweden)

    Mohd Amin Siti Nur Aishah

    2018-01-01

    This paper presents the development of a Kalman filter for estimating the resistance (R), inductance (L), and capacitance (C) values of the Universiti Malaysia Pahang (UMP) short transmission line. To overcome the weaknesses of the existing system, such as power losses in the transmission line, the Kalman filter can be a better solution for estimating the parameters. The aim of this paper is to estimate RLC values using a Kalman filter, which in the end can increase the system efficiency at UMP. In this research, a MATLAB Simulink model is developed to analyse the UMP short transmission line under different noise conditions in order to represent certain unknown parameters which are difficult to predict. The data are then used to compare calculated and estimated values. The results illustrate that the Kalman filter estimates the RLC parameters accurately, with small error. A comparison of the accuracy of the Kalman filter and the least-squares method is also presented to evaluate their performance.
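
    A generic sketch of Kalman-filter parameter estimation in this spirit: the unknown R and L of a series RL line are treated as a random-walk state and updated from voltage and current samples. The line model, sampling rate, and noise levels below are illustrative assumptions, not the UMP line data or the authors' Simulink model.

```python
import numpy as np

# Treat theta = [R, L] as a random-walk state and estimate it with a linear Kalman filter
# from the measurement equation v_k = R*i_k + L*di_k + noise.
rng = np.random.default_rng(2)

R_true, L_true = 0.5, 2e-3                  # assumed line parameters
dt, n = 1e-4, 500
i_sig = np.sin(2 * np.pi * 50 * np.arange(n) * dt)      # 50 Hz current waveform
di_sig = np.gradient(i_sig, dt)                          # sampled current derivative
v_meas = R_true * i_sig + L_true * di_sig + 0.05 * rng.standard_normal(n)

theta = np.zeros(2)                 # state estimate [R, L]
P = np.eye(2)                       # state covariance
Q = np.eye(2) * 1e-9                # small random-walk (process) noise
r_var = 0.05**2                     # measurement noise variance

for k in range(n):
    # Predict: parameters follow a random walk, so only the covariance grows.
    P = P + Q
    # Update with the scalar measurement v_k = H @ theta + noise, H = [i_k, di_k].
    H = np.array([i_sig[k], di_sig[k]])
    S = H @ P @ H + r_var
    K = P @ H / S
    theta = theta + K * (v_meas[k] - H @ theta)
    P = P - np.outer(K, H @ P)

print("estimated R, L:", theta, "true:", R_true, L_true)
```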

  3. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.

  4. SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.

    Science.gov (United States)

    Zi, Zhike

    2011-04-01

    Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.

  5. State and parameter estimation in biotechnical batch reactors

    NARCIS (Netherlands)

    Keesman, K.J.

    2000-01-01

    In this paper the problem of state and parameter estimation in biotechnical batch reactors is considered. Models describing the biotechnical process behaviour are usually nonlinear with time-varying parameters. Hence, the resulting large dimensions of the augmented state vector, roughly > 7, in

  6. On the Nature of SEM Estimates of ARMA Parameters.

    Science.gov (United States)

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2002-01-01

    Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…

  7. On robust parameter estimation in brain-computer interfacing

    Science.gov (United States)

    Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert

    2017-12-01

    Objective. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.
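
    As a small illustration of why robustness matters for trial-structured data, the sketch below contrasts the sample covariance with scikit-learn's Minimum Covariance Determinant estimator on synthetic trials containing a few artifact-laden outliers; it is a generic robust estimator, not the minimum-divergence, structure-aware estimator proposed in the paper.

        # A minimal sketch with synthetic "trials": classical versus robust (MCD)
        # covariance estimation when a few trials are corrupted by artifacts.
        import numpy as np
        from sklearn.covariance import EmpiricalCovariance, MinCovDet

        rng = np.random.default_rng(0)
        n_trials, n_channels = 200, 8
        true_cov = np.diag(np.linspace(1.0, 3.0, n_channels))
        X = rng.multivariate_normal(np.zeros(n_channels), true_cov, size=n_trials)
        X[:10] += rng.normal(0.0, 25.0, size=(10, n_channels))   # outlier trials

        emp = EmpiricalCovariance().fit(X)
        mcd = MinCovDet(random_state=0).fit(X)

        def error(C):
            return np.linalg.norm(C - true_cov)

        print("empirical covariance error:", error(emp.covariance_))
        print("robust (MCD) covariance error:", error(mcd.covariance_))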

  8. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  9. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  10. Estimation of genetic parameters for body weights of Kurdish sheep ...

    African Journals Online (AJOL)

    Genetic parameters and (co)variance components were estimated by restricted maximum likelihood (REML) procedure, using animal models of kind 1, 2, 3, 4, 5 and 6, for body weight in birth, three, six, nine and 12 months of age in a Kurdish sheep flock. Direct and maternal breeding values were estimated using the best ...

  11. Aircraft parameter estimation – A tool for development of ...

    Indian Academy of Sciences (India)

    In addition, actuator performance and controller gains may be flight condition dependent. Moreover, this approach may result in open-loop parameter estimates with low accuracy. 6. Aerodynamic databases for high fidelity flight simulators. Estimation of a comprehensive aerodynamic model suitable for a flight simulator is an.

  12. Test models for improving filtering with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

  13. A Note On the Estimation of the Poisson Parameter

    Directory of Open Access Journals (Sweden)

    S. S. Chitgopekar

    1985-01-01

    distribution when there are errors in observing the zeros and ones and obtains both the maximum likelihood and moments estimates of the Poisson mean and the error probabilities. It is interesting to note that either method fails to give unique estimates of these parameters unless the error probabilities are functionally related. However, it is equally interesting to observe that the estimate of the Poisson mean does not depend on the functional relationship between the error probabilities.

  14. Adaptive distributed parameter and input estimation in linear parabolic PDEs

    KAUST Repository

    Mechhoud, Sarra

    2016-01-01

    In this paper, we discuss the on-line estimation of distributed source term, diffusion, and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions of the input and the parameter simultaneous estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on tokamak plasma heat transport model using simulated data.

  15. Parameter Estimation of Damped Compound Pendulum Using Bat Algorithm

    Directory of Open Access Journals (Sweden)

    Saad Mohd Sazli

    2016-01-01

    Full Text Available In this study, parameter identification of a damped compound pendulum system is proposed using one of the most promising nature-inspired algorithms, the Bat Algorithm (BA). The procedure used to identify the parameters of the experimental system consists of input-output data collection, ARX model order selection, and parameter estimation using the BA method. A PRBS signal is used as the input signal to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the autoregressive with exogenous input (ARX) model. The performance of the model is validated using the mean squared error (MSE) between the actual and predicted output responses. Finally, a comparative study is conducted between BA and a conventional estimation method (least squares). Based on the results obtained, BA produces a lower MSE than the least squares (LS) method.
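
    For reference, the ARX parameter estimation step that the Bat Algorithm is compared against can be written down directly as an ordinary least-squares problem. The sketch below uses a simulated second-order ARX system with a PRBS-like input; the model order, parameter values, and noise level are hypothetical.

        # A minimal sketch: least-squares estimation of a second-order ARX model
        # y[k] = -a1*y[k-1] - a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + e[k].
        import numpy as np

        rng = np.random.default_rng(0)
        N = 500
        u = np.sign(rng.standard_normal(N))          # PRBS-like input
        a1, a2, b1, b2 = -1.5, 0.7, 1.0, 0.5         # "true" parameters
        y = np.zeros(N)
        for k in range(2, N):
            y[k] = (-a1 * y[k-1] - a2 * y[k-2] + b1 * u[k-1] + b2 * u[k-2]
                    + 0.05 * rng.standard_normal())

        # Regression matrix: columns are [-y[k-1], -y[k-2], u[k-1], u[k-2]].
        Phi = np.column_stack([-y[1:N-1], -y[0:N-2], u[1:N-1], u[0:N-2]])
        theta, *_ = np.linalg.lstsq(Phi, y[2:N], rcond=None)
        print("estimated [a1, a2, b1, b2]:", theta)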

  16. Iterative methods for distributed parameter estimation in parabolic PDE

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)

    1994-12-31

    The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the 'forward problem' is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.

  17. Method for Estimating the Parameters of LFM Radar Signal

    Directory of Open Access Journals (Sweden)

    Tan Chuan-Zhang

    2017-01-01

    Full Text Available In order to obtain reliable parameter estimates, it is very important to preserve the integrity of the linear frequency modulation (LFM) signal. Therefore, in practical LFM radar signal processing, the length of the data frame is often greater than the pulse width (PW) of the signal. In this condition, estimating the parameters by the fractional Fourier transform (FrFT) causes the signal-to-noise ratio (SNR) to decrease. To address this problem, we multiply the data frame by a Gaussian window to improve the SNR. In addition, to further improve the precision of the parameter estimates, a novel algorithm is derived via Lagrange interpolation polynomials, and we enhance the algorithm with a logarithmic transformation. Simulation results demonstrate that the derived algorithm significantly reduces the estimation errors of the chirp rate and initial frequency.

  18. Genetic parameter estimation of reproductive traits of Litopenaeus vannamei

    Science.gov (United States)

    Tan, Jian; Kong, Jie; Cao, Baoxiang; Luo, Kun; Liu, Ning; Meng, Xianhong; Xu, Shengyu; Guo, Zhaojia; Chen, Guoliang; Luan, Sheng

    2017-02-01

    In this study, the heritability, repeatability, phenotypic correlation, and genetic correlation of the reproductive and growth traits of L. vannamei were investigated and estimated. Eight traits of 385 shrimp from forty-two families were measured, including the number of eggs (EN), number of nauplii (NN), egg diameter (ED), spawning frequency (SF), spawning success (SS), female body weight (BW) and body length (BL) at insemination, and condition factor (K). A total of 519 spawning records (including multiple spawnings) and 91 non-spawning records were collected. The genetic parameters were estimated using an animal model, a multinomial logit model (for SF), and a sire-dam probit model (for SS). Because there were repeated records, permanent environmental effects were included in the models. The heritability estimates for BW, BL, EN, NN, ED, SF, SS, and K were 0.49 ± 0.14, 0.51 ± 0.14, 0.12 ± 0.08, 0, 0.01 ± 0.04, 0.06 ± 0.06, 0.18 ± 0.07, and 0.10 ± 0.06, respectively. The genetic correlation was 0.99 ± 0.01 between BW and BL, 0.90 ± 0.19 between BW and EN, 0.22 ± 0.97 between BW and ED, -0.77 ± 1.14 between EN and ED, and -0.27 ± 0.36 between BW and K. The heritability of EN estimated without a covariate was 0.12 ± 0.08, and the genetic correlation between BW and EN was 0.90 ± 0.19, indicating that selection on BW may be used in breeding programs to genetically improve the reproductive output of L. vannamei. For EN, the data were also analyzed using body weight as a covariate (EN-2). The heritability of EN-2 was 0.03 ± 0.05, indicating that it is difficult to improve the reproductive output itself by direct genetic selection. Furthermore, excessive pursuit of such selection often comes at the expense of growth speed. Therefore, the selection of high-performance spawners using BW and SS may be an important strategy to improve nauplii production.

  19. Simple method for quick estimation of aquifer hydrogeological parameters

    Science.gov (United States)

    Ma, C.; Li, Y. Y.

    2017-08-01

    Development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a linear regression equation was then established. The aquifer parameters could be obtained by solving for the coefficients of the regression equation. The application of the proposed method was illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters. The proposed method could reliably identify the aquifer parameters from long-distance observed drawdowns and early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
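
    The same task can be sketched with a standard nonlinear least-squares fit of the Theis solution, s = Q/(4πT)·W(u) with u = r²S/(4Tt), using SciPy's exponential integral for the well function; the pumping rate, radius, and aquifer parameters below are hypothetical, and the sketch is not the regression-based shortcut proposed in the paper.

        # A minimal sketch: fitting transmissivity T and storativity S to synthetic
        # drawdown data with the Theis well function W(u) = E1(u).
        import numpy as np
        from scipy.special import exp1
        from scipy.optimize import curve_fit

        Q, r = 0.02, 50.0                     # pumping rate (m^3/s), radius (m)
        t = np.logspace(2, 5, 30)             # time since pumping started (s)

        def theis_drawdown(t, T, S):
            u = r**2 * S / (4.0 * T * t)
            return Q / (4.0 * np.pi * T) * exp1(u)

        T_true, S_true = 1.5e-3, 2.0e-4
        rng = np.random.default_rng(0)
        s_obs = theis_drawdown(t, T_true, S_true) * (1 + 0.02 * rng.standard_normal(t.size))

        (T_est, S_est), _ = curve_fit(theis_drawdown, t, s_obs, p0=(1e-3, 1e-4))
        print(f"T = {T_est:.3e} m^2/s, S = {S_est:.3e}")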

  20. A software for parameter estimation in dynamic models

    Directory of Open Access Journals (Sweden)

    M. Yuceer

    2008-12-01

    Full Text Available A common problem in dynamic systems is determining the parameters of an equation used to represent experimental data. The goal is to determine the values of the model parameters that provide the best fit to the measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software lacks generality, while other packages do not provide ease of use. A user-interactive parameter estimation software was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software (PARES) has been developed in the MATLAB environment. When tested with extensive example problems from the literature, the suggested approach provides good agreement between predicted and observed data with relatively less computing time and fewer iterations.
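
    The core of such an integration-based approach is to solve the model ODE inside the residual function and let a least-squares routine adjust the parameters. The sketch below does this for a hypothetical first-order reaction dC/dt = -kC with synthetic data; it is not the PARES software itself.

        # A minimal sketch: integration-based estimation of a rate constant k by
        # nonlinear least squares, integrating the ODE at each iteration.
        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        t_obs = np.linspace(0.0, 10.0, 15)
        k_true, C0 = 0.35, 1.0
        rng = np.random.default_rng(0)
        C_obs = C0 * np.exp(-k_true * t_obs) + 0.01 * rng.standard_normal(t_obs.size)

        def residuals(params):
            (k,) = params
            sol = solve_ivp(lambda t, C: -k * C, (t_obs[0], t_obs[-1]), [C0],
                            t_eval=t_obs, rtol=1e-8)
            return sol.y[0] - C_obs

        fit = least_squares(residuals, x0=[0.1], bounds=(0.0, np.inf))
        print("estimated rate constant k:", fit.x[0])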

  1. Joint Estimation of Multiple Precision Matrices with Common Structures.

    Science.gov (United States)

    Lee, Wonyul; Liu, Yufeng

    Estimation of inverse covariance matrices, known as precision matrices, is important in various areas of statistical analysis. In this article, we consider estimation of multiple precision matrices sharing some common structures. In this setting, estimating each precision matrix separately can be suboptimal as it ignores potential common structures. This article proposes a new approach to parameterize each precision matrix as a sum of common and unique components and estimate multiple precision matrices in a constrained ℓ1-minimization framework. We establish both estimation and selection consistency of the proposed estimator in the high dimensional setting. The proposed estimator achieves a faster convergence rate for the common structure in certain cases. Our numerical examples demonstrate that our new estimator can perform better than several existing methods in terms of the entropy loss and Frobenius loss. An application to a glioblastoma cancer data set reveals some interesting gene networks across multiple cancer subtypes.
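
    For a sense of what sparse precision-matrix estimation looks like in practice, the sketch below fits a single precision matrix with the graphical lasso on synthetic data; unlike the paper's method, it has no common-plus-unique decomposition across multiple matrices and would simply be run separately on each data set.

        # A minimal sketch: graphical-lasso estimation of one sparse precision
        # matrix from synthetic Gaussian data with a tridiagonal true precision.
        import numpy as np
        from sklearn.covariance import GraphicalLasso

        rng = np.random.default_rng(0)
        p = 10
        precision_true = np.eye(p)
        for i in range(p - 1):                     # sparse tridiagonal structure
            precision_true[i, i + 1] = precision_true[i + 1, i] = 0.4
        cov_true = np.linalg.inv(precision_true)
        X = rng.multivariate_normal(np.zeros(p), cov_true, size=500)

        model = GraphicalLasso(alpha=0.05).fit(X)
        print("estimated precision (rounded):")
        print(np.round(model.precision_, 2))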

  2. Traveltime approximations and parameter estimation for orthorhombic media

    KAUST Repository

    Masmoudi, Nabil

    2016-05-30

    Building anisotropy models is necessary for seismic modeling and imaging. However, anisotropy estimation is challenging due to the trade-off between inhomogeneity and anisotropy. Luckily, we can estimate the anisotropy parameters if we relate them analytically to traveltimes. Using perturbation theory, we have developed traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2, and Δχ in inhomogeneous background media. The parameter Δχ is related to the Tsvankin-Thomsen notation and ensures easier computation of traveltimes in the background model. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. We have used the Shanks transform to enhance the accuracy of the formulas. A homogeneous-medium simplification of the traveltime expansion provided a nonhyperbolic moveout description of the traveltime that was more accurate than other derived approximations. Moreover, the formulation provides a computationally efficient tool to solve the eikonal equation of an orthorhombic medium, without any constraints on the background model complexity. Although the expansion is based on the factorized representation of the perturbation parameters, smooth variations of these parameters (represented as effective values) provide reasonable results. Thus, this formulation provides a mechanism to estimate the three effective parameters η1, η2, and Δχ. We have derived Dix-type formulas for orthorhombic media to convert the effective parameters to their interval values.

  3. Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

    Science.gov (United States)

    Hoshino, Takahiro; Shigemasu, Kazuo

    2008-01-01

    The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…

  4. Assumptions of the primordial spectrum and cosmological parameter estimation

    International Nuclear Information System (INIS)

    Shafieloo, Arman; Souradeep, Tarun

    2011-01-01

    The observables of the perturbed universe, cosmic microwave background (CMB) anisotropy and large structures depend on a set of cosmological parameters, as well as the assumed nature of primordial perturbations. In particular, the shape of the primordial power spectrum (PPS) is, at best, a well-motivated assumption. It is known that the assumed functional form of the PPS in cosmological parameter estimation can affect the best-fit-parameters and their relative confidence limits. In this paper, we demonstrate that a specific assumed form actually drives the best-fit parameters into distinct basins of likelihood in the space of cosmological parameters where the likelihood resists improvement via modifications to the PPS. The regions where considerably better likelihoods are obtained allowing free-form PPS lie outside these basins. In the absence of a preferred model of inflation, this raises a concern that current cosmological parameter estimates are strongly prejudiced by the assumed form of PPS. Our results strongly motivate approaches toward simultaneous estimation of the cosmological parameters and the shape of the primordial spectrum from upcoming cosmological data. It is equally important for theorists to keep an open mind towards early universe scenarios that produce features in the PPS. (paper)

  5. Small sample GEE estimation of regression parameters for longitudinal data.

    Science.gov (United States)

    Paul, Sudhir; Zhang, Xuemao

    2014-09-28

    Longitudinal (clustered) response data arise in many bio-statistical applications which, in general, cannot be assumed to be independent. Generalized estimating equation (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias adjusted GEE estimators of the regression parameters in longitudinal data are obtained when the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both the bias-corrected methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of correlation structure, and impact of cluster size on bias correction. Both these methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, the method GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.

  6. Adaptive distributed parameter and input estimation in linear parabolic PDEs

    KAUST Repository

    Mechhoud, Sarra

    2016-01-01

    In this paper, we discuss the on-line estimation of distributed source term, diffusion, and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions of the input and the parameter simultaneous estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on tokamak plasma heat transport model using simulated data.

  7. Pattern statistics on Markov chains and sensitivity to parameter estimation

    Directory of Open Access Journals (Sweden)

    Nuel Grégory

    2006-10-01

    Full Text Available Abstract Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition. Usually its parameters must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant words common to a set of sequences, etc.). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta-method to give an explicit expression for σ, the standard deviation of a pattern statistic. This result is validated using simulations and a simple pattern study is also considered. Conclusion: We establish that the use of a high-order Markov model could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation.

  8. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent variables and a dependent variable. In logistic regression the dependent variable is categorical, and the model is used to calculate odds. When the dependent variable has ordered levels, the ordinal logistic regression model is used. The GWOLR model is an ordinal logistic regression model that takes into account the geographical location of the observation site. Parameter estimation is needed to infer population values from a sample. The purpose of this research is to estimate the parameters of the GWOLR model using the R software. The parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units are 144 villages in Semarang City. The results of the research give a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.
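
    As a non-spatial baseline for the GWOLR model, the sketch below fits a plain ordinal (proportional-odds) logit model with statsmodels on synthetic data; the covariate, thresholds, and sample size are hypothetical, and the geographic weighting that defines GWOLR is deliberately omitted.

        # A minimal sketch: ordinal logistic regression with statsmodels'
        # OrderedModel (no geographic weighting).
        import numpy as np
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        rng = np.random.default_rng(0)
        n = 300
        x = rng.normal(size=n)                          # hypothetical covariate
        latent = 1.2 * x + rng.logistic(size=n)         # latent propensity
        y = pd.Series(pd.cut(latent, bins=[-np.inf, -1.0, 1.0, np.inf],
                             labels=["low", "medium", "high"], ordered=True))

        model = OrderedModel(y, x[:, None], distr="logit")
        result = model.fit(method="bfgs", disp=False)
        print(result.params)                            # slope and threshold parameters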

  9. Parameter Estimation of Damped Compound Pendulum Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    Saad Mohd Sazli

    2016-01-01

    Full Text Available This paper presents the parameter identification of a damped compound pendulum using the differential evolution algorithm. The procedure used to identify the parameters of the experimental system consists of input-output data collection, ARX model order selection, and parameter estimation using the conventional least squares (LS) method and the differential evolution (DE) algorithm. A PRBS signal is used as the input signal to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the ARX model. The residual error between the actual and predicted output responses of the models is evaluated using the mean squared error (MSE). The analysis showed that the MSE value for LS is 0.0026 and the MSE value for DE is 3.6601×10⁻⁵. Based on the results obtained, DE has a lower MSE than the LS method.

  10. CTER—Rapid estimation of CTF parameters with error assessment

    Energy Technology Data Exchange (ETDEWEB)

    Penczek, Pawel A., E-mail: Pawel.A.Penczek@uth.tmc.edu [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Fang, Jia [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Li, Xueming; Cheng, Yifan [The Keck Advanced Microscopy Laboratory, Department of Biochemistry and Biophysics, University of California, San Francisco, CA 94158 (United States); Loerke, Justus; Spahn, Christian M.T. [Institut für Medizinische Physik und Biophysik, Charité – Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin (Germany)

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user it is necessary to provide an assessment of the errors of fitted parameters values. In this work we introduce CTER, a CTF parameters estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. - Highlights: • We describe methodology for estimation of CTF parameters with error assessment. • Error estimates provide means for automated elimination of inferior micrographs. • High computational efficiency allows real-time monitoring of EM data quality. • Accurate CTF estimation yields structure of the 80S human ribosome at 3.85 Å.

  11. An approach of parameter estimation for non-synchronous systems

    International Nuclear Information System (INIS)

    Xu Daolin; Lu Fangfang

    2005-01-01

    Synchronization-based parameter estimation is simple and effective but only applicable to synchronous systems. To overcome this limitation, we propose a technique in which the parameters of an unknown physical process (possibly a non-synchronous system) can be identified from a time series via a minimization procedure based on a synchronization control. The feasibility of this approach is illustrated in several chaotic systems.

  12. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all...... the parameter values are optimal for simulation or prediction. The data originates from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature and one output data series...

  13. Estimation of octanol/water partition coefficients using LSER parameters

    Science.gov (United States)

    Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.

    1998-01-01

    The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, but only moderate accuracy should be expected.

  14. Application of genetic algorithms for parameter estimation in liquid chromatography

    International Nuclear Information System (INIS)

    Hernandez Torres, Reynier; Irizar Mesa, Mirtha; Tavares Camara, Leoncio Diogenes

    2012-01-01

    In chromatography, complex inverse problems related to parameter estimation and process optimization arise. Metaheuristic methods are general-purpose approximate algorithms which seek, and hopefully find, good solutions at a reasonable computational cost. These methods are iterative procedures that perform a robust search of the solution space. Genetic algorithms are optimization techniques based on the principles of genetics and natural selection. They have demonstrated very good performance as global optimizers in many types of applications, including inverse problems. In this work, the effectiveness of genetic algorithms for estimating parameters in liquid chromatography is investigated.
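
    To show the flavour of such population-based searches, the sketch below fits two parameters of a hypothetical log-linear retention model by minimizing a sum of squared errors with SciPy's differential evolution, a related evolutionary metaheuristic rather than the genetic algorithm used in the work.

        # A minimal sketch: evolutionary (differential evolution) estimation of two
        # parameters of a hypothetical chromatographic retention model.
        import numpy as np
        from scipy.optimize import differential_evolution

        phi = np.linspace(0.2, 0.8, 12)                 # mobile-phase composition
        model = lambda p: p[0] * np.exp(-p[1] * phi)    # log-linear retention model
        rng = np.random.default_rng(0)
        k_obs = model((25.0, 6.0)) * (1 + 0.03 * rng.standard_normal(phi.size))

        def sse(params):
            return np.sum((model(params) - k_obs) ** 2)

        result = differential_evolution(sse, bounds=[(1.0, 100.0), (0.1, 20.0)], seed=0)
        print("estimated parameters:", result.x)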

  15. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis

  16. Targeted estimation of nuisance parameters to obtain valid statistical inference.

    Science.gov (United States)

    van der Laan, Mark J

    2014-01-01

    In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special

  17. Revisiting Boltzmann learning: parameter estimation in Markov random fields

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Andersen, Lars Nonboe; Kjems, Ulrik

    1996-01-01

    This article presents a generalization of the Boltzmann machine that allows us to use the learning rule for a much wider class of maximum likelihood and maximum a posteriori problems, including both supervised and unsupervised learning. Furthermore, the approach allows us to discuss regularization...... and generalization in the context of Boltzmann machines. We provide an illustrative example concerning parameter estimation in an inhomogeneous Markov field. The regularized adaptation produces a parameter set that closely resembles the “teacher” parameters, hence, will produce segmentations that closely reproduce...

  18. SCoPE: an efficient method of Cosmological Parameter Estimation

    International Nuclear Information System (INIS)

    Das, Santanu; Souradeep, Tarun

    2014-01-01

    The Markov Chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsic serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation named the Slick Cosmological Parameter Estimator (SCoPE), which employs delayed rejection to increase the acceptance rate of a chain and pre-fetching that helps an individual chain run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and that the chains converge faster. Using SCoPE, we carry out cosmological parameter estimations with different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters for two illustrative, commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis, on the one hand, help us to understand the workings of SCoPE better and, on the other hand, provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
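
    The basic building block that SCoPE accelerates is the Metropolis-Hastings step. The sketch below samples a toy two-parameter Gaussian likelihood with plain Metropolis-Hastings; it omits SCoPE's delayed rejection, pre-fetching, and inter-chain covariance updates, and the data and proposal scale are hypothetical.

        # A minimal sketch: Metropolis-Hastings sampling of (mu, log sigma) for a
        # Gaussian likelihood with a symmetric random-walk proposal.
        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.normal(loc=2.0, scale=1.5, size=200)     # synthetic observations

        def log_like(theta):
            mu, log_sigma = theta
            sigma = np.exp(log_sigma)
            return -0.5 * np.sum(((data - mu) / sigma) ** 2) - data.size * log_sigma

        theta = np.array([0.0, 0.0])
        ll = log_like(theta)
        chain = []
        for _ in range(5000):
            proposal = theta + 0.05 * rng.standard_normal(2)
            ll_prop = log_like(proposal)
            if np.log(rng.uniform()) < ll_prop - ll:        # accept/reject
                theta, ll = proposal, ll_prop
            chain.append(theta.copy())

        chain = np.array(chain)[1000:]                      # discard burn-in
        print("posterior means (mu, sigma):",
              chain[:, 0].mean(), np.exp(chain[:, 1]).mean())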

  19. Estimation of Compaction Parameters Based on Soil Classification

    Science.gov (United States)

    Lubis, A. S.; Muis, Z. A.; Hastuty, I. P.; Siregar, I. M.

    2018-02-01

    Factors that must be considered in soil compaction works are the type of soil material, field control, maintenance, and availability of funds. These problems raise the question of how to estimate soil density with an implementation system that is proper, fast, and economical. This study aims to estimate the compaction parameters, i.e. the maximum dry unit weight (γdmax) and the optimum water content (wopt), based on soil classification. Each of 30 samples was tested for its index properties and compaction behaviour. All of the laboratory test data were used to estimate the compaction parameter values using linear regression and the Goswami model. The soil types were A-4, A-6, and A-7 according to AASHTO and SC, SC-SM, and CL based on USCS. By linear regression, the estimates are γdmax* = 1.862 − 0.005·FINES − 0.003·LL and wopt* = −0.607 + 0.362·FINES + 0.161·LL. By the Goswami model (with equation Y = m·log G + k), the estimate of γdmax* uses m = −0.376 and k = 2.482, and the estimate of wopt* uses m = 21.265 and k = −32.421. For both of these equations a 95% confidence interval was obtained.
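
    Applied directly, the reported linear-regression equations give quick point estimates of the compaction parameters from the fines content and liquid limit, as in the short sketch below (the example input values are hypothetical).

        # A minimal sketch: compaction parameter estimates from the study's
        # linear-regression equations (FINES and LL in percent).
        def estimate_compaction(fines, liquid_limit):
            """Return (gamma_dmax, w_opt) from the reported regression equations."""
            gamma_dmax = 1.862 - 0.005 * fines - 0.003 * liquid_limit   # max dry unit weight
            w_opt = -0.607 + 0.362 * fines + 0.161 * liquid_limit       # optimum water content
            return gamma_dmax, w_opt

        # Hypothetical soil with 55% fines and a liquid limit of 40%.
        print(estimate_compaction(55.0, 40.0))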

  20. Flexible carbon nanotube nanocomposite sensor for multiple physiological parameter monitoring

    KAUST Repository

    Nag, Anindya

    2016-10-16

    The paper presents the design, development, and fabrication of a flexible and wearable sensor based on a carbon nanotube nanocomposite for monitoring specific physiological parameters. Polydimethylsiloxane (PDMS) was used as the substrate, with a thin layer of a nanocomposite comprising functionalized multi-walled carbon nanotubes (MWCNTs) and PDMS as electrodes. The sensor patch relies on strain-sensitive capacitive sensing with interdigitated electrodes, which were patterned with a laser on the nanocomposite layer. The thickness of the electrode layer was optimized with respect to strain and conductivity. The sensor patch was connected to a monitoring device at one end and attached to the body at the other for testing purposes. Experimental results show the capability of the sensor patch to detect respiration and limb movements. This work is a stepping stone toward a sensing system for multiple physiological parameters.

  1. Flexible carbon nanotube nanocomposite sensor for multiple physiological parameter monitoring

    KAUST Repository

    Nag, Anindya; Mukhopadhyay, Subhas Chandra; Kosel, Jürgen

    2016-01-01

    The paper presents the design, development, and fabrication of a flexible and wearable sensor based on a carbon nanotube nanocomposite for monitoring specific physiological parameters. Polydimethylsiloxane (PDMS) was used as the substrate, with a thin layer of a nanocomposite comprising functionalized multi-walled carbon nanotubes (MWCNTs) and PDMS as electrodes. The sensor patch relies on strain-sensitive capacitive sensing with interdigitated electrodes, which were patterned with a laser on the nanocomposite layer. The thickness of the electrode layer was optimized with respect to strain and conductivity. The sensor patch was connected to a monitoring device at one end and attached to the body at the other for testing purposes. Experimental results show the capability of the sensor patch to detect respiration and limb movements. This work is a stepping stone toward a sensing system for multiple physiological parameters.

  2. Estimation of object motion parameters from noisy images.

    Science.gov (United States)

    Broida, T J; Chellappa, R

    1986-01-01

    An approach is presented for the estimation of object motion parameters based on a sequence of noisy images. The problem considered is that of a rigid body undergoing unknown rotational and translational motion. The measurement data consists of a sequence of noisy image coordinates of two or more object correspondence points. By modeling the object dynamics as a function of time, estimates of the model parameters (including motion parameters) can be extracted from the data using recursive and/or batch techniques. This permits a desired degree of smoothing to be achieved through the use of an arbitrarily large number of images. Some assumptions regarding object structure are presently made. Results are presented for a recursive estimation procedure: the case considered here is that of a sequence of one dimensional images of a two dimensional object. Thus, the object moves in one transverse dimension, and in depth, preserving the fundamental ambiguity of the central projection image model (loss of depth information). An iterated extended Kalman filter is used for the recursive solution. Noise levels of 5-10 percent of the object image size are used. Approximate Cramer-Rao lower bounds are derived for the model parameter estimates as a function of object trajectory and noise level. This approach may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images (10 to 20 or more) are available.

  3. Revised models and genetic parameter estimates for production and ...

    African Journals Online (AJOL)

    Genetic parameters for production and reproduction traits in the Elsenburg Dormer sheep stud were estimated using records of 11743 lambs born between 1943 and 2002. An animal model with direct and maternal additive, maternal permanent and temporary environmental effects was fitted for traits considered traits of the ...

  4. A Sparse Bayesian Learning Algorithm With Dictionary Parameter Estimation

    DEFF Research Database (Denmark)

    Hansen, Thomas Lundgaard; Badiu, Mihai Alin; Fleury, Bernard Henri

    2014-01-01

    This paper concerns sparse decomposition of a noisy signal into atoms which are specified by unknown continuous-valued parameters. An example could be estimation of the model order, frequencies and amplitudes of a superposition of complex sinusoids. The common approach is to reduce the continuous...

  5. Estimation of Physical Parameters in Linear and Nonlinear Dynamic Systems

    DEFF Research Database (Denmark)

    Knudsen, Morten

    variance and confidence ellipsoid is demonstrated. The relation is based on a new theorem on maxima of an ellipsoid. The procedure for input signal design and physical parameter estimation is tested on a number of examples, linear as well as nonlinear and simulated as well as real processes, and it appears...

  6. Parameter Estimates in Differential Equation Models for Population Growth

    Science.gov (United States)

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…

  7. Parameter extraction and estimation based on the PV panel outdoor ...

    African Journals Online (AJOL)

    The experimental data obtained are validated and compared with the estimated results obtained through simulation based on the manufacture's data sheet. The simulation is based on the Newton-Raphson iterative method in MATLAB environment. This approach aids the computation of the PV module's parameters at any ...

  8. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a lack of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, an up-to-date base of parameters of the models of generating units, containing the models of synchronous generators, is necessary. In the paper, a method is presented for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes the filter system used for filtering the noisy measurement waveforms. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  9. MPEG2 video parameter and no reference PSNR estimation

    DEFF Research Database (Denmark)

    Li, Huiying; Forchhammer, Søren

    2009-01-01

    MPEG coded video may be processed for quality assessment or postprocessed to reduce coding artifacts or transcoded. Utilizing information about the MPEG stream may be useful for these tasks. This paper deals with estimating MPEG parameter information from the decoded video stream without access t...

  10. NONLINEAR PLANT PIECEWISE-CONTINUOUS MODEL MATRIX PARAMETERS ESTIMATION

    Directory of Open Access Journals (Sweden)

    Roman L. Leibov

    2017-09-01

    Full Text Available This paper presents a technique for estimating the matrix parameters of a nonlinear plant piecewise-continuous model using nonlinear model time responses and a random search method. One application area of piecewise-continuous models is identified. The results of applying the proposed approach to form a piecewise-continuous model of an aircraft turbofan engine are presented.

  11. Estimates Of Genetic Parameters Of Body Weights Of Different ...

    African Journals Online (AJOL)

    four (44) farrowings were used to estimate the genetic parameters (heritability and repeatability) of body weight of pigs. Results obtained from the study showed that the heritability (h2) of birth and weaning weights were moderate (0.33±0.16 ...

  12. Estimation of stature from facial parameters in adult Abakaliki people ...

    African Journals Online (AJOL)

    This study is carried out in order to estimate the height of adult Igbo people of Abakaliki ethnic group in South-Eastern Nigeria from their facial Morphology. The parameters studied include Facial Length, Bizygomatic Diameter, Bigonial Diameter, Nasal Length, and Nasal Breadth. A total of 1000 subjects comprising 669 ...

  13. On Modal Parameter Estimates from Ambient Vibration Tests

    DEFF Research Database (Denmark)

    Agneni, A.; Brincker, Rune; Coppotelli, B.

    2004-01-01

    Modal parameter estimates from ambient vibration testing are turning into the preferred technique when one is interested in systems under actual loadings and operational conditions. Moreover, with this approach, expensive devices to excite the structure are not needed, since it can be adequately...

  14. Visco-piezo-elastic parameter estimation in laminated plate structures

    DEFF Research Database (Denmark)

    Araujo, A. L.; Mota Soares, C. M.; Herskovits, J.

    2009-01-01

    A parameter estimation technique is presented in this article, for identification of elastic, piezoelectric and viscoelastic properties of active laminated composite plates with surface-bonded piezoelectric patches. The inverse method presented uses experimental data in the form of a set of measu...

  15. Estimates of genetic parameters and genetic gains for growth traits ...

    African Journals Online (AJOL)

    Estimates of genetic parameters and genetic gains for growth traits of two Eucalyptus ... In South Africa, Eucalyptus urophylla is an important species due to its ... as hybrid parents to cross with E. grandis was 59.8% over the population mean.

  16. Estimation of riverbank soil erodibility parameters using genetic ...

    Indian Academy of Sciences (India)

    Tapas Karmaker

    2017-11-07

    Nov 7, 2017 ... process. Therefore, this is a study to verify the applicability of inverse parameter ... successful modelling of the riverbank erosion, precise estimation of ..... For this simulation, about 40 iterations are found to attain the convergence. ..... rithm for function optimization: a Matlab implementation. NCSU-IE TR ...

  17. estimation of shear strength parameters of lateritic soils using

    African Journals Online (AJOL)

    user

    ... a tool to estimate the. Nigerian Journal of Technology (NIJOTECH). Vol. ... modeling tools for the prediction of shear strength parameters for lateritic ... 2.2 Geotechnical Analysis of the Soils ... The back propagation learning algorithm is the most popular and ..... [10] Alsaleh, M. I., Numerical modeling for strain localization in ...

  18. Estimation of genetic parameters for carcass traits in Japanese quail ...

    African Journals Online (AJOL)

    The aim of this study was to estimate genetic parameters of some carcass characteristics in the Japanese quail. For this aim, carcass weight (Cw), breast weight (Bw), leg weight (Lw), abdominal fat weight (AFw), carcass yield (CP), breast percentage (BP), leg percentage (LP) and abdominal fat percentage (AFP) were ...

  19. Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea

    KAUST Repository

    Sawlan, Zaid A

    2012-12-01

    Tsunami concerns have increased in the world after the 2004 Indian Ocean tsunami and the 2011 Tohoku tsunami. Consequently, tsunami models have been developed rapidly in the last few years. One of the advanced tsunami models is the GeoClaw tsunami model introduced by LeVeque (2011). This model is adaptive and consistent. Because of different sources of uncertainty in the model, observations are needed to improve model prediction through a data assimilation framework. The model inputs are earthquake parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines the tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction, while the smoother estimates the earthquake parameters. This method reduces the error produced by uncertain inputs. In addition, a state-parameter EnKF is implemented to estimate the earthquake parameters. Although the number of observations is small, the estimated parameters generate a better tsunami prediction than the model alone. Methods and results of prediction experiments in the Red Sea are presented, and the prospect of developing an operational tsunami prediction system in the Red Sea is discussed.

  20. Dual ant colony operational modal analysis parameter estimation method

    Science.gov (United States)

    Sitarz, Piotr; Powałka, Bartosz

    2018-01-01

    Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain while others operate in the frequency domain. The former use correlation functions, the latter spectral density functions. However, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is solved with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.

  1. Accuracy and sensitivity analysis on seismic anisotropy parameter estimation

    Science.gov (United States)

    Yan, Fuyong; Han, De-Hua

    2018-04-01

    There is significant uncertainty in measuring Thomsen's parameter δ in the laboratory, even when the dimensions and orientations of the rock samples are known. More challenges are expected in estimating the seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model using the database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivity to the source-receiver offset, vertical interval velocity error and time picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread length. However, the method is extremely sensitive to the time picking error caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for the deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible that an isotropic formation can be misinterpreted as a strongly anisotropic formation. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.

  2. Multiple leakage localization and leak size estimation in water networks

    NARCIS (Netherlands)

    Abbasi, N.; Habibi, H.; Hurkens, C.A.J.; Klabbers, M.D.; Tijsseling, A.S.; Eijndhoven, van S.J.L.

    2012-01-01

    Water distribution networks experience considerable losses due to leakage, often at multiple locations simultaneously. Leakage detection and localization based on sensor placement and online pressure monitoring could be fast and economical. Using the difference between estimated and measured

  3. Estimation of Parameters in Mean-Reverting Stochastic Systems

    Directory of Open Access Journals (Sweden)

    Tianhai Tian

    2014-01-01

    Full Text Available Stochastic differential equations (SDEs) are a very important mathematical tool for describing complex systems in which noise plays an important role. SDE models have been widely used to study the dynamic properties of various nonlinear systems in biology, engineering, finance, and economics, as well as the physical sciences. Since an SDE can generate an unlimited number of trajectories, it is difficult to estimate model parameters from experimental observations, which may represent only one trajectory of the stochastic model. Although substantial research efforts have been made to develop effective methods, it is still a challenge to infer unknown parameters in SDE models from observations that may have large variations. Using an interest rate model as a test problem, in this work we use Bayesian inference and the Markov chain Monte Carlo method to estimate unknown parameters in SDE models.
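
    A minimal sketch of the Bayesian/MCMC idea on a mean-reverting interest rate model is given below: the transition density is approximated with an Euler-Maruyama discretization and the parameters are sampled with a random-walk Metropolis algorithm. The Vasicek-type model, priors, proposal scales and synthetic data are illustrative assumptions rather than the authors' exact setup.

```python
# Random-walk Metropolis sampling of (a, b, sigma) in dr = a*(b - r)*dt + sigma*dW,
# using an Euler-Maruyama approximation of the transition density.
import numpy as np

rng = np.random.default_rng(1)
dt, a_true, b_true, s_true = 0.01, 2.0, 0.05, 0.02

# Simulate one observed trajectory of the mean-reverting process.
r = np.empty(2000); r[0] = 0.03
for t in range(1999):
    r[t+1] = r[t] + a_true*(b_true - r[t])*dt + s_true*np.sqrt(dt)*rng.standard_normal()

def log_post(theta):
    a, b, s = theta
    if a <= 0 or s <= 0:                       # flat prior on the admissible region
        return -np.inf
    mean = r[:-1] + a*(b - r[:-1])*dt
    var = s**2 * dt
    return -0.5*np.sum((r[1:] - mean)**2/var + np.log(2*np.pi*var))

theta = np.array([1.0, 0.1, 0.05])             # initial guess
lp = log_post(theta)
samples = []
for _ in range(20000):                         # random-walk Metropolis
    prop = theta + rng.normal(0, [0.05, 0.002, 0.001])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
print("posterior mean (a, b, sigma):", np.mean(samples[5000:], axis=0))
```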

  4. Estimating Arrhenius parameters using temperature programmed molecular dynamics

    International Nuclear Information System (INIS)

    Imandi, Venkataramana; Chatterjee, Abhijit

    2016-01-01

    Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
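
    The two-step procedure described above can be sketched as follows: at each sampled temperature the rate is the maximum-likelihood estimate for exponentially distributed waiting times, and Arrhenius behaviour is then imposed by a linear fit of ln k against 1/(kB T). The synthetic waiting times and the true prefactor and barrier are illustrative assumptions.

```python
# MLE of rates from exponential waiting times per temperature, then an Arrhenius fit
# ln k = ln A - Ea/(kB*T) by linear regression.
import numpy as np

kB = 8.617e-5                       # Boltzmann constant in eV/K
A_true, Ea_true = 1e13, 0.5         # illustrative prefactor (1/s) and barrier (eV)
rng = np.random.default_rng(2)

temps = np.array([600.0, 700.0, 800.0, 900.0])
k_mle = []
for T in temps:
    k = A_true * np.exp(-Ea_true/(kB*T))
    waits = rng.exponential(1.0/k, size=800)    # ~500-1000 waiting times per temperature
    k_mle.append(len(waits)/waits.sum())         # MLE of an exponential rate

# Slope gives -Ea, intercept gives ln A.
slope, intercept = np.polyfit(1.0/(kB*temps), np.log(k_mle), 1)
print("Ea ~", -slope, "eV;  A ~", np.exp(intercept), "1/s")
```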

  5. Estimating Arrhenius parameters using temperature programmed molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Imandi, Venkataramana; Chatterjee, Abhijit, E-mail: abhijit@che.iitb.ac.in [Department of Chemical Engineering, Indian Institute of Technology Bombay, Mumbai 400076 (India)

    2016-07-21

    Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.

  6. Using Genetic Algorithm to Estimate Hydraulic Parameters of Unconfined Aquifers

    Directory of Open Access Journals (Sweden)

    Asghar Asghari Moghaddam

    2009-03-01

    Full Text Available Nowadays, optimization techniques such as Genetic Algorithms (GA) have attracted wide attention among scientists for solving complicated engineering problems. In this article, pumping test data are used to assess the efficiency of GA in estimating unconfined aquifer parameters, and a sensitivity analysis is carried out to propose an optimal arrangement of the GA. For this purpose, hydraulic parameters of three sets of pumping test data are calculated by GA and compared with the results of graphical methods. The results indicate that the GA technique is an efficient, reliable, and powerful method for estimating the hydraulic parameters of unconfined aquifers and, further, that in cases of deficient pumping test data it performs better than graphical methods.
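
    A minimal sketch of evolutionary estimation of aquifer parameters from pumping-test drawdown is given below. Since the article targets unconfined aquifers, note the simplification: the confined-aquifer Theis solution is used as a stand-in forward model, and SciPy's differential evolution plays the role of the genetic algorithm. The pumping rate, observation radius and synthetic data are illustrative assumptions.

```python
# Evolutionary-style estimation of transmissivity T and storativity S from drawdown data.
import numpy as np
from scipy.special import exp1
from scipy.optimize import differential_evolution

Q, r = 0.01, 30.0                        # pumping rate (m3/s) and observation radius (m)
t = np.logspace(1, 5, 40)                # time since pumping started (s)

def drawdown(T, S):
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)   # Theis well function W(u) = E1(u)

T_true, S_true = 5e-3, 2e-4
obs = drawdown(T_true, S_true) * (1 + 0.02 * np.random.default_rng(3).standard_normal(t.size))

def misfit(p):
    return np.sum((drawdown(*p) - obs)**2)

result = differential_evolution(misfit, bounds=[(1e-4, 1e-1), (1e-6, 1e-2)], seed=3)
print("estimated T, S:", result.x)
```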

  7. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    Directory of Open Access Journals (Sweden)

    Jonathan R Karr

    2015-05-01

    Full Text Available Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.

  8. Global parameter estimation for thermodynamic models of transcriptional regulation.

    Science.gov (United States)

    Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N

    2013-07-15

    Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription fitted to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Multiple Estimation Architecture in Discrete-Time Adaptive Mixing Control

    Directory of Open Access Journals (Sweden)

    Simone Baldi

    2013-05-01

    Full Text Available Adaptive mixing control (AMC) is a recently developed control scheme for uncertain plants, in which the control actions coming from a bank of precomputed controllers are mixed on the basis of the parameter estimates generated by an on-line parameter estimator. Even though the stability of the control scheme has been shown analytically, also in the presence of modeling errors and disturbances, its transient performance might be sensitive to the initial conditions of the parameter estimator. In particular, for some initial conditions, transient oscillations may not be acceptable in practical applications. In order to account for such a possible phenomenon and to improve the learning capability of the adaptive scheme, in this paper a new mixing architecture is developed, involving the use of parallel parameter estimators, or multi-estimators, each one working on a small subset of the uncertainty set. A supervisory logic, using performance signals based on the past and present estimation error, selects the parameter estimate that determines the mixing of the controllers. The stability and robustness properties of the resulting approach, referred to as multi-estimator adaptive mixing control (Multi-AMC), are analytically established. In addition, extensive simulations demonstrate that the scheme improves the transient performance of the original AMC with a single estimator. The control scheme and the analysis are carried out in a discrete-time framework, for easier implementation of the method in digital control.

  10. Estimating model parameters in nonautonomous chaotic systems using synchronization

    International Nuclear Information System (INIS)

    Yang, Xiaoli; Xu, Wei; Sun, Zhongkui

    2007-01-01

    In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems, and then some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are analytically derived to ensure precise evaluation of the unknown parameters and identical synchronization between the concerned experimental system and its corresponding receiver. Examples are presented, employing a parametrically excited new 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of the parameter estimation grows almost linearly with the cutoff value of the noise strength in simulation.
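
    The adaptive strategy combined with linear feedback coupling can be sketched on a driven oscillator as below: a receiver copy of an Ueda-type oscillator is coupled to the transmitted state, and a Lyapunov-style adaptive law correlates the synchronization error with the driving term to track the unknown forcing amplitude. The specific oscillator, gains and update law are illustrative assumptions in the spirit of the Letter, not its exact equations.

```python
# Adaptive synchronization sketch: the receiver tracks the transmitter and its
# parameter estimate B_hat is updated from the synchronization error.
import math

d, B_true = 0.05, 7.5             # damping and (unknown) forcing amplitude
k, gamma, dt = 5.0, 2.0, 1e-3     # coupling gain, adaptation gain, Euler time step

x1, x2 = 1.0, 0.0                 # transmitter state (position, velocity)
y1, y2 = 0.0, 0.0                 # receiver state
B_hat = 1.0                       # initial guess of the forcing amplitude

for i in range(300000):
    t = i * dt
    # Transmitter: Ueda-type oscillator  x'' + d*x' + x^3 = B*cos(t)
    dx1, dx2 = x2, -d*x2 - x1**3 + B_true*math.cos(t)
    # Receiver: same structure with estimate B_hat plus linear feedback coupling
    e1, e2 = x1 - y1, x2 - y2
    dy1, dy2 = y2 + k*e1, -d*y2 - y1**3 + B_hat*math.cos(t) + k*e2
    # Adaptive law: correlate the velocity error with the driving term cos(t)
    B_hat += dt * gamma * e2 * math.cos(t)
    x1, x2 = x1 + dt*dx1, x2 + dt*dx2
    y1, y2 = y1 + dt*dy1, y2 + dt*dy2

print("estimated forcing amplitude:", round(B_hat, 3), " true:", B_true)
```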

  11. Influence of measurement errors and estimated parameters on combustion diagnosis

    International Nuclear Information System (INIS)

    Payri, F.; Molina, S.; Martin, J.; Armas, O.

    2006-01-01

    Thermodynamic diagnosis models are valuable tools for the study of Diesel combustion. Inputs required by such models comprise measured mean and instantaneous variables, together with suitable values for adjustable parameters used in different submodels. In the case of measured variables, one may estimate the uncertainty associated with measurement errors; however, the influence of errors in model parameter estimation may not be so easily established on an experimental basis. In this paper, a simulated pressure cycle has been used along with known input parameters, so that any uncertainty in the inputs is avoided. Then, the influence of errors in measured variables and in geometric and heat transmission parameters on the results of a combustion diagnosis model for direct injection diesel engines has been studied. This procedure made it possible to establish the relative importance of these parameters and to set limits on the maximal errors of the model, accounting for both the maximal expected errors in the input parameters and the sensitivity of the model to those errors.

  12. Stable Parameter Estimation for Autoregressive Equations with Random Coefficients

    Directory of Open Access Journals (Sweden)

    V. B. Goryainov

    2014-01-01

    Full Text Available In recent years there has been a growing interest in non-linear time series models. They are more flexible than traditional linear models and allow a more adequate description of real data. Among these models, the autoregressive model with random coefficients plays an important role. It is widely used in various fields of science and technology, for example, in physics, biology, economics and finance. The model parameters are the mean values of the autoregressive coefficients. Their evaluation is the main task of model identification. The basic method of estimation is still the least squares method, which gives good results for Gaussian time series, but it is quite sensitive to even small disturbances in the assumption of Gaussian observations. In this paper we propose estimates which generalize the least squares estimate in the sense that the quadratic objective function is replaced by an arbitrary convex and even function. A reasonable choice of objective function allows one to keep the benefits of the least squares estimate and eliminate its shortcomings. In particular, the estimates can be made almost as effective as the least squares estimate in the Gaussian case, while losing almost no accuracy under small deviations of the probability distribution of the observations from the Gaussian distribution. The main result is the proof of consistency and asymptotic normality of the proposed estimates in the particular case of the one-parameter model describing a stationary process with finite variance. Another important result is the derivation of the asymptotic relative efficiency of the proposed estimates in relation to the least squares estimate. This allows one to compare the two estimates, depending on the probability distribution of the innovation process and of the autoregressive coefficients. The results can be used to identify an autoregressive process, especially of non-Gaussian nature, and/or autoregressive processes observed with gross

  13. Pedotransfer functions estimating soil hydraulic properties using different soil parameters

    DEFF Research Database (Denmark)

    Børgesen, Christen Duus; Iversen, Bo Vangsø; Jacobsen, Ole Hørbye

    2008-01-01

    Estimates of soil hydraulic properties using pedotransfer functions (PTF) are useful in many studies such as hydrochemical modelling and soil mapping. The objective of this study was to calibrate and test parametric PTFs that predict soil water retention and unsaturated hydraulic conductivity... parameters. The PTFs are based on neural networks and the Bootstrap method using different sets of predictors and predict the van Genuchten/Mualem parameters. A Danish soil data set (152 horizons) dominated by sandy and sandy loamy soils was used in the development of PTFs to predict the Mualem hydraulic... conductivity parameters. A larger data set (1618 horizons) with a broader textural range was used in the development of PTFs to predict the van Genuchten parameters. The PTFs using either three or seven textural classes combined with soil organic matter and bulk density gave the most reliable predictions...

  14. Consistent Parameter and Transfer Function Estimation using Context Free Grammars

    Science.gov (United States)

    Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten

    2017-04-01

    This contribution presents a method for the inference of transfer functions for rainfall-runoff models. Here, transfer functions are defined as parametrized (functional) relationships between a set of spatial predictors (e.g. elevation, slope or soil texture) and model parameters. They are ultimately used for the estimation of consistent, spatially distributed model parameters from a limited number of lumped global parameters. Additionally, they provide a straightforward method for parameter extrapolation from one set of basins to another and can even be used to derive parameterizations for multi-scale models [see: Samaniego et al., 2010]. Yet, actual knowledge of the transfer functions is currently often implicitly assumed. As a matter of fact, in most cases these hypothesized transfer functions can rarely be measured and often remain unknown. Therefore, this contribution presents a general method for the concurrent estimation of the structure of transfer functions and their respective (global) parameters. Note that, by consequence, an estimation of the distributed parameters of the rainfall-runoff model is also undertaken. The method combines two steps to achieve this. The first generates different possible transfer functions. The second then estimates the respective global transfer function parameters. The structural estimation of the transfer functions is based on the context free grammar concept. Chomsky first introduced context free grammars in linguistics [Chomsky, 1956]. Since then, they have been widely applied in computer science. But, to the knowledge of the authors, they have so far not been used in hydrology. Therefore, the contribution gives an introduction to context free grammars and shows how they can be constructed and used for the structural inference of transfer functions. This is enabled by new methods from evolutionary computation, such as grammatical evolution [O'Neill, 2001], which make it possible to exploit the constructed grammar as a
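
    The context-free-grammar idea can be sketched with a toy grammar that randomly derives candidate transfer functions mapping spatial predictors to a model parameter, as below. The grammar, the predictor names and the random (rather than evolutionary) derivation are illustrative assumptions; grammatical evolution would search this space systematically.

```python
# Toy context-free grammar for transfer-function expressions over spatial predictors.
import random

GRAMMAR = {
    "<expr>":  [["<expr>", "<op>", "<expr>"], ["<func>", "(", "<expr>", ")"], ["<var>"], ["<const>"]],
    "<op>":    [["+"], ["-"], ["*"]],
    "<func>":  [["exp"], ["log"]],
    "<var>":   [["slope"], ["sand"], ["elevation"]],
    "<const>": [["c0"], ["c1"]],
}

def derive(symbol="<expr>", depth=0, max_depth=4):
    """Randomly expand a non-terminal into an expression string (depth-limited)."""
    if symbol not in GRAMMAR:
        return symbol                                   # terminal symbol
    options = GRAMMAR[symbol]
    if depth >= max_depth:                              # force terminal productions near the limit
        options = [o for o in options if "<expr>" not in o] or options
    return "".join(derive(s, depth + 1, max_depth) for s in random.choice(options))

random.seed(4)
for _ in range(3):
    print("candidate transfer function:", derive())
```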

  15. METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS

    Directory of Open Access Journals (Sweden)

    V. Panteleev Andrei

    2017-01-01

    Full Text Available The article considers the use of metaheuristic methods for constrained global optimization, namely “Big Bang - Big Crunch”, “Fireworks Algorithm” and “Grenade Explosion Method”, for parameter estimation of dynamic systems described by algebraic-differential equations. Parameter estimation is based on observations of the mathematical model behaviour. The parameter values are derived by minimizing a criterion that describes the total squared deviation of the state vector coordinates from the values observed at different points in time. Parallelepiped (box) constraints are imposed on the parameter values. The metaheuristic methods of constrained global optimization used for solving these problems do not guarantee the result, but they allow a solution of rather good quality to be obtained in an acceptable amount of time. An algorithm for applying the metaheuristic methods is given. Alongside the obvious methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving the ordinary differential equation systems. Two examples of the parameter estimation problem are given, differing in their mathematical models. In the first example, a linear mathematical model describes the change of chemical reaction parameters, and in the second, a nonlinear mathematical model describes predator-prey dynamics, characterizing the changes in both populations. For each example, computational results from all three optimization methods are presented, together with recommendations on how to choose the method parameters. The obtained numerical results demonstrate the efficiency of the proposed approach. The estimated parameter values differ only slightly from the best known solutions, which were obtained by other means. To refine the results one should apply hybrid schemes that combine classical methods of optimization of zero, first and second orders and

  16. Circuit realization, chaos synchronization and estimation of parameters of a hyperchaotic system with unknown parameters

    Directory of Open Access Journals (Sweden)

    A. Elsonbaty

    2014-10-01

    Full Text Available In this article, the adaptive chaos synchronization technique is implemented by an electronic circuit and applied to the hyperchaotic system proposed by Chen et al. We consider the more realistic and practical case where all the parameters of the master system are unknown. We propose and implement an electronic circuit that performs the estimation of the unknown parameters and the updating of the parameters of the slave system automatically, and hence achieves synchronization. To the best of our knowledge, this is the first attempt to implement a circuit that estimates the values of the unknown parameters of a chaotic system and achieves synchronization. The proposed circuit has a variety of suitable real applications related to chaos encryption and cryptography. The outputs of the implemented circuits and numerical simulation results are shown to illustrate the performance of the synchronized system and the proposed circuit.

  17. Parameter estimation in nonlinear models for pesticide degradation

    International Nuclear Information System (INIS)

    Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.

    1991-01-01

    A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems has become feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data basis for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures that derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, the wide use of linear regression has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear, and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of capturing the dynamics of a process. Therefore, realistic models involve the evolution in time (and space). This leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
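
    As a minimal illustration of nonlinear parameter estimation in this setting, the sketch below fits a simple first-order decay model C(t) = C0 exp(-k t) to synthetic residue data with SciPy's curve_fit. The data, model choice and reported DT50 are illustrative assumptions; the paper treats more general dynamic models formulated as differential equations.

```python
# Nonlinear least-squares fit of a first-order pesticide degradation model.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, C0, k):
    return C0 * np.exp(-k * t)

t = np.array([0, 3, 7, 14, 21, 35, 56], dtype=float)       # days after application
rng = np.random.default_rng(5)
obs = first_order(t, 1.0, 0.08) * (1 + 0.05 * rng.standard_normal(t.size))

popt, pcov = curve_fit(first_order, t, obs, p0=[1.0, 0.1])
perr = np.sqrt(np.diag(pcov))                               # asymptotic standard errors
print("C0 = %.3f +/- %.3f, k = %.3f +/- %.3f 1/day" % (popt[0], perr[0], popt[1], perr[1]))
print("DT50 = %.1f days" % (np.log(2) / popt[1]))
```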

  18. Estimation of common cause failure parameters with periodic tests

    Energy Technology Data Exchange (ETDEWEB)

    Barros, Anne [Institut Charles Delaunay - Universite de technologie de Troyes - FRE CNRS 2848, 12, rue Marie Curie - BP 2060 -10010 Troyes cedex (France)], E-mail: anne.barros@utt.fr; Grall, Antoine [Institut Charles Delaunay - Universite de technologie de Troyes - FRE CNRS 2848, 12, rue Marie Curie - BP 2060 -10010 Troyes cedex (France); Vasseur, Dominique [Electricite de France, EDF R and D - Industrial Risk Management Department 1, av. du General de Gaulle- 92141 Clamart (France)

    2009-04-15

    In the specific case of safety systems, CCF parameter estimators for standby components depend on the periodic test schemes. Classically, the testing schemes are either staggered (alternation of tests on redundant components) or non-staggered (all components are tested at the same time). In reality, periodic test schemes performed on safety components are more complex and combine staggered tests, when the plant is in operation, with non-staggered tests during maintenance and refueling outage periods of the installation. Moreover, the CCF parameter estimators described in the US literature are derived in a way consistent with US Technical Specifications constraints that do not apply to the French Nuclear Power Plants for staggered tests on standby components. Given these issues, the evaluation of CCF parameters from the operating feedback data available within EDF implies the development of methodologies that integrate the specificities of the testing schemes. This paper aims to formally propose a solution for the estimation of CCF parameters given two distinct difficulties, respectively related to a mixed testing scheme and to consistency with EDF's specific practices inducing systematic non-simultaneity of the observed failures in a staggered testing scheme.

  19. Nonlinear Parameter Estimation in Microbiological Degradation Systems and Statistic Test for Common Estimation

    DEFF Research Database (Denmark)

    Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik

    1995-01-01

    Three identical microbiological experiments were carried out and analysed in order to examine the variability of the parameter estimates. The microbiological system consisted of a substrate (toluene) and a biomass (pure culture) mixed together in an aquifer medium. The degradation of the substrate... and the growth of the biomass are described by the Monod model, consisting of two nonlinear coupled first-order differential equations. The objective of this study was to estimate the kinetic parameters in the Monod model and to test whether the parameters from the three identical experiments have the same values.... Estimation of the parameters was obtained using an iterative maximum likelihood method, and the test used was an approximate likelihood ratio test. The test showed that the three sets of parameters were identical only at a 4% significance level.
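
    A minimal sketch of fitting the two coupled Monod equations to substrate and biomass data is given below, using an ODE solver inside a nonlinear least-squares loop. The least-squares misfit stands in for the iterative maximum likelihood method of the study, and all synthetic values are illustrative assumptions.

```python
# Estimate Monod parameters (mu_max, Ks, yield Y) from coupled substrate/biomass data.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def monod_rhs(t, y, mu_max, Ks, Y):
    S, X = y
    mu = mu_max * S / (Ks + S)
    return [-mu * X / Y, mu * X]            # substrate consumption, biomass growth

def simulate(theta, t_eval, y0=(5.0, 0.1)):
    sol = solve_ivp(monod_rhs, (0, t_eval[-1]), y0, t_eval=t_eval, args=tuple(theta))
    return sol.y                             # shape (2, n_times): [S(t); X(t)]

t_obs = np.linspace(0, 30, 16)
true = (0.4, 1.0, 0.5)                       # mu_max (1/h), Ks (mg/L), Y (-)
rng = np.random.default_rng(6)
data = simulate(true, t_obs) + 0.05 * rng.standard_normal((2, t_obs.size))

res = least_squares(lambda th: (simulate(th, t_obs) - data).ravel(),
                    x0=[0.2, 0.5, 0.3], bounds=([0.01, 0.01, 0.01], [2.0, 10.0, 1.0]))
print("estimated mu_max, Ks, Y:", res.x)
```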

  20. On the identification of multiple space dependent ionic parameters in cardiac electrophysiology modelling

    Science.gov (United States)

    Abidi, Yassine; Bellassoued, Mourad; Mahjoub, Moncef; Zemzemi, Nejib

    2018-03-01

    In this paper, we consider the inverse problem of identifying multiple space-dependent ionic parameters in cardiac electrophysiology modelling from a set of observations. We use the monodomain system, known as a state-of-the-art model in cardiac electrophysiology, and we consider a general Hodgkin-Huxley formalism to describe the ionic exchanges at the microscopic level. This formalism covers many physiological transmembrane potential models, including those in cardiac electrophysiology. Our main result is the proof of uniqueness and a Lipschitz stability estimate of the ion channel conductance parameters based on observations on an arbitrary subdomain. The key idea is a Carleman estimate for a parabolic operator with multiple coefficients and an ordinary differential equation system.

  1. PWR system simulation and parameter estimation with neural networks

    International Nuclear Information System (INIS)

    Akkurt, Hatice; Colak, Uener

    2002-01-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving the nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting the individual components to each other. The validity of the models for individual components and for the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in the training and prediction stages of the neural networks. Reactor power and system reactivity during the transient events have been predicted by the neural network. Results show that the neural network estimates are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected.

  2. PWR system simulation and parameter estimation with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Akkurt, Hatice; Colak, Uener E-mail: uc@nuke.hacettepe.edu.tr

    2002-11-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving the nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting the individual components to each other. The validity of the models for individual components and for the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in the training and prediction stages of the neural networks. Reactor power and system reactivity during the transient events have been predicted by the neural network. Results show that the neural network estimates are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected.

  3. Tracking of nuclear reactor parameters via recursive non linear estimation

    International Nuclear Information System (INIS)

    Pages Fita, J.; Alengrin, G.; Aguilar Martin, J.; Zwingelstein, M.

    1975-01-01

    The usefulness of nonlinear estimation in the supervision of nuclear reactors is illustrated, both for reactivity determination and for on-line modelling aimed at detecting possible unwanted changes in operating behaviour. Reactivity estimation is addressed using an a priori dynamical model under the hypothesis of one group of delayed neutrons (measurements were made with an ionisation chamber). The determination of the reactivity from such measurements appears as a nonlinear estimation procedure derived from a particular form of nonlinear filter. With the power demand and internal temperature as observed inputs and the reactivity balance as output, a recursive algorithm is derived for the estimation of the parameters that define the actual behaviour of the reactor. An example of the treatment of real data is given [fr]

  4. A Particle Smoother with Sequential Importance Resampling for soil hydraulic parameter estimation: A lysimeter experiment

    Science.gov (United States)

    Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry

    2013-04-01

    An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies have shown that data assimilation can reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings are recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with an SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real-world application, the experiment is conducted in a lysimeter environment.
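
    The core SIR mechanics behind the proposed Particle Smoother can be sketched as below: particle weights are accumulated from the observation likelihood over a time window and then state and parameter particles are systematically resampled, instead of resampling at every single time step. The toy drainage model, window length and noise levels are illustrative assumptions, not the lysimeter setup.

```python
# SIR particle weighting with windowed (smoother-style) resampling of state and parameter.
import numpy as np

rng = np.random.default_rng(7)
N = 500
theta = rng.uniform(0.1, 1.0, N)              # hydraulic parameter per particle
state = np.full(N, 0.30)                      # soil moisture per particle
log_w = np.zeros(N)

def systematic_resample(weights):
    positions = (rng.random() + np.arange(N)) / N
    return np.searchsorted(np.cumsum(weights), positions)

window = 5
for step in range(20):
    # Toy drainage model with process noise; the true parameter is 0.6.
    state = state - 0.01 * theta * state + 0.001 * rng.standard_normal(N)
    obs = 0.30 * np.exp(-0.01 * 0.6 * (step + 1)) + 0.003 * rng.standard_normal()
    log_w += -0.5 * ((obs - state) / 0.003) ** 2      # accumulate log-likelihoods over the window
    if (step + 1) % window == 0:
        w = np.exp(log_w - log_w.max()); w /= w.sum()
        idx = systematic_resample(w)
        state, theta, log_w = state[idx], theta[idx], np.zeros(N)

print("posterior mean of parameter:", theta.mean(), " (true value 0.6)")
```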

  5. Parameter Estimation as a Problem in Statistical Thermodynamics.

    Science.gov (United States)

    Earle, Keith A; Schneider, David J

    2011-03-14

    In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one-particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and that the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation, and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.

  6. Genetic Parameter Estimates for Metabolizing Two Common Pharmaceuticals in Swine

    Directory of Open Access Journals (Sweden)

    Jeremy T. Howard

    2018-02-01

    Full Text Available In livestock, the regulation of drugs used to treat livestock has received increased attention and it is currently unknown how much of the phenotypic variation in drug metabolism is due to the genetics of an animal. Therefore, the objective of the study was to determine the amount of phenotypic variation in fenbendazole and flunixin meglumine drug metabolism due to genetics. The population consisted of crossbred female and castrated male nursery pigs (n = 198) that were sired by boars represented by four breeds. The animals were spread across nine batches. Drugs were administered intravenously and blood collected a minimum of 10 times over a 48 h period. Genetic parameters for the parent drug and metabolite concentration within each drug were estimated based on pharmacokinetics (PK) parameters or concentrations across time utilizing a random regression model. The PK parameters were estimated using a non-compartmental analysis. The PK model included fixed effects of sex and breed of sire along with random sire and batch effects. The random regression model utilized Legendre polynomials and included a fixed population concentration curve, sex, and breed of sire effects along with a random sire deviation from the population curve and batch effect. The sire effect included the intercept for all models except for the fenbendazole metabolite (i.e., intercept and slope). The mean heritability across PK parameters for the fenbendazole and flunixin meglumine parent drug (metabolite) was 0.15 (0.18) and 0.31 (0.40), respectively. For the parent drug (metabolite), the mean heritability across time was 0.27 (0.60) and 0.14 (0.44) for fenbendazole and flunixin meglumine, respectively. The errors surrounding the heritability estimates for the random regression model were smaller compared to estimates obtained from PK parameters. Across both the PK and plasma drug concentration across model, a moderate heritability was estimated. The model that utilized the plasma drug

  7. Genetic Parameter Estimates for Metabolizing Two Common Pharmaceuticals in Swine

    Science.gov (United States)

    Howard, Jeremy T.; Ashwell, Melissa S.; Baynes, Ronald E.; Brooks, James D.; Yeatts, James L.; Maltecca, Christian

    2018-01-01

    In livestock, the regulation of drugs used to treat livestock has received increased attention and it is currently unknown how much of the phenotypic variation in drug metabolism is due to the genetics of an animal. Therefore, the objective of the study was to determine the amount of phenotypic variation in fenbendazole and flunixin meglumine drug metabolism due to genetics. The population consisted of crossbred female and castrated male nursery pigs (n = 198) that were sired by boars represented by four breeds. The animals were spread across nine batches. Drugs were administered intravenously and blood collected a minimum of 10 times over a 48 h period. Genetic parameters for the parent drug and metabolite concentration within each drug were estimated based on pharmacokinetics (PK) parameters or concentrations across time utilizing a random regression model. The PK parameters were estimated using a non-compartmental analysis. The PK model included fixed effects of sex and breed of sire along with random sire and batch effects. The random regression model utilized Legendre polynomials and included a fixed population concentration curve, sex, and breed of sire effects along with a random sire deviation from the population curve and batch effect. The sire effect included the intercept for all models except for the fenbendazole metabolite (i.e., intercept and slope). The mean heritability across PK parameters for the fenbendazole and flunixin meglumine parent drug (metabolite) was 0.15 (0.18) and 0.31 (0.40), respectively. For the parent drug (metabolite), the mean heritability across time was 0.27 (0.60) and 0.14 (0.44) for fenbendazole and flunixin meglumine, respectively. The errors surrounding the heritability estimates for the random regression model were smaller compared to estimates obtained from PK parameters. Across both the PK and plasma drug concentration across model, a moderate heritability was estimated. The model that utilized the plasma drug

  8. ESTIMATION OF DISTANCES TO STARS WITH STELLAR PARAMETERS FROM LAMOST

    Energy Technology Data Exchange (ETDEWEB)

    Carlin, Jeffrey L.; Newberg, Heidi Jo [Department of Physics, Applied Physics and Astronomy, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States); Liu, Chao; Deng, Licai; Li, Guangwei; Luo, A-Li; Wu, Yue; Yang, Ming; Zhang, Haotong [Key Lab of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China); Beers, Timothy C. [Department of Physics and JINA: Joint Institute for Nuclear Astrophysics, University of Notre Dame, 225 Nieuwland Science Hall, Notre Dame, IN 46556 (United States); Chen, Li; Hou, Jinliang; Smith, Martin C. [Shanghai Astronomical Observatory, 80 Nandan Road, Shanghai 200030 (China); Guhathakurta, Puragra [UCO/Lick Observatory, Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Hou, Yonghui [Nanjing Institute of Astronomical Optics and Technology, National Astronomical Observatories, Chinese Academy of Sciences, Nanjing 210042 (China); Lépine, Sébastien [Department of Physics and Astronomy, Georgia State University, 25 Park Place, Suite 605, Atlanta, GA 30303 (United States); Yanny, Brian [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Zheng, Zheng, E-mail: jeffreylcarlin@gmail.com [Department of Physics and Astronomy, University of Utah, UT 84112 (United States)

    2015-07-15

    We present a method to estimate distances to stars with spectroscopically derived stellar parameters. The technique is a Bayesian approach with likelihood estimated via comparison of measured parameters to a grid of stellar isochrones, and returns a posterior probability density function for each star’s absolute magnitude. This technique is tailored specifically to data from the Large Sky Area Multi-object Fiber Spectroscopic Telescope (LAMOST) survey. Because LAMOST obtains roughly 3000 stellar spectra simultaneously within each ∼5° diameter “plate” that is observed, we can use the stellar parameters of the observed stars to account for the stellar luminosity function and target selection effects. This removes biasing assumptions about the underlying populations, both due to predictions of the luminosity function from stellar evolution modeling, and from Galactic models of stellar populations along each line of sight. Using calibration data of stars with known distances and stellar parameters, we show that our method recovers distances for most stars within ∼20%, but with some systematic overestimation of distances to halo giants. We apply our code to the LAMOST database, and show that the current precision of LAMOST stellar parameters permits measurements of distances with ∼40% error bars. This precision should improve as the LAMOST data pipelines continue to be refined.

  9. Bayesian Parameter Estimation via Filtering and Functional Approximations

    KAUST Repository

    Matthies, Hermann G.

    2016-11-25

    The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This is a filter acting on random variables, and while its Monte Carlo variant --- the Ensemble Kalman Filter (EnKF) --- is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.

  10. Bayesian Parameter Estimation via Filtering and Functional Approximations

    KAUST Repository

    Matthies, Hermann G.; Litvinenko, Alexander; Rosic, Bojana V.; Zander, Elmar

    2016-01-01

    The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This is a filter acting on random variables, and while its Monte Carlo variant --- the Ensemble Kalman Filter (EnKF) --- is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.

  11. Parameter and state estimation in nonlinear dynamical systems

    Science.gov (United States)

    Creveling, Daniel R.

    This thesis is concerned with the problem of state and parameter estimation in nonlinear systems. The need to evaluate unknown parameters in models of nonlinear physical, biophysical and engineering systems occurs throughout the development of phenomenological or reduced models of dynamics. When verifying and validating these models, it is important to incorporate information from observations in an efficient manner. Using the idea of synchronization of nonlinear dynamical systems, this thesis develops a framework for presenting data to a candidate model of a physical process in a way that makes efficient use of the measured data while allowing for estimation of the unknown parameters in the model. The approach presented here builds on existing work that uses synchronization as a tool for parameter estimation. Some critical issues of stability in that work are addressed and a practical framework is developed for overcoming these difficulties. The central issue is the choice of coupling strength between the model and data. If the coupling is too strong, the model will reproduce the measured data regardless of the adequacy of the model or correctness of the parameters. If the coupling is too weak, nonlinearities in the dynamics could lead to complex dynamics rendering any cost function comparing the model to the data inadequate for the determination of model parameters. Two methods are introduced which seek to balance the need for coupling with the desire to allow the model to evolve in its natural manner without coupling. One method, 'balanced' synchronization, adds to the synchronization cost function a requirement that the conditional Lyapunov exponents of the model system, conditioned on being driven by the data, remain negative but small in magnitude. Another method allows the coupling between the data and the model to vary in time according to a specific form of differential equation. The coupling dynamics is damped to allow for a tendency toward zero coupling

  12. Estimation of Medium Voltage Cable Parameters for PD Detection

    DEFF Research Database (Denmark)

    Villefrance, Rasmus; Holbøll, Joachim T.; Henriksen, Mogens

    1998-01-01

    Medium voltage cable characteristics have been determined with respect to the parameters having influence on the evaluation of results from PD-measurements on paper/oil and XLPE-cables. In particular, parameters essential for discharge quantification and location were measured. In order to relate...... and phase constants. A method to estimate this propagation constant, based on high frequency measurements, will be presented and will be applied to different cable types under different conditions. The influence of temperature and test voltage was investigated. The relevance of the results for cable...

  13. Estimating parameters for probabilistic linkage of privacy-preserved datasets.

    Science.gov (United States)

    Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H

    2017-07-10

    Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20% error. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher
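
    The expectation-maximisation step used to estimate match (m) and non-match (u) probabilities from binary field-agreement patterns can be sketched as below, in the Fellegi-Sunter spirit described in the paper. In the privacy-preserved case the agreement indicators would come from Bloom filter similarity thresholds; here the agreement vectors are synthetic and all values are illustrative assumptions.

```python
# EM estimation of field-level m/u probabilities and match prevalence from agreement vectors.
import numpy as np

rng = np.random.default_rng(8)
n_pairs, n_fields = 20000, 4
true_m = np.array([0.95, 0.90, 0.85, 0.80])
true_u = np.array([0.05, 0.10, 0.02, 0.20])
prevalence = 0.02

is_match = rng.random(n_pairs) < prevalence
probs = np.where(is_match[:, None], true_m, true_u)
gamma = (rng.random((n_pairs, n_fields)) < probs).astype(float)    # binary agreement vectors

m, u, p = np.full(n_fields, 0.8), np.full(n_fields, 0.2), 0.1      # initial guesses
for _ in range(50):
    # E-step: posterior probability that each record pair is a true match
    lm = np.prod(m**gamma * (1 - m)**(1 - gamma), axis=1)
    lu = np.prod(u**gamma * (1 - u)**(1 - gamma), axis=1)
    g = p * lm / (p * lm + (1 - p) * lu)
    # M-step: reweighted estimates of m, u and the match prevalence
    m = (g[:, None] * gamma).sum(axis=0) / g.sum()
    u = ((1 - g)[:, None] * gamma).sum(axis=0) / (1 - g).sum()
    p = g.mean()

print("estimated m:", m.round(3), " estimated u:", u.round(3), " prevalence:", round(p, 4))
```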

  14. Estimation of economic parameters of U.S. hydropower resources

    Energy Technology Data Exchange (ETDEWEB)

    Hall, Douglas G. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Hunt, Richard T. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Reeves, Kelly S. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Carroll, Greg R. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL)

    2003-06-01

    Tools for estimating the cost of developing, operating, and maintaining hydropower resources, in the form of regression curves, were developed based on historical plant data. Development costs that were addressed included licensing, construction, and five types of environmental mitigation. It was found that the data for each type of cost correlated well with plant capacity. A tool for estimating the annual and monthly electric generation of hydropower resources was also developed. Additional tools were developed to estimate the cost of upgrading a turbine or a generator. The development and operation and maintenance cost estimating tools, and the generation estimating tool, were applied to 2,155 U.S. hydropower sites representing a total potential capacity of 43,036 MW. The sites included totally undeveloped sites, dams without a hydroelectric plant, and hydroelectric plants that could be expanded to achieve greater capacity. Site characteristics and estimated costs and generation for each site were assembled in a database in Excel format that is also included within the EERE Library under the title, “Estimation of Economic Parameters of U.S. Hydropower Resources - INL Hydropower Resource Economics Database.”

  15. Multiplication factor versus regression analysis in stature estimation from hand and foot dimensions.

    Science.gov (United States)

    Krishan, Kewal; Kanchan, Tanuj; Sharma, Abhilasha

    2012-05-01

    Estimation of stature is an important parameter in the identification of human remains in forensic examinations. The present study aims to compare the reliability and accuracy of stature estimation and to demonstrate the variability between estimated stature and actual stature using the multiplication factor and regression analysis methods. The study is based on a sample of 246 subjects (123 males and 123 females) from North India aged between 17 and 20 years. Four anthropometric measurements: hand length, hand breadth, foot length and foot breadth, taken on the left side of each subject, were included in the study. Stature was measured using standard anthropometric techniques. Multiplication factors were calculated and linear regression models were derived for estimation of stature from hand and foot dimensions. The derived multiplication factors and regression formulae were applied to the hand and foot measurements in the study sample. The estimated stature from the multiplication factors and from regression analysis was compared with the actual stature to find the error in estimated stature. The results indicate that the range of error in estimation of stature from the regression analysis method is smaller than that of the multiplication factor method, confirming that regression analysis is better than multiplication factor analysis for stature estimation. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
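
    The comparison between the two methods can be sketched on synthetic data as below: a multiplication factor (mean stature divided by mean hand length) versus an ordinary least-squares regression of stature on hand length, with the mean absolute error of each. The simulated measurements are illustrative assumptions, not the study's sample.

```python
# Multiplication factor versus linear regression for stature estimation from hand length.
import numpy as np

rng = np.random.default_rng(9)
hand = rng.normal(18.5, 1.0, 123)                       # hand length, cm
stature = 60.0 + 5.8 * hand + rng.normal(0, 4.0, 123)   # stature, cm

mf = stature.mean() / hand.mean()                       # multiplication factor
est_mf = mf * hand

slope, intercept = np.polyfit(hand, stature, 1)         # ordinary least squares
est_reg = intercept + slope * hand

print("mean absolute error, multiplication factor: %.2f cm" % np.abs(est_mf - stature).mean())
print("mean absolute error, regression:            %.2f cm" % np.abs(est_reg - stature).mean())
```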

  16. Estimation of apparent kinetic parameters of polymer pyrolysis with complex thermal degradation behavior

    International Nuclear Information System (INIS)

    Srimachai, Taranee; Anantawaraskul, Siripon

    2010-01-01

    Full text: Thermal degradation behavior during polymer pyrolysis can typically be described using three apparent kinetic parameters (i.e., pre-exponential factor, activation energy, and reaction order). Several efficient techniques have been developed to estimate these apparent kinetic parameters for simple thermal degradation behavior (i.e., a single apparent pyrolysis reaction). Unfortunately, these techniques cannot be directly extended to the case of polymer pyrolysis with complex thermal degradation behavior (i.e., multiple concurrent reactions forming single or multiple DTG peaks). In this work, we propose a deconvolution method to determine the number of apparent reactions and to estimate the three apparent kinetic parameters and the contribution of each reaction for polymer pyrolysis with complex thermal degradation behavior. The proposed technique was validated with model and experimental pyrolysis data of several polymer blends with known compositions. The results showed that (1) the number of reactions and (2) the three apparent kinetic parameters and the contribution of each reaction can be estimated reasonably well. The simulated DTG curves with estimated parameters also agree well with the experimental DTG curves. (author)

  17. Probabilistic estimation of the constitutive parameters of polymers

    Directory of Open Access Journals (Sweden)

    Siviour C.R.

    2012-08-01

    Full Text Available The Mulliken-Boyce constitutive model predicts the dynamic response of crystalline polymers as a function of strain rate and temperature. This paper describes the Mulliken-Boyce model-based estimation of the constitutive parameters in a Bayesian probabilistic framework. Experimental data from dynamic mechanical analysis and dynamic compression of PVC samples over a wide range of strain rates are analyzed. Both experimental uncertainty and natural variations in the material properties are simultaneously considered as independent and joint distributions; the posterior probability distributions are shown and compared with prior estimates of the material constitutive parameters. Additionally, particular statistical distributions are shown to be effective at capturing the rate and temperature dependence of internal phase transitions in DMA data.

  18. PARAMETER ESTIMATION OF THE HYBRID CENSORED LOMAX DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Samir Kamel Ashour

    2010-12-01

    Full Text Available Survival analysis is used in various fields for analyzing data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time-to-event analysis. One of the difficulties which arise in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be exactly measured but partial information is available. Different circumstances can produce different types of censoring. The two most common censoring schemes used in life testing experiments are the Type-I and Type-II censoring schemes. The hybrid censoring scheme is a mixture of the Type-I and Type-II censoring schemes. In this paper we consider the estimation of the parameters of the Lomax distribution based on hybrid censored data. The parameters are estimated by maximum likelihood and Bayesian methods. The Fisher information matrix has been obtained, and it can be used for constructing asymptotic confidence intervals.
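
    For orientation, the sketch below shows maximum likelihood estimation of the Lomax parameters on complete (uncensored) synthetic data; under hybrid censoring the log-likelihood would gain additional terms for the censored units. The parameter values and the use of scipy are assumptions for illustration, not the paper's derivation.

    ```python
    # Sketch: maximum likelihood estimation of Lomax parameters from complete
    # (uncensored) data; hybrid censoring would add terms for censored units.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import lomax

    alpha_true, lam_true = 2.5, 1.5
    x = lomax.rvs(c=alpha_true, scale=lam_true, size=500, random_state=1)

    def neg_loglik(theta):
        a, lam = np.exp(theta)          # enforce positivity via log-parameters
        return -(len(x) * (np.log(a) - np.log(lam))
                 - (a + 1) * np.sum(np.log1p(x / lam)))

    res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    print("MLE (alpha, lambda):", np.exp(res.x))
    ```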

  19. A Bayesian framework for parameter estimation in dynamical models.

    Directory of Open Access Journals (Sweden)

    Flávio Codeço Coelho

    Full Text Available Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
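
    To make the model-calibration step concrete, the sketch below fits the transmission and recovery rates of a toy SIR model to synthetic weekly incidence by least squares. It is only a minimal stand-in for the Bayesian framework described above; the population size, initial conditions, and noise model are assumptions.

    ```python
    # Sketch: least-squares calibration of a simple SIR model to weekly incidence.
    # The paper uses a full Bayesian framework; this only illustrates the
    # model-fitting step on synthetic data (all values below are assumptions).
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import curve_fit

    N = 1e6                       # population size (assumed)
    t = np.arange(0, 30)          # weeks

    def sir(y, t, beta, gamma):
        S, I, R = y
        return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

    def incidence(t, beta, gamma):
        y = odeint(sir, [N - 10, 10, 0], t, args=(beta, gamma))
        return beta * y[:, 0] * y[:, 1] / N      # approximate new cases per week

    obs = incidence(t, 1.6, 0.9) * np.random.lognormal(0, 0.1, t.size)
    (beta_hat, gamma_hat), _ = curve_fit(incidence, t, obs, p0=[1.0, 0.5])
    print("beta=%.2f gamma=%.2f R0=%.2f" % (beta_hat, gamma_hat, beta_hat / gamma_hat))
    ```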

  20. Estimating parameters of chaotic systems synchronized by external driving signal

    International Nuclear Information System (INIS)

    Wu Xiaogang; Wang Zuxi

    2007-01-01

    Noise-induced synchronization (NIS) has attracted great research interest recently. Two uncoupled identical chaotic systems can achieve complete synchronization (CS) by feeding in a common noise with appropriate intensity. NIS in fact belongs to the category of external feedback control (EFC). The significance of applying EFC in secure communication lies in the fact that the trajectory of the chaotic system is disturbed so strongly by the external driving signal that phase-space reconstruction attacks fail. In this paper, however, we propose an approach that can accurately estimate the parameters of chaotic systems synchronized by an external driving signal from the chaotic transmitted signal, the driving signal, and their derivatives. Numerical simulation indicates that this approach can estimate the system parameters and the external coupling strength under two driving modes very rapidly, which implies that EFC is not superior to other methods in secure communication.

  1. On Using Exponential Parameter Estimators with an Adaptive Controller

    Science.gov (United States)

    Patre, Parag; Joshi, Suresh M.

    2011-01-01

    Typical adaptive controllers are restricted to using a specific update law to generate parameter estimates. This paper investigates the possibility of using any exponential parameter estimator with an adaptive controller such that the system tracks a desired trajectory. The goal is to provide flexibility in choosing any update law suitable for a given application. The development relies on a previously developed concept of controller/update law modularity in the adaptive control literature, and the use of a converse Lyapunov-like theorem. Stability analysis is presented to derive gain conditions under which this is possible, and inferences are made about the tracking error performance. The development is based on a class of Euler-Lagrange systems that are used to model various engineering systems including space robots and manipulators.

  2. Basic Earth's Parameters as estimated from VLBI observations

    Directory of Open Access Journals (Sweden)

    Ping Zhu

    2017-11-01

    Full Text Available Global Very Long Baseline Interferometry (VLBI) observations for measuring the Earth's rotation parameters were launched around the 1970s. Since then, the precision of the measurements has been continuously improving by taking into account various instrumental and environmental effects. The MHB2000 nutation model, introduced in 2002, is constructed from a revised nutation series derived from 20 years of VLBI observations (1980–1999). In this work, we first estimated the amplitudes of all nutation terms from the IERS-EOP-C04 VLBI global solutions w.r.t. IAU1980, and then inferred the BEPs (Basic Earth's Parameters) by fitting the major nutation terms. Meanwhile, the BEPs were obtained from the same nutation time series using a BI (Bayesian Inversion). The corrections to the precession rate and the estimated BEPs are in agreement, independent of which method was applied.

  3. Estimation of parameters of interior permanent magnet synchronous motors

    International Nuclear Information System (INIS)

    Hwang, C.C.; Chang, S.M.; Pan, C.T.; Chang, T.Y.

    2002-01-01

    This paper presents a magnetic circuit model for the estimation of the machine parameters of an interior permanent magnet synchronous machine. It extends the earlier work of Hwang and Cho that focused mainly on the magnetic aspects of motor design. The proposed model is used to calculate the EMF and the d- and q-axis reactances. These calculations are compared with those from finite element analysis and measurement, showing good agreement.

  4. Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model

    DEFF Research Database (Denmark)

    Åberg, Andreas; Widd, Anders; Abildskov, Jens

    2016-01-01

    be used directly for accurate full-scale transient simulations. The model was validated against full-scale data with an engine following the European Transient Cycle. The validation showed that the predictive capability for nitrogen oxides (NOx) was satisfactory. After re-estimation of the adsorption...... and desorption parameters with full-scale transient data, the fit for both NOx and NH3-slip was satisfactory....

  5. Fundamental limits of radio interferometers: calibration and source parameter estimation

    OpenAIRE

    Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J.

    2012-01-01

    We use information theory to derive fundamental limits on the capacity to calibrate next-generation radio interferometers, and measure parameters of point sources for instrument calibration, point source subtraction, and data deconvolution. We demonstrate the implications of these fundamental limits, with particular reference to estimation of the 21cm Epoch of Reionization power spectrum with next-generation low-frequency instruments (e.g., the Murchison Widefield Array -- MWA, Precision Arra...

  6. Robust estimation of track parameters in wire chambers

    International Nuclear Information System (INIS)

    Bogdanova, N.B.; Bourilkov, D.T.

    1988-01-01

    The aim of this paper is to compare numerically the capabilities of the least squares fit (LSF) and robust methods, for modelled and real track data, in determining the linear regression parameters of charged particles in wire chambers. It is shown that the Tukey robust estimate is superior to the more standard LSF variants. The efficiency of the method is illustrated by tables and figures for some important physical characteristics.
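
    The comparison above can be reproduced in miniature with off-the-shelf tools. The sketch below (synthetic track data, not the chamber data of the paper) fits a straight line by ordinary least squares and by a Tukey biweight robust regression using statsmodels.

    ```python
    # Sketch: least-squares vs. Tukey biweight robust fit of a straight track
    # with outlier hits (synthetic data; not the chamber data used in the paper).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    z = np.linspace(0, 1, 40)                      # wire plane positions
    y = 0.3 + 1.2 * z + rng.normal(0, 0.01, 40)    # true track + resolution
    y[::10] += 0.2                                 # spurious hits (outliers)

    X = sm.add_constant(z)
    ols = sm.OLS(y, X).fit()
    rlm = sm.RLM(y, X, M=sm.robust.norms.TukeyBiweight()).fit()
    print("LSF slope:   %.3f" % ols.params[1])
    print("Tukey slope: %.3f" % rlm.params[1])
    ```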

  7. Factorized Estimation of Partially Shared Parameters in Diffusion Networks

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    2017-01-01

    Roč. 65, č. 19 (2017), s. 5153-5163 ISSN 1053-587X R&D Projects: GA ČR(CZ) GP14-06678P; GA ČR GA16-09848S Institutional support: RVO:67985556 Keywords : Diffusion network * Diffusion estimation * Heterogeneous parameters * Multitask networks Subject RIV: BD - Theory of Information OBOR OECD: Applied mathematics Impact factor: 4.300, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/dedecius-0477044.pdf

  8. Estimation of parameters of interior permanent magnet synchronous motors

    CERN Document Server

    Hwang, C C; Pan, C T; Chang, T Y

    2002-01-01

    This paper presents a magnetic circuit model for the estimation of the machine parameters of an interior permanent magnet synchronous machine. It extends the earlier work of Hwang and Cho that focused mainly on the magnetic aspects of motor design. The proposed model is used to calculate the EMF and the d- and q-axis reactances. These calculations are compared with those from finite element analysis and measurement, showing good agreement.

  9. CTER-rapid estimation of CTF parameters with error assessment.

    Science.gov (United States)

    Penczek, Pawel A; Fang, Jia; Li, Xueming; Cheng, Yifan; Loerke, Justus; Spahn, Christian M T

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user it is necessary to provide an assessment of the errors of fitted parameters values. In this work we introduce CTER, a CTF parameters estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03Å without, and 3.85Å with, inclusion of astigmatism parameters. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Estimation of solid earth tidal parameters and FCN with VLBI

    International Nuclear Information System (INIS)

    Krásná, H.

    2012-01-01

    Measurements of the space-geodetic technique VLBI (Very Long Baseline Interferometry) are influenced by a variety of processes which have to be modelled and supplied as a priori information in the analysis of the space-geodetic data. The increasing accuracy of the VLBI measurements allows access to these parameters and provides possibilities to validate them directly from the measured data. The gravitational attraction of the Moon and the Sun causes deformation of the Earth's surface which can reach several decimetres in the radial direction during a day. The displacement is a function of the so-called Love and Shida numbers. Due to the present accuracy of the VLBI measurements the parameters have to be specified as complex numbers, where the imaginary parts describe the anelasticity of the Earth's mantle. Moreover, it is necessary to distinguish between the single tides within the various frequency bands. In this thesis, complex Love and Shida numbers of twelve diurnal and five long-period tides included in the solid Earth tidal displacement modelling are estimated directly from 27 years of VLBI measurements (1984.0 - 2011.0). In this work, the period of the Free Core Nutation (FCN), which shows up in the frequency-dependent solid Earth tidal displacement as well as in the nutation model describing the motion of the Earth's axis in space, is also estimated. The FCN period in both models is treated as a single parameter and it is estimated in a rigorous global adjustment of the VLBI data. The obtained value of -431.18 ± 0.10 sidereal days differs slightly from the conventional value -431.39 sidereal days given in the IERS Conventions 2010. An empirical FCN model based on variable amplitude and phase is determined, whose parameters are estimated in yearly steps directly within VLBI global solutions. (author)

  11. Estimating cellular parameters through optimization procedures: elementary principles and applications

    Directory of Open Access Journals (Sweden)

    Akatsuki eKimura

    2015-03-01

    Full Text Available Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus obtain mechanistic insights into phenomena of interest.
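
    The gradient-plus-stochastic idea outlined above can be illustrated with scipy: a local gradient-based minimizer is compared with basin hopping, which adds stochastic jumps to escape local minima. The exponential toy model and the data below are assumptions for illustration only.

    ```python
    # Sketch: SSE minimization with a gradient-based local optimizer and a
    # stochastic global search (basin hopping), in the spirit of the approaches
    # reviewed above. Model and data are illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize, basinhopping

    t = np.linspace(0, 10, 50)
    data = 2.0 * np.exp(-0.5 * t) + np.random.normal(0, 0.05, t.size)

    def sse(p):
        a, k = p
        return np.sum((data - a * np.exp(-k * t)) ** 2)

    local = minimize(sse, x0=[1.0, 1.0], method="L-BFGS-B")   # gradient-based
    globl = basinhopping(sse, x0=[1.0, 1.0], niter=50)        # with stochastic jumps
    print("local optimum: ", local.x)
    print("basin hopping: ", globl.x)
    ```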

  12. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithms (GAs) optimization procedure for the estimation of such parameters. The Genetic Algorithms' search of the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large amount of solution points which possibly carries relevant information on the underlying model characteristics. A possible utilization of this information amounts to create and update an archive with the set of best solutions found at each generation and then to analyze the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model of literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way to stabilize first the most important parameters of the model and later those which influence little the model outputs. In this sense, besides estimating efficiently the parameters values, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output. The

  13. Orbital parameters of the multiple system EM Boo

    Science.gov (United States)

    Özkardeş, B.; Bakış, H.; Bakış, V.

    2018-02-01

    EM Boo is a relatively bright (V = 8.98 mag.), short orbital period (P ≈ 2.45 days) binary star member of the multiple system WDS J14485+2445AB. There is neither a photometric nor a spectroscopic study of the system in the literature. In this work, we obtained the spectroscopic orbital parameters of the system from new high-resolution spectroscopic observations made with the échelle spectrograph attached to the UBT60 telescope of Akdeniz University. The spectroscopic solution yielded the values K1 = 100.7±2.6 km/s, K2 = 120.1±2.6 km/s and Vγ = -14.6±3.1 km/s, and thus a mass ratio of q = 0.838±0.064.

  14. Applicability of genetic algorithms to parameter estimation of economic models

    Directory of Open Access Journals (Sweden)

    Marcel Ševela

    2004-01-01

    Full Text Available The paper concentrates on the capability of genetic algorithms for parameter estimation of non-linear economic models. We test the ability of genetic algorithms to estimate the parameters of a demand function for durable goods and simultaneously search for the genetic algorithm settings that lead to maximum effectiveness of the computation. Genetic algorithms combine deterministic iterative computation methods with stochastic methods. In the genetic algorithm approach each possible solution is represented by one individual, and the lives of all generations of individuals are governed by a few parameters of the genetic algorithm. Our simulations resulted in an optimal mutation rate of 15% of all bits in the chromosomes and an optimal elitism rate of 20%. We could not determine an optimal generation size, because generation size is positively correlated with the effectiveness of the genetic algorithm over the whole range under study, although its impact is decreasing. The genetic algorithm used was most sensitive to the mutation rate, then to the generation size; the sensitivity to the elitism rate is not as strong.
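
    To make the mechanics concrete, the sketch below runs a small real-coded genetic algorithm using the rates reported above (15% mutation, 20% elitism) to fit a constant-elasticity demand function to synthetic data. The demand model, population size, and operators are illustrative assumptions (the paper uses a binary, bit-level encoding), not the paper's setup.

    ```python
    # Sketch: a real-coded genetic algorithm with elitism and mutation, fitting
    # parameters (a, b) of an assumed demand function q = a * p**(-b).
    import numpy as np

    rng = np.random.default_rng(4)
    price = np.linspace(1, 10, 30)
    q_obs = 50.0 * price ** (-1.3) * rng.lognormal(0, 0.05, 30)

    def fitness(pop):
        pred = pop[:, [0]] * price[None, :] ** (-pop[:, [1]])
        return -np.sum((pred - q_obs) ** 2, axis=1)            # higher is better

    pop = rng.uniform([1, 0.1], [100, 3], size=(60, 2))        # initial population
    for gen in range(200):
        fit = fitness(pop)
        order = np.argsort(fit)[::-1]
        elite = pop[order[:12]]                                # 20% elitism
        parents = pop[order[:30]]                              # rank-based selection
        kids = parents[rng.integers(0, 30, (48, 2)), [0, 1]]   # uniform crossover
        mask = rng.random(kids.shape) < 0.15                   # 15% mutation rate
        kids = np.where(mask, kids * rng.normal(1, 0.1, kids.shape), kids)
        pop = np.vstack([elite, kids])

    best = pop[np.argmax(fitness(pop))]
    print("estimated (a, b):", best)
    ```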

  15. Comparison of sampling techniques for Bayesian parameter estimation

    Science.gov (United States)

    Allison, Rupert; Dunkley, Joanna

    2014-02-01

    The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hastings sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
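
    As a minimal illustration of the first of the three samplers discussed above, the sketch below runs a random-walk Metropolis-Hastings chain on a two-parameter Gaussian likelihood with an implicit flat prior; the step size, chain length, and burn-in are assumptions chosen for this toy case.

    ```python
    # Sketch: random-walk Metropolis-Hastings on a 2-parameter Gaussian
    # likelihood (toy model; flat prior is implicit in the log-posterior).
    import numpy as np

    rng = np.random.default_rng(5)
    data = rng.normal(loc=1.5, scale=0.8, size=200)

    def log_post(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        return -0.5 * np.sum(((data - mu) / sigma) ** 2) - data.size * log_sigma

    theta = np.array([0.0, 0.0])
    chain = []
    for _ in range(20000):
        prop = theta + rng.normal(0, 0.05, 2)          # random-walk proposal
        if np.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop                               # accept
        chain.append(theta)

    chain = np.array(chain[5000:])                     # discard burn-in
    print("posterior mean mu, sigma:",
          chain[:, 0].mean(), np.exp(chain[:, 1]).mean())
    ```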

  16. Automatic estimation of elasticity parameters in breast tissue

    Science.gov (United States)

    Skerl, Katrin; Cochran, Sandy; Evans, Andrew

    2014-03-01

    Shear wave elastography (SWE), a novel ultrasound imaging technique, can provide unique information about cancerous tissue. To estimate elasticity parameters, a region of interest (ROI) is manually positioned over the stiffest part of the shear wave image (SWI). The aim of this work is to estimate the elasticity parameters, i.e., mean elasticity, maximum elasticity and standard deviation, fully automatically. Ultrasonic SWIs of a breast elastography phantom and of breast tissue in vivo were acquired using the Aixplorer system (SuperSonic Imagine, Aix-en-Provence, France). First, the SWI within the ultrasonic B-mode image was detected using MATLAB, and then the elasticity values were extracted. The ROI was automatically positioned over the stiffest part of the SWI and the elasticity parameters were calculated. Finally, all values were saved in a spreadsheet that also contains the patient's study ID and is readily available to physicians and clinical staff for further evaluation, thereby increasing efficiency. This algorithm simplifies handling, especially for the performance and evaluation of clinical trials. The SWE processing method allows physicians easy access to the elasticity parameters of examinations from their own and other institutions. This reduces clinical time and effort, simplifies evaluation of data in clinical trials, and improves reproducibility.

  17. Rapid estimation of high-parameter auditory-filter shapes

    Science.gov (United States)

    Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.

    2014-01-01

    A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials. PMID:25324086

  18. Noninvasive optical monitoring multiple physiological parameters response to cytokine storm

    Science.gov (United States)

    Li, Zebin; Li, Ting

    2018-02-01

    Cancer and other diseases originating from immune or genetic problems have become a leading cause of death. Gene/cell therapy is a highlighted potential method for the treatment of these diseases. However, the treatment often causes a cytokine storm, which can trigger acute respiratory distress syndrome and multiple organ failure. Here we developed a point-of-care device for noninvasively and simultaneously monitoring multiple cytokine-storm-induced physiological parameters. Oxy-hemoglobin, deoxy-hemoglobin, water concentration and deep-tissue/tumor temperature variations were simultaneously measured by extended near infrared spectroscopy. Detection algorithms for symptoms such as shock, edema, deep-tissue fever and tissue fibrosis were developed and included. Based on these measurements, modeling of patient tolerance and cytokine storm intensity was carried out. This custom device was tested on patients experiencing cytokine storm in an intensive care unit. The preliminary data indicated the potential of our device in popular and milestone gene/cell therapies, especially chimeric antigen receptor T-cell (CAR-T) immunotherapy.

  19. Basic MR sequence parameters systematically bias automated brain volume estimation

    International Nuclear Information System (INIS)

    Haller, Sven; Falkovskiy, Pavel; Roche, Alexis; Marechal, Benedicte; Meuli, Reto; Thiran, Jean-Philippe; Krueger, Gunnar; Lovblad, Karl-Olof; Kober, Tobias

    2016-01-01

    Automated brain MRI morphometry, including hippocampal volumetry for Alzheimer disease, is increasingly recognized as a biomarker. Consequently, a rapidly increasing number of software tools have become available. We tested whether modifications of simple MR protocol parameters typically used in clinical routine systematically bias automated brain MRI segmentation results. The study was approved by the local ethical committee and included 20 consecutive patients (13 females, mean age 75.8 ± 13.8 years) undergoing clinical brain MRI at 1.5 T for workup of cognitive decline. We compared three 3D T1 magnetization prepared rapid gradient echo (MPRAGE) sequences with the following parameter settings: ADNI-2 1.2 mm iso-voxel, no image filtering, LOCAL- 1.0 mm iso-voxel no image filtering, LOCAL+ 1.0 mm iso-voxel with image edge enhancement. Brain segmentation was performed by two different and established analysis tools, FreeSurfer and MorphoBox, using standard parameters. Spatial resolution (1.0 versus 1.2 mm iso-voxel) and modification in contrast resulted in relative estimated volume difference of up to 4.28 % (p < 0.001) in cortical gray matter and 4.16 % (p < 0.01) in hippocampus. Image data filtering resulted in estimated volume difference of up to 5.48 % (p < 0.05) in cortical gray matter. A simple change of MR parameters, notably spatial resolution, contrast, and filtering, may systematically bias results of automated brain MRI morphometry of up to 4-5 %. This is in the same range as early disease-related brain volume alterations, for example, in Alzheimer disease. Automated brain segmentation software packages should therefore require strict MR parameter selection or include compensatory algorithms to avoid MR parameter-related bias of brain morphometry results. (orig.)

  20. Impact of relativistic effects on cosmological parameter estimation

    Science.gov (United States)

    Lorenz, Christiane S.; Alonso, David; Ferreira, Pedro G.

    2018-01-01

    Future surveys will access large volumes of space and hence very long wavelength fluctuations of the matter density and gravitational field. It has been argued that the set of secondary effects that affect the galaxy distribution, relativistic in nature, will bring new, complementary cosmological constraints. We study this claim in detail by focusing on a subset of wide-area future surveys: Stage-4 cosmic microwave background experiments and photometric redshift surveys. In particular, we look at the magnification lensing contribution to galaxy clustering and general-relativistic corrections to all observables. We quantify the amount of information encoded in these effects in terms of the tightening of the final cosmological constraints as well as the potential bias in inferred parameters associated with neglecting them. We do so for a wide range of cosmological parameters, covering neutrino masses, standard dark-energy parametrizations and scalar-tensor gravity theories. Our results show that, while the effect of lensing magnification to number counts does not contain a significant amount of information when galaxy clustering is combined with cosmic shear measurements, this contribution does play a significant role in biasing estimates on a host of parameter families if unaccounted for. Since the amplitude of the magnification term is controlled by the slope of the source number counts with apparent magnitude, s (z ), we also estimate the accuracy to which this quantity must be known to avoid systematic parameter biases, finding that future surveys will need to determine s (z ) to the ˜5 %- 10 % level. On the contrary, large-scale general-relativistic corrections are irrelevant both in terms of information content and parameter bias for most cosmological parameters but significant for the level of primordial non-Gaussianity.

  1. Basic MR sequence parameters systematically bias automated brain volume estimation

    Energy Technology Data Exchange (ETDEWEB)

    Haller, Sven [University of Geneva, Faculty of Medicine, Geneva (Switzerland); Affidea Centre de Diagnostique Radiologique de Carouge CDRC, Geneva (Switzerland); Falkovskiy, Pavel; Roche, Alexis; Marechal, Benedicte [Siemens Healthcare HC CEMEA SUI DI BM PI, Advanced Clinical Imaging Technology, Lausanne (Switzerland); University Hospital (CHUV), Department of Radiology, Lausanne (Switzerland); Meuli, Reto [University Hospital (CHUV), Department of Radiology, Lausanne (Switzerland); Thiran, Jean-Philippe [LTS5, Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland); Krueger, Gunnar [Siemens Medical Solutions USA, Inc., Boston, MA (United States); Lovblad, Karl-Olof [University of Geneva, Faculty of Medicine, Geneva (Switzerland); University Hospitals of Geneva, Geneva (Switzerland); Kober, Tobias [Siemens Healthcare HC CEMEA SUI DI BM PI, Advanced Clinical Imaging Technology, Lausanne (Switzerland); LTS5, Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland)

    2016-11-15

    Automated brain MRI morphometry, including hippocampal volumetry for Alzheimer disease, is increasingly recognized as a biomarker. Consequently, a rapidly increasing number of software tools have become available. We tested whether modifications of simple MR protocol parameters typically used in clinical routine systematically bias automated brain MRI segmentation results. The study was approved by the local ethical committee and included 20 consecutive patients (13 females, mean age 75.8 ± 13.8 years) undergoing clinical brain MRI at 1.5 T for workup of cognitive decline. We compared three 3D T1 magnetization prepared rapid gradient echo (MPRAGE) sequences with the following parameter settings: ADNI-2 1.2 mm iso-voxel, no image filtering, LOCAL- 1.0 mm iso-voxel no image filtering, LOCAL+ 1.0 mm iso-voxel with image edge enhancement. Brain segmentation was performed by two different and established analysis tools, FreeSurfer and MorphoBox, using standard parameters. Spatial resolution (1.0 versus 1.2 mm iso-voxel) and modification in contrast resulted in relative estimated volume difference of up to 4.28 % (p < 0.001) in cortical gray matter and 4.16 % (p < 0.01) in hippocampus. Image data filtering resulted in estimated volume difference of up to 5.48 % (p < 0.05) in cortical gray matter. A simple change of MR parameters, notably spatial resolution, contrast, and filtering, may systematically bias results of automated brain MRI morphometry of up to 4-5 %. This is in the same range as early disease-related brain volume alterations, for example, in Alzheimer disease. Automated brain segmentation software packages should therefore require strict MR parameter selection or include compensatory algorithms to avoid MR parameter-related bias of brain morphometry results. (orig.)

  2. Chloramine demand estimation using surrogate chemical and microbiological parameters.

    Science.gov (United States)

    Moradi, Sina; Liu, Sanly; Chow, Christopher W K; van Leeuwen, John; Cook, David; Drikas, Mary; Amal, Rose

    2017-07-01

    A model is developed, using a nonlinear regression analysis method, to enable estimation of chloramine demand in full-scale drinking water supplies based on chemical and microbiological factors that affect the chloramine decay rate. The model is based on the organic character (specific ultraviolet absorbance, SUVA) of the water samples and a laboratory measure of the microbiological decay of chloramine (Fm). The applicability of the model for estimation of chloramine residual (and hence chloramine demand) was tested on several waters from different water treatment plants in Australia through statistical comparison of the experimental and predicted data. Results showed that the model was able to simulate and estimate chloramine demand at various times in real drinking water systems. To elucidate the loss of chloramine over the wide variation of water quality used in this study, the model incorporates both fast and slow chloramine decay pathways. The significance of the estimated fast and slow decay rate constants as the kinetic parameters of the model for three water sources in Australia is discussed. It was found that, for the same water source, the kinetic parameters remain the same. This modelling approach has the potential to be used by water treatment operators as a decision support tool in order to manage chloramine disinfection. Copyright © 2017. Published by Elsevier B.V.
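
    A minimal sketch of the fast-plus-slow decay idea follows, assuming a simple double-exponential residual model fitted with scipy on synthetic data; the paper's full model additionally links the rate constants to SUVA and the microbiological decay factor Fm.

    ```python
    # Sketch: fitting fast and slow first-order decay pathways to a chloramine
    # residual curve (synthetic data; rate constants are illustrative).
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0, 150, 40)                         # hours

    def residual(t, c0, f, k_fast, k_slow):
        return c0 * (f * np.exp(-k_fast * t) + (1 - f) * np.exp(-k_slow * t))

    obs = residual(t, 2.0, 0.3, 0.15, 0.005) + np.random.normal(0, 0.02, t.size)
    popt, _ = curve_fit(residual, t, obs, p0=[2.0, 0.5, 0.1, 0.01],
                        bounds=([0, 0, 0, 0], [5, 1, 1, 0.1]))
    print("c0=%.2f f=%.2f k_fast=%.3f k_slow=%.4f" % tuple(popt))
    ```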

  3. Estimation of Snow Parameters from Dual-Wavelength Airborne Radar

    Science.gov (United States)

    Liao, Liang; Meneghini, Robert; Iguchi, Toshio; Detwiler, Andrew

    1997-01-01

    Estimation of snow characteristics from airborne radar measurements would complement in-situ measurements. While in-situ data provide more detailed information than radar, they are limited in their space-time sampling. In the absence of significant cloud water contents, dual-wavelength radar data can be used to estimate 2 parameters of a drop size distribution if the snow density is assumed. To estimate, rather than assume, a snow density is difficult, however, and represents a major limitation in the radar retrieval. There are a number of ways that this problem can be investigated: direct comparisons with in-situ measurements, examination of the large-scale characteristics of the retrievals and their comparison to cloud model outputs, use of LDR measurements, and comparisons to the theoretical results of Passarelli (1978) and others. In this paper we address the first approach and, in part, the second.

  4. A parameter tree approach to estimating system sensitivities to parameter sets

    International Nuclear Information System (INIS)

    Jarzemba, M.S.; Sagar, B.

    2000-01-01

    A post-processing technique for determining relative system sensitivity to groups of parameters and system components is presented. It is assumed that an appropriate parametric model is used to simulate system behavior using Monte Carlo techniques and that a set of realizations of system output(s) is available. The objective of our technique is to analyze the input vectors and the corresponding output vectors (that is, post-process the results) to estimate the relative sensitivity of the output to input parameters (taken singly and as a group) and thereby rank them. This technique is different from the design of experimental techniques in that a partitioning of the parameter space is not required before the simulation. A tree structure (which looks similar to an event tree) is developed to better explain the technique. Each limb of the tree represents a particular combination of parameters or a combination of system components. For convenience and to distinguish it from the event tree, we call it the parameter tree. To construct the parameter tree, the samples of input parameter values are treated as either a '+' or a '-' based on whether or not the sampled parameter value is greater than or less than a specified branching criterion (e.g., mean, median, percentile of the population). The corresponding system outputs are also segregated into similar bins. Partitioning the first parameter into a '+' or a '-' bin creates the first level of the tree containing two branches. At the next level, realizations associated with each first-level branch are further partitioned into two bins using the branching criteria on the second parameter and so on until the tree is fully populated. Relative sensitivities are then inferred from the number of samples associated with each branch of the tree. The parameter tree approach is illustrated by applying it to a number of preliminary simulations of the proposed high-level radioactive waste repository at Yucca Mountain, NV. Using a
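
    A toy version of the parameter-tree post-processing described above is sketched below (illustrative model and branching criteria only, not the repository performance assessment): Monte Carlo input samples are split into '+'/'-' branches at their medians, and the branch statistics are compared to rank parameter importance.

    ```python
    # Sketch: a two-level parameter tree built by post-processing Monte Carlo
    # samples; inputs are split at their medians and branch outputs compared.
    import numpy as np

    rng = np.random.default_rng(6)
    x1, x2 = rng.uniform(0, 1, 5000), rng.uniform(0, 1, 5000)
    y = 4.0 * x1 + 0.5 * x2 + rng.normal(0, 0.1, 5000)   # x1 dominates the output

    b1 = x1 > np.median(x1)          # first-level branching criterion
    b2 = x2 > np.median(x2)          # second-level branching criterion
    for s1 in (True, False):
        for s2 in (True, False):
            branch = (b1 == s1) & (b2 == s2)
            label = ("+" if s1 else "-") + ("+" if s2 else "-")
            print(label, "n=%4d" % branch.sum(), "mean y=%.2f" % y[branch].mean())
    # The spread of branch means across the x1 split greatly exceeds the spread
    # across the x2 split, ranking x1 as the more influential parameter.
    ```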

  5. Observations of multiple order parameters in 5f electron systems

    International Nuclear Information System (INIS)

    Blackburn, E.

    2005-12-01

    In this thesis, multiple order parameters originating in the same electronic system are studied. The multi-k magnetic structures, where more than one propagation wavevector, k, is observed in the same volume, are considered as prototypical models. The effect of this structure on the elastic and inelastic response is studied. In cubic 3-k uranium rock-salts, unexpected elastic diffraction events were observed at positions in reciprocal space where the structure factor should have been zero. These diffraction peaks are identified with correlations between the (orthogonal) magnetic order parameters. The 3-k structure also affects the observed dynamics; the spin-wave fluctuations in uranium dioxide as observed by inelastic neutron polarization analysis can only be explained on the basis of a 3-k structure. In the antiferromagnetic superconductor UPd2Al3 the magnetic order and the superconducting state coexist, and are apparently generated by the same heavy fermions. The effect of an external magnetic field on both the normal and superconducting states is examined. In the normal state, the compound displays Fermi-liquid-like behaviour. The inelastic neutron response is strongly renormalized on entering the superconducting state, and high-precision measurements of the low-energy transfer part of this response confirm that the superconducting energy gap has the same symmetry as the antiferromagnetic lattice. (author)

  6. A new method of hybrid frequency hopping signals selection and blind parameter estimation

    Science.gov (United States)

    Zeng, Xiaoyu; Jiao, Wencheng; Sun, Huixian

    2018-04-01

    Frequency hopping communication is widely used in military communications at home and abroad. In the case of single-channel reception, it is difficult to process multiple frequency hopping signals both effectively and simultaneously. A method of hybrid FH signal selection and blind parameter estimation is proposed. The method makes use of spectral transformation, spectral entropy calculation and the basic theory of PRI transformation to realize the sorting and parameter estimation of the components in the hybrid frequency hopping signal. The simulation results show that this method can correctly sort the frequency hopping component signals; at an SNR of 10 dB, the estimation error of the frequency hopping period is about 5% and that of the frequency hopping frequency is less than 1%. However, the performance of the method deteriorates seriously at low SNR.

  7. Estimating unknown parameters in haemophilia using expert judgement elicitation.

    Science.gov (United States)

    Fischer, K; Lewandowski, D; Janssen, M P

    2013-09-01

    The increasing attention to healthcare costs and treatment efficiency has led to an increasing demand for quantitative data concerning patient and treatment characteristics in haemophilia. However, most of these data are difficult to obtain. The aim of this study was to use expert judgement elicitation (EJE) to estimate currently unavailable key parameters for treatment models in severe haemophilia A. Using a formal expert elicitation procedure, 19 international experts provided information on (i) natural bleeding frequency according to age and onset of bleeding, (ii) treatment of bleeds, (iii) time needed to control bleeding after starting secondary prophylaxis, (iv) dose requirements for secondary prophylaxis according to onset of bleeding, and (v) life expectancy. For each parameter experts provided their quantitative estimates (median, P10, P90), which were combined using a graphical method. In addition, information was obtained concerning key decision parameters of haemophilia treatment. There was most agreement between experts regarding bleeding frequencies for patients treated on demand with an average onset of joint bleeding (1.7 years): median 12 joint bleeds per year (95% confidence interval 0.9-36) for patients ≤ 18, and 11 (0.8-61) for adult patients. Less agreement was observed concerning the estimated effective dose for secondary prophylaxis in adults: median 2000 IU every other day. The majority (63%) of experts expected that a single minor joint bleed could cause irreversible damage, and would accept up to three minor joint bleeds or one trauma-related joint bleed annually on prophylaxis. Expert judgement elicitation allowed structured capturing of quantitative expert estimates. It generated novel data to be used in computer modelling, clinical care, and trial design. © 2013 John Wiley & Sons Ltd.

  8. NEWBOX: A computer program for parameter estimation in diffusion problems

    International Nuclear Information System (INIS)

    Nestor, C.W. Jr.; Godbee, H.W.; Joy, D.S.

    1989-01-01

    In the analysis of experiments to determine amounts of material transferred from one medium to another (e.g., the escape of chemically hazardous and radioactive materials from solids), there are at least 3 important considerations. These are (1) is the transport amenable to treatment by established mass transport theory; (2) do methods exist to find estimates of the parameters which will give a best fit, in some sense, to the experimental data; and (3) what computational procedures are available for evaluating the theoretical expressions. The authors have made the assumption that established mass transport theory is an adequate model for the situations under study. Since the solutions of the diffusion equation are usually nonlinear in some parameters (diffusion coefficient, reaction rate constants, etc.), use of a method of parameter adjustment involving first partial derivatives can be complicated and prone to errors in the computation of the derivatives. In addition, the parameters must satisfy certain constraints; for example, the diffusion coefficient must remain positive. For these reasons, a variant of the constrained simplex method of M. J. Box has been used to estimate parameters. It is similar, but not identical, to the downhill simplex method of Nelder and Mead. In general, they calculate the fraction of material transferred as a function of time from expressions obtained by the inversion of the Laplace transform of the fraction transferred, rather than by taking derivatives of a calculated concentration profile. With the above approaches to the 3 considerations listed at the outset, they developed a computer program NEWBOX, usable on a personal computer, to calculate the fractional release of material from 4 different geometrical shapes (semi-infinite medium, finite slab, finite circular cylinder, and sphere), accounting for several different boundary conditions
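
    In the same spirit as the simplex-based fitting described above, the sketch below uses a derivative-free Nelder-Mead search to fit an effective diffusion coefficient to fractional-release data for a sphere. The textbook series solution and the synthetic data are assumptions for illustration, not output from NEWBOX.

    ```python
    # Sketch: fitting a diffusion coefficient to fractional-release data from a
    # sphere with a derivative-free simplex search (Nelder-Mead). The log
    # parameterization keeps D positive, echoing the positivity constraint above.
    import numpy as np
    from scipy.optimize import minimize

    a = 1.0                                   # sphere radius, cm
    t = np.linspace(1, 200, 30)               # time, days

    def frac_release(t, D, nterms=50):
        n = np.arange(1, nterms + 1)[:, None]
        series = np.sum(np.exp(-(n * np.pi) ** 2 * D * t[None, :] / a**2) / n**2,
                        axis=0)
        return 1.0 - (6.0 / np.pi**2) * series

    obs = frac_release(t, 1e-4) + np.random.normal(0, 0.005, t.size)

    def sse(logD):
        return np.sum((obs - frac_release(t, np.exp(logD[0]))) ** 2)

    res = minimize(sse, x0=[np.log(1e-3)], method="Nelder-Mead")
    print("estimated D = %.2e cm^2/day" % np.exp(res.x[0]))
    ```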

  9. Target parameter estimation for spatial and temporal formulations in MIMO radars using compressive sensing

    KAUST Repository

    Ali, Hussain; Ahmed, Sajid; Al-Naffouri, Tareq Y.; Sharawi, Mohammad S.; Alouini, Mohamed-Slim

    2017-01-01

    Conventional algorithms used for parameter estimation in colocated multiple-input-multiple-output (MIMO) radars require the inversion of the covariance matrix of the received spatial samples. In these algorithms, the number of received snapshots should be at least equal to the size of the covariance matrix. For large size MIMO antenna arrays, the inversion of the covariance matrix becomes computationally very expensive. Compressive sensing (CS) algorithms which do not require the inversion of the complete covariance matrix can be used for parameter estimation with fewer number of received snapshots. In this work, it is shown that the spatial formulation is best suitable for large MIMO arrays when CS algorithms are used. A temporal formulation is proposed which fits the CS algorithms framework, especially for small size MIMO arrays. A recently proposed low-complexity CS algorithm named support agnostic Bayesian matching pursuit (SABMP) is used to estimate target parameters for both spatial and temporal formulations for the unknown number of targets. The simulation results show the advantage of SABMP algorithm utilizing low number of snapshots and better parameter estimation for both small and large number of antenna elements. Moreover, it is shown by simulations that SABMP is more effective than other existing algorithms at high signal-to-noise ratio.

  10. Target parameter estimation for spatial and temporal formulations in MIMO radars using compressive sensing

    KAUST Repository

    Ali, Hussain

    2017-01-09

    Conventional algorithms used for parameter estimation in colocated multiple-input-multiple-output (MIMO) radars require the inversion of the covariance matrix of the received spatial samples. In these algorithms, the number of received snapshots should be at least equal to the size of the covariance matrix. For large size MIMO antenna arrays, the inversion of the covariance matrix becomes computationally very expensive. Compressive sensing (CS) algorithms which do not require the inversion of the complete covariance matrix can be used for parameter estimation with fewer number of received snapshots. In this work, it is shown that the spatial formulation is best suitable for large MIMO arrays when CS algorithms are used. A temporal formulation is proposed which fits the CS algorithms framework, especially for small size MIMO arrays. A recently proposed low-complexity CS algorithm named support agnostic Bayesian matching pursuit (SABMP) is used to estimate target parameters for both spatial and temporal formulations for the unknown number of targets. The simulation results show the advantage of SABMP algorithm utilizing low number of snapshots and better parameter estimation for both small and large number of antenna elements. Moreover, it is shown by simulations that SABMP is more effective than other existing algorithms at high signal-to-noise ratio.

  11. Statistical estimation of ultrasonic propagation path parameters for aberration correction.

    Science.gov (United States)

    Waag, Robert C; Astheimer, Jeffrey P

    2005-05-01

    Parameters in a linear filter model for ultrasonic propagation are found using statistical estimation. The model uses an inhomogeneous-medium Green's function that is decomposed into a homogeneous-transmission term and a path-dependent aberration term. Power and cross-power spectra of random-medium scattering are estimated over the frequency band of the transmit-receive system by using closely situated scattering volumes. The frequency-domain magnitude of the aberration is obtained from a normalization of the power spectrum. The corresponding phase is reconstructed from cross-power spectra of subaperture signals at adjacent receive positions by a recursion. The subapertures constrain the receive sensitivity pattern to eliminate measurement system phase contributions. The recursion uses a Laplacian-based algorithm to obtain phase from phase differences. Pulse-echo waveforms were acquired from a point reflector and a tissue-like scattering phantom through a tissue-mimicking aberration path from neighboring volumes having essentially the same aberration path. Propagation path aberration parameters calculated from the measurements of random scattering through the aberration phantom agree with corresponding parameters calculated for the same aberrator and array position by using echoes from the point reflector. The results indicate the approach describes, in addition to time shifts, waveform amplitude and shape changes produced by propagation through distributed aberration under realistic conditions.

  12. PARAMETER ESTIMATION OF VALVE STICTION USING ANT COLONY OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    S. Kalaivani

    2012-07-01

    Full Text Available In this paper, a procedure for quantifying valve stiction in control loops based on ant colony optimization is proposed. Pneumatic control valves are widely used in the process industry. Control valves contain non-linearities such as stiction, backlash, and deadband that in turn cause oscillations in the process output. Stiction is one of the long-standing problems and the most severe one in control valves. Thus the measurement data from an oscillating control loop can be used as a possible diagnostic signal to provide an estimate of the stiction magnitude. Quantification of control valve stiction is still a challenging issue. Prior to stiction detection and quantification, it is necessary to choose a suitable model structure to describe control-valve stiction. To understand the stiction phenomenon, the Stenman model is used. Ant Colony Optimization (ACO), an intelligent swarm algorithm, has proved effective in various fields. The ACO algorithm is inspired by the natural trail-following behaviour of ants. The parameters of the Stenman model are estimated using ant colony optimization, from the input-output data, by minimizing the error between the actual stiction model output and the simulated stiction model output. Using ant colony optimization, the Stenman model, with known nonlinear structure and unknown parameters, can be estimated.

  13. Sensitivity and parameter-estimation precision for alternate LISA configurations

    International Nuclear Information System (INIS)

    Vallisneri, Michele; Crowder, Jeff; Tinto, Massimo

    2008-01-01

    We describe a simple framework to assess the LISA scientific performance (more specifically, its sensitivity and expected parameter-estimation precision for prescribed gravitational-wave signals) under the assumption of failure of one or two inter-spacecraft laser measurements (links) and of one to four intra-spacecraft laser measurements. We apply the framework to the simple case of measuring the LISA sensitivity to monochromatic circular binaries, and the LISA parameter-estimation precision for the gravitational-wave polarization angle of these systems. Compared to the six-link baseline configuration, the five-link case is characterized by a small loss in signal-to-noise ratio (SNR) in the high-frequency section of the LISA band; the four-link case shows a reduction by a factor of √2 at low frequencies, and by up to ∼2 at high frequencies. The uncertainty in the estimate of polarization, as computed in the Fisher-matrix formalism, also worsens when moving from six to five, and then to four links: this can be explained by the reduced SNR available in those configurations (except for observations shorter than three months, where five and six links do better than four even with the same SNR). In addition, we prove (for generic signals) that the SNR and Fisher matrix are invariant with respect to the choice of a basis of TDI observables; rather, they depend only on which inter-spacecraft and intra-spacecraft measurements are available

  14. Temporal Parameters Estimation for Wheelchair Propulsion Using Wearable Sensors

    Directory of Open Access Journals (Sweden)

    Manoela Ojeda

    2014-01-01

    Full Text Available Due to lower limb paralysis, individuals with spinal cord injury (SCI) rely on their upper limbs for mobility. The prevalence of upper extremity pain and injury is high among this population. We evaluated the performance of three triaxial accelerometers placed on the upper arm, wrist, and under the wheelchair, to estimate temporal parameters of wheelchair propulsion. Twenty-six participants with SCI were asked to push their wheelchair equipped with a SMARTWheel. The estimated stroke number was compared with the criterion from video observations and the estimated push frequency was compared with the criterion from the SMARTWheel. Mean absolute errors (MAE) and mean absolute percentage errors (MAPE) were calculated. Intraclass correlation coefficients and Bland-Altman plots were used to assess the agreement. Results showed reasonable accuracies, especially using the accelerometer placed on the upper arm, where the MAPE was 8.0% for stroke number and 12.9% for push frequency. The ICC was 0.994 for stroke number and 0.916 for push frequency. The wrist and seat accelerometers showed lower accuracy, with MAPEs for the stroke number of 10.8% and 13.4% and ICCs of 0.990 and 0.984, respectively. Results suggested that accelerometers could be an option for monitoring temporal parameters of wheelchair propulsion.
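
    As a toy analogue of the stroke-counting task above, the sketch below detects propulsion peaks in a synthetic upper-arm acceleration signal with scipy; the signal model, threshold, and minimum peak spacing are assumptions and not the estimation method used in the study.

    ```python
    # Sketch: counting propulsion strokes from an accelerometer signal with
    # simple peak detection (synthetic signal; thresholds are assumptions).
    import numpy as np
    from scipy.signal import find_peaks

    fs = 100.0                                      # sampling rate, Hz
    t = np.arange(0, 30, 1 / fs)
    push_freq = 1.1                                 # strokes per second (assumed)
    acc = np.sin(2 * np.pi * push_freq * t) + 0.2 * np.random.randn(t.size)

    peaks, _ = find_peaks(acc, height=0.5, distance=0.5 * fs)
    print("stroke count:", len(peaks))
    print("push frequency: %.2f Hz" % (len(peaks) / t[-1]))
    ```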

  15. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  16. ESPRIT-like algorithm for computational-efficient angle estimation in bistatic multiple-input multiple-output radar

    Science.gov (United States)

    Gong, Jian; Lou, Shuntian; Guo, Yiduo

    2016-04-01

    An estimation of signal parameters via rotational invariance techniques-like (ESPRIT-like) algorithm is proposed to estimate the direction of arrival and direction of departure for bistatic multiple-input multiple-output (MIMO) radar. The properties of a noncircular signal and Euler's formula are first exploited to establish real-valued bistatic MIMO radar array data, composed of sine and cosine components. Then the receiving/transmitting selective matrices are constructed to obtain the receiving/transmitting rotational invariance factors. Since the rotational invariance factor is a cosine function, a symmetrical mirror angle ambiguity may occur. Finally, a maximum likelihood function is used to avoid the estimation ambiguities. Compared with the existing ESPRIT, the proposed algorithm can save about 75% of the computational load owing to its real-valued formulation. Simulation results confirm the effectiveness of the ESPRIT-like algorithm.

  17. A method for model identification and parameter estimation

    International Nuclear Information System (INIS)

    Bambach, M; Heinkenschloss, M; Herty, M

    2013-01-01

    We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)

  18. Simultaneous estimation of QTL parameters for mapping multiple traits

    Indian Academy of Sciences (India)

    LIANG TONG

    2018-03-13

    (Only fragments of the abstract were extracted: the notation defines the conditional probability of the QTL genotype (QiQi, Qiqi or qiqi) for each subject, a random error term for the ith trait value of the jth subject with mean zero, and an adjustment of the conditional probabilities in table 1 when conducting the analysis.)

  19. Estimating demographic parameters using a combination of known-fate and open N-mixture models.

    Science.gov (United States)

    Schmidt, Joshua H; Johnson, Devin S; Lindberg, Mark S; Adams, Layne G

    2015-10-01

    Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark-resight data sets. We provide implementations in both the BUGS language and an R package.

  20. Transport parameter estimation from lymph measurements and the Patlak equation.

    Science.gov (United States)

    Watson, P D; Wolf, M B

    1992-01-01

    Two methods of estimating protein transport parameters for plasma-to-lymph transport data are presented. Both use IBM-compatible computers to obtain least-squares parameters for the solvent drag reflection coefficient and the permeability-surface area product using the Patlak equation. A matrix search approach is described, and the speed and convenience of this are compared with a commercially available gradient method. The results from both of these methods were different from those of a method reported by Reed, Townsley, and Taylor [Am. J. Physiol. 257 (Heart Circ. Physiol. 26): H1037-H1041, 1989]. It is shown that the Reed et al. method contains a systematic error. It is also shown that diffusion always plays an important role for transmembrane transport at the exit end of a membrane channel under all conditions of lymph flow rate and that the statement that diffusion becomes zero at high lymph flow rate depends on a mathematical definition of diffusion.
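
    A least-squares fit of this kind can be set up in a few lines. The sketch below assumes the commonly quoted sieving form of the Patlak equation, CL/CP = (1 − σ)/(1 − σ·exp(−Pe)) with Pe = Jv(1 − σ)/PS, and fits the solvent drag reflection coefficient σ and the permeability-surface area product PS to made-up lymph flow data; it is not the authors' matrix-search or gradient implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def patlak_sieving(jv, sigma, ps):
    """Lymph-to-plasma concentration ratio CL/CP as a function of lymph flow Jv,
    using the commonly quoted form of the Patlak equation (assumed here)."""
    pe = jv * (1.0 - sigma) / ps                 # Peclet number
    return (1.0 - sigma) / (1.0 - sigma * np.exp(-pe))

# hypothetical plasma-to-lymph data: lymph flow (ml/min) vs. measured CL/CP
jv = np.array([0.02, 0.05, 0.10, 0.20, 0.40, 0.80])
ratio = np.array([0.82, 0.68, 0.55, 0.45, 0.40, 0.38])
(sigma_hat, ps_hat), _ = curve_fit(patlak_sieving, jv, ratio, p0=[0.5, 0.1],
                                   bounds=([0.0, 1e-6], [0.999, 10.0]))
print(f"sigma = {sigma_hat:.3f}, PS = {ps_hat:.3f} ml/min")
```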

  1. Synchronization and parameter estimations of an uncertain Rikitake system

    International Nuclear Information System (INIS)

    Aguilar-Ibanez, Carlos; Martinez-Guerra, Rafael; Aguilar-Lopez, Ricardo; Mata-Machuca, Juan L.

    2010-01-01

    In this Letter we address the synchronization and parameter estimation of the uncertain Rikitake system, under the assumption that the state is only partially known. To this end we use the master/slave scheme in conjunction with the adaptive control technique. Our control approach consists of proposing a slave system which has to follow asymptotically the uncertain Rikitake system, referred to as the master system. The gains of the slave system are adjusted continually according to a convenient adaptation control law, until the measurable output errors converge to zero. The convergence analysis is carried out using Barbalat's lemma. In this context, uncertainty means that although the system structure is known, only partial knowledge of the corresponding parameter values is available.

  2. Customized Steady-State Constraints for Parameter Estimation in Non-Linear Ordinary Differential Equation Models.

    Science.gov (United States)

    Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel

    2016-01-01

    Ordinary differential equation models have become a widespread approach to analyzing dynamical systems and understanding underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time, multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization.

  3. Multivariate phase type distributions - Applications and parameter estimation

    DEFF Research Database (Denmark)

    Meisch, David

    The best known univariate probability distribution is the normal distribution. It is used throughout the literature in a broad field of applications. In cases where it is not sensible to use the normal distribution alternative distributions are at hand and well understood, many of these belonging...... and statistical inference, is the multivariate normal distribution. Unfortunately only little is known about the general class of multivariate phase type distribution. Considering the results concerning parameter estimation and inference theory of univariate phase type distributions, the class of multivariate...... projects and depend on reliable cost estimates. The Successive Principle is a group analysis method primarily used for analyzing medium to large projects in relation to cost or duration. We believe that the mathematical modeling used in the Successive Principle can be improved. We suggested a novel...

  4. Energy parameter estimation in solar powered wireless sensor networks

    KAUST Repository

    Mousa, Mustafa

    2014-02-24

    The operation of solar powered wireless sensor networks is associated with numerous challenges. One of the main challenges is the high variability of solar power input and battery capacity, due to factors such as weather, humidity, dust and temperature. In this article, we propose a set of tools that can be implemented onboard high power wireless sensor networks to estimate the battery condition and capacity as well as solar power availability. These parameters are very important to optimize sensing and communications operations and maximize the reliability of the complete system. Experimental results show that the performance of typical Lithium Ion batteries severely degrades outdoors in a matter of weeks or months, and that the availability of solar energy in an urban solar powered wireless sensor network is highly variable, which underlines the need for such power and energy estimation algorithms.

  5. Estimation of Aircraft Nonlinear Unsteady Parameters From Wind Tunnel Data

    Science.gov (United States)

    Klein, Vladislav; Murphy, Patrick C.

    1998-01-01

    Aerodynamic equations were formulated for an aircraft in one-degree-of-freedom large amplitude motion about each of its body axes. The model formulation based on indicial functions separated the resulting aerodynamic forces and moments into static terms, purely rotary terms and unsteady terms. Model identification from experimental data combined stepwise regression and maximum likelihood estimation in a two-stage optimization algorithm that can identify the unsteady term and rotary term if necessary. The identification scheme was applied to oscillatory data in two examples. The model identified from experimental data fit the data well; however, some parameters were estimated with limited accuracy. The resulting model was a good predictor for oscillatory and ramp input data.

  6. Optimization-based particle filter for state and parameter estimation

    Institute of Scientific and Technical Information of China (English)

    Li Fu; Qi Fei; Shi Guangming; Zhang Li

    2009-01-01

    In recent years, the theory of the particle filter has been developed and widely used for state and parameter estimation in nonlinear/non-Gaussian systems. Choosing a good importance density is a critical issue in particle filter design. In order to improve the approximation of the posterior distribution, this paper provides an optimization-based algorithm (the steepest descent method) to generate the proposal distribution and then sample particles from that distribution. The algorithm is applied to a 1-D case, and the simulation results show that the proposed particle filter performs better than the extended Kalman filter (EKF), the standard particle filter (PF), the extended Kalman particle filter (PF-EKF) and the unscented particle filter (UPF), both in efficiency and in estimation precision.
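
    For reference, the baseline that such proposal-design methods build on is the bootstrap particle filter, sketched below on a standard univariate benchmark model. The optimization-based (steepest-descent) proposal of the article is not reproduced, and the model, noise levels, and particle count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, k):                                   # state transition of a common benchmark model
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * k)

def h(x):                                      # measurement function
    return x**2 / 20.0

Q, R, N, T = 10.0, 1.0, 500, 50                # process/measurement noise, particles, steps

# simulate a reference trajectory and noisy measurements
x_true = np.zeros(T)
for k in range(1, T):
    x_true[k] = f(x_true[k - 1], k) + rng.normal(0, np.sqrt(Q))
y = h(x_true) + rng.normal(0, np.sqrt(R), T)

# bootstrap particle filter: propagate with the transition prior, weight by the
# likelihood, resample
particles = rng.normal(0, 2, N)
x_est = np.zeros(T)
for k in range(T):
    if k > 0:
        particles = f(particles, k) + rng.normal(0, np.sqrt(Q), N)
    w = np.exp(-0.5 * (y[k] - h(particles))**2 / R) + 1e-300
    w /= w.sum()
    x_est[k] = np.sum(w * particles)
    particles = rng.choice(particles, size=N, p=w)   # multinomial resampling

print("RMSE:", np.sqrt(np.mean((x_est - x_true)**2)))
```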

  7. Energy parameter estimation in solar powered wireless sensor networks

    KAUST Repository

    Mousa, Mustafa; Claudel, Christian G.

    2014-01-01

    The operation of solar powered wireless sensor networks is associated with numerous challenges. One of the main challenges is the high variability of solar power input and battery capacity, due to factors such as weather, humidity, dust and temperature. In this article, we propose a set of tools that can be implemented onboard high power wireless sensor networks to estimate the battery condition and capacity as well as solar power availability. These parameters are very important to optimize sensing and communications operations and maximize the reliability of the complete system. Experimental results show that the performance of typical Lithium Ion batteries severely degrades outdoors in a matter of weeks or months, and that the availability of solar energy in an urban solar powered wireless sensor network is highly variable, which underlines the need for such power and energy estimation algorithms.

  8. Parameter Estimation Analysis for Hybrid Adaptive Fault Tolerant Control

    Science.gov (United States)

    Eshak, Peter B.

    Research efforts have increased in recent years toward the development of intelligent fault tolerant control laws, which are capable of helping the pilot to safely maintain aircraft control at post-failure conditions. Researchers at West Virginia University (WVU) have been actively involved in the development of fault tolerant adaptive control laws in all three major categories: direct, indirect, and hybrid. The first implemented design to provide adaptation was a direct adaptive controller, which used artificial neural networks to generate augmentation commands in order to reduce the modeling error. Indirect adaptive laws were implemented in another controller, which utilized online PID to estimate and update the controller parameters. Finally, a new controller design was introduced, which integrated both direct and indirect control laws. This controller is known as the hybrid adaptive controller. This last design outperformed the two earlier designs in terms of less neural network effort and better tracking quality. The performance of the online PID plays an important role in the quality of the hybrid controller; therefore, the quality of the estimation is of great importance. Unfortunately, PID is not perfect and the online estimation process has some inherent issues; the online PID estimates are primarily affected by delays and biases. In order to ensure that reliable estimates are passed to the controller, the estimator takes some time to converge. Moreover, the estimator will often converge to a biased value. This thesis conducts a sensitivity analysis of the estimation issues, delay and bias, and their effect on the tracking quality. In addition, the performance of the hybrid controller as compared to the direct adaptive controller is explored. In order to serve this purpose, a simulation environment in MATLAB/SIMULINK has been created. The simulation environment is customized to provide the user with the flexibility to add different combinations of biases and delays to

  9. Estimation of modal parameters using bilinear joint time frequency distributions

    Science.gov (United States)

    Roshan-Ghias, A.; Shamsollahi, M. B.; Mobed, M.; Behzad, M.

    2007-07-01

    In this paper, a new method is proposed for modal parameter estimation using time-frequency representations. The smoothed pseudo Wigner-Ville distribution, a member of Cohen's class of distributions, is used to decouple vibration modes completely in order to study each mode separately. This distribution reduces the cross-terms which are troublesome in the Wigner-Ville distribution, while retaining the resolution. The method was applied to highly damped systems, and results were superior to those obtained via other conventional methods.

  10. Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver

    Science.gov (United States)

    Kang, Ling; Zhou, Liwei

    2018-02-01

    The Muskingum model is an effective flood-routing technique in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. According to the results of two real and frequently used case studies with various models, the NVPNLMM obtained better values of the evaluation criteria, which are used to describe the quality of the estimated outflows and to compare the accuracy of flood routing across models, and the optimal estimated outflows of the NVPNLMM were closer to the observed outflows than those of the other models.
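
    As background for how such models are calibrated, the sketch below routes a hydrograph with the classical fixed-parameter nonlinear Muskingum storage equation S = K[xI + (1 − x)O]^m and estimates (K, x, m) by least squares; the variable-parameter formulation of the article is not reproduced, and the inflow hydrograph and noise level are made up.

```python
import numpy as np
from scipy.optimize import minimize

def route_nonlinear_muskingum(inflow, K, x, m, dt=1.0):
    """Route a hydrograph with the classical nonlinear Muskingum model
    S = K*[x*I + (1 - x)*O]**m (fixed parameters)."""
    outflow = np.zeros_like(inflow, dtype=float)
    outflow[0] = inflow[0]                                  # usual starting assumption
    S = K * (x * inflow[0] + (1 - x) * outflow[0])**m       # initial storage
    for t in range(len(inflow) - 1):
        S = max(S + dt * (inflow[t] - outflow[t]), 1e-9)    # storage continuity
        outflow[t + 1] = ((S / K)**(1.0 / m) - x * inflow[t + 1]) / (1.0 - x)
    return outflow

inflow = np.array([22., 23., 35., 71., 103., 111., 109., 100., 86., 71.,
                   59., 47., 39., 32., 28., 24., 22., 21., 20., 19.])
rng = np.random.default_rng(7)
observed = route_nonlinear_muskingum(inflow, K=0.8, x=0.3, m=1.4) \
           + rng.normal(0.0, 0.5, len(inflow))              # synthetic "observed" outflow

sse = lambda p: np.sum((route_nonlinear_muskingum(inflow, *p) - observed)**2)
res = minimize(sse, x0=[1.0, 0.2, 1.2], method="L-BFGS-B",
               bounds=[(0.01, 10.0), (0.01, 0.49), (1.0, 3.0)])
print("estimated K, x, m:", res.x)                          # should be near 0.8, 0.3, 1.4
```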

  11. Iterative Pilot-Layer Aided Channel Estimation with Emphasis on Interleave-Division Multiple Access Systems

    Directory of Open Access Journals (Sweden)

    Schoeneich Hendrik

    2006-01-01

    Full Text Available Channel estimation schemes suitable for interleave-division multiple access (IDMA) systems are presented. Training and data are superimposed. Training-based and semiblind linear channel estimators are derived, and their performance is discussed and compared. Monte Carlo simulation results are presented showing that the derived channel estimators, in conjunction with a superimposed pilot sequence and chip-by-chip processing, are able to track fast-fading frequency-selective channels. As opposed to conventional channel estimation techniques, the BER performance even improves with increasing Doppler spread for typical system parameters. An error performance close to the case of perfect channel knowledge can be achieved with high power efficiency.

  12. Estimating the parameters of a generalized lambda distribution

    International Nuclear Information System (INIS)

    Fournier, B.; Rupin, N.; Najjar, D.; Iost, A.; Rupin, N.; Bigerelle, M.; Wilcox, R.; Fournier, B.

    2007-01-01

    The method of moments is a popular technique for estimating the parameters of a generalized lambda distribution (GLD), but published results suggest that the percentile method gives superior results. However, the percentile method cannot be implemented in an automatic fashion, and automatic methods, like the starship method, can lead to prohibitive execution time with large sample sizes. A new estimation method is proposed that is automatic (it does not require the use of special tables or graphs) and reduces the computational time. Based partly on the usual percentile method, this new method also requires choosing which quantile u to use when fitting a GLD to data. The choice for u is studied and it is found that the best choice depends on the final goal of the modeling process. The sampling distribution of the new estimator is studied and compared to the sampling distributions of estimators that have been proposed previously. Naturally, all estimators are biased, and here it is found that the bias becomes negligible with sample sizes n ≥ 2 × 10^3. The 0.025 and 0.975 quantiles of the sampling distribution are investigated, and the difference between these quantiles is found to decrease proportionally to 1/√n. The same results hold for the moment and percentile estimates. Finally, the influence of the sample size is studied when a normal distribution is modeled by a GLD. Both bounded and unbounded GLDs are used, and the bounded GLD turns out to be the most accurate. Indeed, it is shown that, up to n = 10^6, bounded GLD modeling cannot be rejected by the usual goodness-of-fit tests. (authors)
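
    To make the fitting idea concrete, the sketch below fits the four GLD parameters (Ramberg-Schmeiser parameterization, Q(u) = λ1 + (u^λ3 − (1 − u)^λ4)/λ2) to a normal sample by least-squares matching of empirical quantiles. This is a simplified stand-in for the percentile-based method of the article, and the starting values and probability grid are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

def gld_quantile(u, lam):
    """Ramberg-Schmeiser GLD quantile function Q(u) = l1 + (u**l3 - (1-u)**l4) / l2."""
    l1, l2, l3, l4 = lam
    return l1 + (u**l3 - (1.0 - u)**l4) / l2

def fit_gld_by_quantiles(sample, probs=np.linspace(0.01, 0.99, 99)):
    """Least-squares quantile matching (a simplified, automatic fitting scheme)."""
    emp_q = np.quantile(sample, probs)
    obj = lambda lam: np.sum((gld_quantile(probs, lam) - emp_q)**2)
    res = minimize(obj, x0=[np.median(sample), 1.0, 0.1, 0.1], method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
    return res.x

rng = np.random.default_rng(2)
sample = rng.normal(size=5000)     # model a normal sample with a GLD, as in the article
print(fit_gld_by_quantiles(sample))
```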

  13. Bayesian estimation of multicomponent relaxation parameters in magnetic resonance fingerprinting.

    Science.gov (United States)

    McGivney, Debra; Deshmane, Anagha; Jiang, Yun; Ma, Dan; Badve, Chaitra; Sloan, Andrew; Gulani, Vikas; Griswold, Mark

    2018-07-01

    To estimate multiple components within a single voxel in magnetic resonance fingerprinting when the number and types of tissues comprising the voxel are not known a priori. Multiple tissue components within a single voxel are potentially separable with magnetic resonance fingerprinting as a result of differences in signal evolutions of each component. The Bayesian framework for inverse problems provides a natural and flexible setting for solving this problem when the tissue composition per voxel is unknown. Assuming that only a few entries from the dictionary contribute to a mixed signal, sparsity-promoting priors can be placed upon the solution. An iterative algorithm is applied to compute the maximum a posteriori estimator of the posterior probability density to determine the magnetic resonance fingerprinting dictionary entries that contribute most significantly to mixed or pure voxels. Simulation results show that the algorithm is robust in finding the component tissues of mixed voxels. Preliminary in vivo data confirm this result, and show good agreement in voxels containing pure tissue. The Bayesian framework and algorithm shown provide accurate solutions for the partial-volume problem in magnetic resonance fingerprinting. The flexibility of the method will allow further study into different priors and hyperpriors that can be applied in the model. Magn Reson Med 80:159-170, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
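
    A generic version of this sparse MAP estimation can be sketched as a nonnegative, L1-regularized least-squares problem over a fingerprint dictionary, solved here with projected ISTA. This is not the authors' specific iterative algorithm, and the dictionary, mixing weights, and regularization strength are made up.

```python
import numpy as np

def sparse_map_mixture(D, y, lam=0.05, n_iter=2000):
    """MAP estimate of nonnegative mixture weights w for a mixed signal y ~ D @ w,
    with an L1 (sparsity-promoting) prior, via projected ISTA."""
    L = np.linalg.norm(D, 2)**2                      # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ w - y)
        w = np.maximum(w - (grad + lam) / L, 0.0)    # gradient step + soft threshold + projection
    return w

# toy dictionary of normalized "signal evolutions" and a 60/40 two-tissue voxel
rng = np.random.default_rng(3)
D = rng.standard_normal((200, 50))
D /= np.linalg.norm(D, axis=0)
w_true = np.zeros(50)
w_true[[7, 21]] = [0.6, 0.4]
y = D @ w_true + 0.01 * rng.standard_normal(200)
w_hat = sparse_map_mixture(D, y)
print(np.nonzero(w_hat > 0.05)[0], np.round(w_hat[w_hat > 0.05], 2))
```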

  14. Fast Estimation Method of Space-Time Two-Dimensional Positioning Parameters Based on Hadamard Product

    Directory of Open Access Journals (Sweden)

    Haiwen Li

    2018-01-01

    Full Text Available The estimation speed of positioning parameters determines the effectiveness of a positioning system. The time of arrival (TOA) and direction of arrival (DOA) parameters can be estimated by the space-time two-dimensional multiple signal classification (2D-MUSIC) algorithm for an array antenna. However, this algorithm needs much time to complete the two-dimensional pseudo-spectral peak search, which makes it difficult to apply in practice. To solve this problem, a fast estimation method of space-time two-dimensional positioning parameters based on the Hadamard product is proposed for an orthogonal frequency division multiplexing (OFDM) system, and the Cramer-Rao bound (CRB) is also presented. Firstly, according to the channel frequency domain response vector of each array, the channel frequency domain estimation vector is constructed using the Hadamard product form containing location information. Then, the autocorrelation matrix of the channel response vector for the extended array element in the frequency domain and the noise subspace are calculated successively. Finally, by combining the closed-form solution and parameter pairing, the fast joint estimation of time delay and arrival direction is accomplished. The theoretical analysis and simulation results show that the proposed algorithm can significantly reduce the computational complexity and guarantee an estimation accuracy that is not only better than the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm and the 2D matrix pencil (MP) algorithm but also close to the 2D-MUSIC algorithm. Moreover, the proposed algorithm also has a certain adaptability to multipath environments and effectively improves the ability of fast acquisition of location parameters.

  15. Optimal Input Design for Aircraft Parameter Estimation using Dynamic Programming Principles

    Science.gov (United States)

    Morelli, Eugene A.; Klein, Vladislav

    1990-01-01

    A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.

  16. Full Body Pose Estimation During Occlusion using Multiple Cameras

    DEFF Research Database (Denmark)

    Fihl, Preben; Cosar, Serhan

    people is a very challenging problem for methods based on pictorials structure as for any other monocular pose estimation method. In this report we present work on a multi-view approach based on pictorial structures that integrate low level information from multiple calibrated cameras to improve the 2D...

  17. Bias correction for the least squares estimator of Weibull shape parameter with complete and censored data

    International Nuclear Information System (INIS)

    Zhang, L.F.; Xie, M.; Tang, L.C.

    2006-01-01

    Estimation of the Weibull shape parameter is important in reliability engineering. However, commonly used methods such as maximum likelihood estimation (MLE) and least squares estimation (LSE) are known to be biased. Bias correction methods for MLE have been studied in the literature. This paper investigates methods for bias correction when model parameters are estimated with LSE based on the probability plot. The Weibull probability plot is very simple and commonly used by practitioners, and hence such a study is useful. The bias of the LS shape parameter estimator for multiply censored data is also examined. It is found that the bias can be modeled as a function of the sample size and the censoring level, and is mainly dependent on the latter. A simple bias function is introduced and bias-correcting formulas are proposed for both complete and censored data. Simulation results are also presented. The proposed bias correction methods are very easy to use and they can typically reduce the bias of the LSE of the shape parameter to less than half a percent.
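
    The probability-plot LSE itself, and a Monte Carlo check of its small-sample bias, can be sketched as follows; the sample size, number of replicates, and true shape value are arbitrary, and the article's specific bias-correcting formula is not reproduced.

```python
import numpy as np

def weibull_shape_lse(times):
    """Least-squares estimate of the Weibull shape parameter from a probability plot
    (complete sample, median-rank plotting positions)."""
    t = np.sort(np.asarray(times, float))
    n = len(t)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)    # median ranks
    x, y = np.log(t), np.log(-np.log(1.0 - F))
    return np.polyfit(x, y, 1)[0]                  # slope of the plot = shape estimate

# Monte Carlo check of the bias for small complete samples (true shape = 2)
rng = np.random.default_rng(4)
n, reps, true_shape = 10, 5000, 2.0
est = [weibull_shape_lse(rng.weibull(true_shape, n)) for _ in range(reps)]
rel_bias = 100.0 * (np.mean(est) - true_shape) / true_shape
print(f"mean LSE = {np.mean(est):.3f}, relative bias = {rel_bias:.1f}%")
# a bias-correcting factor could then be modeled as a function of n and the
# censoring level, as the article proposes
```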

  18. Analytic continuation by duality estimation of the S parameter

    International Nuclear Information System (INIS)

    Ignjatovic, S. R.; Wijewardhana, L. C. R.; Takeuchi, T.

    2000-01-01

    We investigate the reliability of the analytic continuation by duality (ACD) technique in estimating the electroweak S parameter for technicolor theories. The ACD technique, which is an application of finite energy sum rules, relates the S parameter for theories with unknown particle spectra to known OPE coefficients. We identify the sources of error inherent in the technique and evaluate them for several toy models to see if they can be controlled. The evaluation of errors is done analytically and all relevant formulas are provided in appendixes including analytical formulas for approximating the function 1/s with a polynomial in s. The use of analytical formulas protects us from introducing additional errors due to numerical integration. We find that it is very difficult to control the errors even when the momentum dependence of the OPE coefficients is known exactly. In realistic cases in which the momentum dependence of the OPE coefficients is only known perturbatively, it is impossible to obtain a reliable estimate. (c) 2000 The American Physical Society

  19. A robust methodology for modal parameters estimation applied to SHM

    Science.gov (United States)

    Cardoso, Rharã; Cury, Alexandre; Barbosa, Flávio

    2017-10-01

    The subject of structural health monitoring has been drawing more and more attention over the last years. Many vibration-based techniques aiming at detecting small structural changes or even damage have been developed or enhanced through successive research efforts. Lately, several studies have focused on the use of raw dynamic data to assess information about structural condition. Despite this trend and much skepticism, many methods still rely on the use of modal parameters as fundamental data for damage detection. Therefore, it is of utmost importance that modal identification procedures are performed with a sufficient level of precision and automation. To fulfill these requirements, this paper presents a novel automated time-domain methodology to identify modal parameters based on a two-step clustering analysis. The first step consists in clustering mode estimates from parametric models of different orders, usually presented in stabilization diagrams. In an automated manner, the first clustering analysis indicates which estimates correspond to physical modes. To circumvent the detection of spurious modes or the loss of physical ones, a second clustering step is then performed. The second step consists in the data mining of information gathered from the first step. To attest to the robustness and efficiency of the proposed methodology, numerically generated signals as well as experimental data obtained from a simply supported beam tested in the laboratory and from a railway bridge are utilized. The results appeared to be more robust and accurate compared to those obtained from methods based on a one-step clustering analysis.

  20. Parameter estimation in space systems using recurrent neural networks

    Science.gov (United States)

    Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.

    1991-01-01

    The identification of time-varying parameters encountered in space systems is addressed using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network, during two configurations corresponding to the early and late stages of the spacecraft on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results, though further research is warranted for training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.

  1. Parameter estimation and hypothesis testing in linear models

    CERN Document Server

    Koch, Karl-Rudolf

    1999-01-01

    The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have been also added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...

  2. Analysis of Low Frequency Oscillation Using the Multi-Interval Parameter Estimation Method on a Rolling Blackout in the KEPCO System

    Directory of Open Access Journals (Sweden)

    Kwan-Shik Shim

    2017-04-01

    Full Text Available This paper describes a multiple time interval (“multi-interval”) parameter estimation method. The multi-interval parameter estimation method estimates a parameter from a new multi-interval prediction error polynomial that can simultaneously consider multiple time intervals. The root of the multi-interval prediction error polynomial includes the effect of each time interval, and the important mode can be estimated by solving one polynomial for multiple time intervals or signals. The algorithm of the multi-interval parameter estimation method proposed in this paper is applied to a test function and to data measured from a PMU (phasor measurement unit) installed in the KEPCO (Korea Electric Power Corporation) system. The results confirm that the proposed multi-interval parameter estimation method accurately and reliably estimates important parameters.

  3. A novel velocity estimator using multiple frequency carriers

    DEFF Research Database (Denmark)

    Zhang, Zhuo; Jakobsson, Andreas; Nikolov, Svetoslav

    2004-01-01

    . In this paper, we propose a nonlinear least squares (NLS) estimator. Typically, NLS estimators are computationally cumbersome, in general requiring the minimization of a multidimensional and often multimodal cost function. Here, by noting that the unknown velocity will result in a common known frequency......Most modern ultrasound scanners use the so-called pulsed-wave Doppler technique to estimate the blood velocities. Among the narrowband-based methods, the autocorrelation estimator and the Fourier-based method are the most commonly used approaches. Due to the low level of the blood echo, the signal......-to-noise ratio is low, and some averaging in depth is applied to improve the estimate. Further, due to velocity gradients in space and time, the spectrum may get smeared. An alternative approach is to use a pulse with multiple frequency carriers, and do some form of averaging in the frequency domain. However...

  4. Nonlinear Parameter Estimation in Microbiological Degradation Systems and Statistic Test for Common Estimation

    DEFF Research Database (Denmark)

    Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik

    1995-01-01

    Three identical microbiological experiments were carried out and analysed in order to examine the variability of the parameter estimates. The microbiological system consisted of a substrate (toluene) and a biomass (pure culture) mixed together in an aquifer medium. The degradation of the substrate...

  5. Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions

    Science.gov (United States)

    Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.

    2017-01-01

    Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.

  6. Estimating negative binomial parameters from occurrence data with detection times.

    Science.gov (United States)

    Hwang, Wen-Han; Huggins, Richard; Stoklosa, Jakub

    2016-11-01

    The negative binomial distribution is a common model for the analysis of count data in biology and ecology. In many applications, we may not observe the complete frequency count in a quadrat but only that a species occurred in the quadrat. If only occurrence data are available, then the two parameters of the negative binomial distribution, the aggregation index and the mean, are not identifiable. This can be overcome by data augmentation or through modeling the dependence between quadrat occupancies. Here, we propose to record the (first) detection time while collecting occurrence data in a quadrat. We show that under what we call proportionate sampling, where the time to survey a region is proportional to the area of the region, both negative binomial parameters are estimable. When the mean parameter is larger than two, our proposed approach is more efficient than the data augmentation method developed by Solow and Smith (Am. Nat. 176, 96-98), and in general is cheaper to conduct. We also investigate the effect of misidentification when collecting negative binomially distributed data, and conclude that, in general, the effect can be simply adjusted for provided that the mean and variance of the misidentification probabilities are known. The results are demonstrated in a simulation study and illustrated in several real examples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Estimation Parameters And Modelling Zero Inflated Negative Binomial

    Directory of Open Access Journals (Sweden)

    Cindy Cahyaning Astuti

    2016-11-01

    Full Text Available Regression analysis is used to determine the relationship between one or several response variables (Y) and one or several predictor variables (X). A regression model between predictor variables and a Poisson-distributed response variable is called a Poisson regression model. Since Poisson regression requires equality of the mean and the variance, it is not appropriate for overdispersed data (variance higher than the mean). The Poisson regression model is commonly used to analyze count data. In count data, one often encounters observations with zero values, with a large proportion of zeros in the response variable (zero inflation). Poisson regression can be used to analyze count data, but it cannot handle the problem of excess zero values in the response variable. An alternative model that is more suitable for overdispersed data and can address excess zeros in the response variable is the zero-inflated negative binomial (ZINB) model. In this research, ZINB is applied to the case of Tetanus Neonatorum in East Java. The aim of this research is to examine the likelihood function, to form an algorithm to estimate the parameters of ZINB, and to apply the ZINB model to the case of Tetanus Neonatorum in East Java. The maximum likelihood estimation (MLE) method is used to estimate the parameters of ZINB, and the likelihood function is maximized using the expectation-maximization (EM) algorithm. Test results of the ZINB regression model show that the predictor variables with significant partial effects in the negative binomial component are the percentage of pregnant women's visits and the percentage of deliveries assisted by maternal health personnel, while the predictor variable with a significant partial effect in the zero-inflation component is the percentage of neonatal visits.
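
    In practice a ZINB model of this kind can be fitted with existing software; the sketch below uses the zero-inflated negative binomial implementation in statsmodels (maximized numerically rather than by the EM algorithm described here) on simulated data, and all covariates, coefficients, and settings are made up. Convergence of such fits can be sensitive to starting values.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

# simulate count data with overdispersion and excess zeros
rng = np.random.default_rng(5)
n = 2000
x = rng.uniform(0, 1, n)
X = sm.add_constant(x)
mu = np.exp(0.5 + 1.2 * x)                          # NB mean depends on x
p_zero = 1.0 / (1.0 + np.exp(-(-1.0 + 2.0 * x)))    # zero-inflation probability (logit link)
alpha = 0.8                                         # NB dispersion
nb_counts = rng.negative_binomial(1.0 / alpha, 1.0 / (1.0 + alpha * mu))
y = np.where(rng.uniform(size=n) < p_zero, 0, nb_counts)

# fit the ZINB model by numerical maximum likelihood
model = ZeroInflatedNegativeBinomialP(y, X, exog_infl=X, p=2)
result = model.fit(method="bfgs", maxiter=500, disp=False)
print(result.params)
```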

  8. Automated modal parameter estimation using correlation analysis and bootstrap sampling

    Science.gov (United States)

    Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.

    2018-02-01

    The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences by the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to

  9. Learn-as-you-go acceleration of cosmological parameter estimates

    International Nuclear Information System (INIS)

    Aslanyan, Grigor; Easther, Richard; Price, Layne C.

    2015-01-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly
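
    The learn-as-you-go idea (answer from an emulator when it is judged reliable, otherwise fall back to the exact calculation and add the result to the training set) can be sketched generically as below. The nearest-neighbour emulator and its scatter-based error check are simple stand-ins for the emulation and error model of the article; the tolerance and the toy likelihood are arbitrary.

```python
import numpy as np

class LearnAsYouGoLikelihood:
    """Cache exact (slow) log-likelihood evaluations; answer new queries from a
    k-nearest-neighbour average when the local scatter of cached values is small,
    otherwise call the exact function and grow the training set."""

    def __init__(self, exact_loglike, k=5, tol=0.05):
        self.exact = exact_loglike
        self.k, self.tol = k, tol
        self.X, self.y = [], []

    def __call__(self, theta):
        theta = np.atleast_1d(np.asarray(theta, float))
        if len(self.X) >= self.k:
            X = np.array(self.X)
            d = np.linalg.norm(X - theta, axis=1)
            idx = np.argsort(d)[:self.k]
            vals = np.array(self.y)[idx]
            if vals.std() < self.tol:                # emulator judged reliable
                w = 1.0 / (d[idx] + 1e-12)
                return float(np.sum(w * vals) / w.sum())
        val = self.exact(theta)                      # slow, exact evaluation
        self.X.append(theta)
        self.y.append(val)
        return val

# usage with a toy "slow" Gaussian log-likelihood queried at random points
slow_loglike = lambda t: -0.5 * np.sum((t - 1.0)**2 / 0.04)
loglike = LearnAsYouGoLikelihood(slow_loglike)
rng = np.random.default_rng(6)
for _ in range(200):
    loglike(1.0 + 0.2 * rng.standard_normal(2))
print("exact evaluations used:", len(loglike.X), "out of 200 calls")
```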

  10. Improving filtering and prediction of spatially extended turbulent systems with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering and predictive skill for turbulent signals is often limited by the lack of information about the true dynamics of the system and by our inability to resolve the assumed dynamics with sufficiently high resolution using the current computing power. The standard approach is to use a simple yet rich family of constant parameters to account for model errors through parameterization. This approach can have significant skill by fitting the parameters to some statistical feature of the true signal; however in the context of real-time prediction, such a strategy performs poorly when intermittent transitions to instability occur. Alternatively, we need a set of dynamic parameters. One strategy for estimating parameters on the fly is a stochastic parameter estimation through partial observations of the true signal. In this paper, we extend our newly developed stochastic parameter estimation strategy, the Stochastic Parameterization Extended Kalman Filter (SPEKF), to filtering sparsely observed spatially extended turbulent systems which exhibit abrupt stability transition from time to time despite a stable average behavior. For our primary numerical example, we consider a turbulent system of externally forced barotropic Rossby waves with instability introduced through intermittent negative damping. We find high filtering skill of SPEKF applied to this toy model even in the case of very sparse observations (with only 15 out of the 105 grid points observed) and with unspecified external forcing and damping. Additive and multiplicative bias corrections are used to learn the unknown features of the true dynamics from observations. We also present a comprehensive study of predictive skill in the one-mode context including the robustness toward variation of stochastic parameters, imperfect initial conditions and finite ensemble effect. Furthermore, the proposed stochastic parameter estimation scheme applied to the same spatially extended Rossby wave system demonstrates

  11. Centrifuge modeling of one-step outflow tests for unsaturated parameter estimations

    Directory of Open Access Journals (Sweden)

    H. Nakajima

    2006-01-01

    Full Text Available Centrifuge modeling of one-step outflow tests was carried out using a 2-m radius geotechnical centrifuge, and the cumulative outflow and transient pore water pressure were measured during the tests at multiple gravity levels. Based on the scaling laws of centrifuge modeling, the measurements generally showed reasonable agreement with prototype data calculated from forward simulations with input parameters determined from standard laboratory tests. The parameter optimizations were examined for three different combinations of input data sets using the test measurements. Within the gravity levels examined in this study, up to 40g, the optimized unsaturated parameters compared well when accurate pore water pressure measurements were included along with cumulative outflow as input data. With its capability to implement a variety of instrumentation under well-controlled initial and boundary conditions and to shorten testing time, the centrifuge modeling technique is attractive as an alternative experimental method that provides more freedom to set inverse problem conditions for parameter estimation.

  12. Centrifuge modeling of one-step outflow tests for unsaturated parameter estimations

    Science.gov (United States)

    Nakajima, H.; Stadler, A. T.

    2006-10-01

    Centrifuge modeling of one-step outflow tests was carried out using a 2-m radius geotechnical centrifuge, and the cumulative outflow and transient pore water pressure were measured during the tests at multiple gravity levels. Based on the scaling laws of centrifuge modeling, the measurements generally showed reasonable agreement with prototype data calculated from forward simulations with input parameters determined from standard laboratory tests. The parameter optimizations were examined for three different combinations of input data sets using the test measurements. Within the gravity levels examined in this study, up to 40g, the optimized unsaturated parameters compared well when accurate pore water pressure measurements were included along with cumulative outflow as input data. With its capability to implement a variety of instrumentation under well-controlled initial and boundary conditions and to shorten testing time, the centrifuge modeling technique is attractive as an alternative experimental method that provides more freedom to set inverse problem conditions for parameter estimation.

  13. Estimation of the Alpha Factor Parameters Using the ICDE Database

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dae Il; Hwang, M. J.; Han, S. H

    2007-04-15

    Detailed common cause failure (CCF) analyses generally need data on CCF events from other nuclear power plants because CCF events occur rarely. KAERI has participated in the International Common Cause Failure Data Exchange (ICDE) project to obtain data on CCF events. The operating office of the ICDE project sent the CCF event data for EDGs to KAERI in December 2006. As a pilot study, we performed a detailed CCF analysis of EDGs for Yonggwang Units 3 and 4 and Ulchin Units 3 and 4 using the ICDE database. There are two onsite EDGs for each NPP. When offsite power and the two onsite EDGs are not available, one alternate AC diesel generator (hereafter AAC) is provided. The two onsite EDGs and the AAC are manufactured by the same company, but they are designed differently. We estimated the alpha factors and the CCF probabilities for the case where the three EDGs were assumed to be identically designed, and for the case where they were assumed not to be identically designed. For the case where the three EDGs were assumed to be identically designed, the double CCF probabilities of Yonggwang Units 3/4 and Ulchin Units 3/4 for 'fails to start' were estimated as 2.20E-4 and 2.10E-4, respectively, and the triple CCF probabilities as 2.39E-4 and 2.42E-4, respectively. As neither NPP has experience with 'fails to run' events, Yonggwang Units 3/4 and Ulchin Units 3/4 have the same CCF probability; the estimated double and triple CCF probabilities for 'fails to run' are 4.21E-4 and 4.61E-4, respectively. Quantification results show that the system unavailability for the case where the three EDGs are identical is higher than that for the case where the three EDGs are different; the estimated system unavailability of the former case was higher by 3.4% compared with that of the latter. As a future study, computerization work for the estimation of the CCF parameters will be performed.
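
    The basic alpha-factor arithmetic behind such an analysis is simple and is sketched below with hypothetical event counts; the group size, counts, total failure probability, and the non-staggered-testing form of the formula are assumptions rather than values from the study.

```python
from math import comb

def alpha_factors(event_counts):
    """Point estimates of alpha factors: alpha_k = n_k / sum_j n_j, where n_k is the
    number of CCF events in which exactly k components failed."""
    total = sum(event_counts.values())
    return {k: n / total for k, n in event_counts.items()}

def ccf_probabilities(alphas, m, q_t):
    """Basic-event CCF probabilities Q_k for a common cause group of size m, using a
    commonly quoted non-staggered-testing alpha-factor formula:
    Q_k = k / C(m-1, k-1) * alpha_k / alpha_t * Q_t."""
    alpha_t = sum(k * a for k, a in alphas.items())
    return {k: k / comb(m - 1, k - 1) * alphas[k] / alpha_t * q_t for k in alphas}

# hypothetical EDG event counts: independent failures, double CCFs, triple CCFs
counts = {1: 180, 2: 6, 3: 2}
alphas = alpha_factors(counts)
print(alphas)
print(ccf_probabilities(alphas, m=3, q_t=1.0e-2))   # Q_t: assumed total failure probability
```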

  14. Equations for estimating synthetic unit-hydrograph parameter values for small watersheds in Lake County, Illinois

    Science.gov (United States)

    Melching, C.S.; Marquardt, J.S.

    1997-01-01

    Design hydrographs computed from design storms, simple models of abstractions (interception, depression storage, and infiltration), and synthetic unit hydrographs provide vital information for stormwater, flood-plain, and water-resources management throughout the United States. Rainfall and runoff data for small watersheds in Lake County collected between 1990 and 1995 were studied to develop equations for estimating synthetic unit-hydrograph parameters on the basis of watershed and storm characteristics. The synthetic unit-hydrograph parameters of interest were the time of concentration (TC) and watershed-storage coefficient (R) for the Clark unit-hydrograph method, the unit-graph lag (UL) for the Soil Conservation Service (now known as the Natural Resources Conservation Service) dimensionless unit hydrograph, and the hydrograph-time lag (TL) for the linear-reservoir method for unit-hydrograph estimation. Data from 66 storms with effective-precipitation depths greater than 0.4 inches on 9 small watersheds (areas between 0.06 and 37 square miles (mi2)) were utilized to develop the estimation equations, and data from 11 storms on 8 of these watersheds were utilized to verify (test) the estimation equations. The synthetic unit-hydrograph parameters were determined by calibration using the U.S. Army Corps of Engineers Flood Hydrograph Package HEC-1 (TC, R, and UL) or by manual analysis of the rainfall and runoff data (TL). The relation between synthetic unit-hydrograph parameters and watershed and storm characteristics was determined by multiple linear regression of the logarithms of the parameters and characteristics. Separate sets of equations were developed with watershed area and main channel length as the starting parameters. Percentage of impervious cover, main channel slope, and depth of effective precipitation also were identified as important characteristics for estimation of synthetic unit-hydrograph parameters. The estimation equations utilizing area
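
    The regression step described here, multiple linear regression of the logarithms of a unit-hydrograph parameter on the logarithms of watershed characteristics, can be sketched as follows. The calibration table, characteristics, and coefficients are entirely hypothetical and are not values from the Lake County study.

```python
import numpy as np

# hypothetical calibration table: area (mi^2), impervious cover (%),
# main channel slope (ft/mi), and calibrated time of concentration TC (hours)
area   = np.array([0.5, 1.2, 3.4, 8.0, 15.0, 37.0])
imperv = np.array([45., 30., 20., 12., 8., 5.])
slope  = np.array([40., 28., 22., 15., 10., 7.])
tc     = np.array([0.9, 1.6, 2.8, 4.9, 7.2, 11.5])

# ordinary least squares on logarithms: log TC = b0 + b1*log A + b2*log IMP + b3*log S
X = np.column_stack([np.ones_like(area), np.log10(area), np.log10(imperv), np.log10(slope)])
b, *_ = np.linalg.lstsq(X, np.log10(tc), rcond=None)
print("regression coefficients:", b)

def predict_tc(a, imp, s):
    """Apply the fitted log-linear estimation equation to a new watershed."""
    return 10.0 ** (b[0] + b[1] * np.log10(a) + b[2] * np.log10(imp) + b[3] * np.log10(s))

print("predicted TC for a 5 mi^2, 15% impervious, 18 ft/mi watershed:",
      round(float(predict_tc(5.0, 15.0, 18.0)), 2), "hours")
```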

  15. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    Science.gov (United States)

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  16. Estimation of genetic parameters for reproductive traits in Shall sheep.

    Science.gov (United States)

    Amou Posht-e-Masari, Hesam; Shadparvar, Abdol Ahad; Ghavi Hossein-Zadeh, Navid; Hadi Tavatori, Mohammad Hossein

    2013-06-01

    The objective of this study was to estimate genetic parameters for reproductive traits in Shall sheep. Data included 1,316 records on the reproductive performance of 395 Shall ewes from 41 sires and 136 dams, collected from 2001 to 2007 at the Shall breeding station in Qazvin province in the northwest of Iran. The studied traits were litter size at birth (LSB), litter size at weaning (LSW), litter mean weight per lamb born (LMWLB), litter mean weight per lamb weaned (LMWLW), total litter weight at birth (TLWB), and total litter weight at weaning (TLWW). Tests of significance for the fixed effects included in the statistical model were performed using the general linear model procedure of SAS. The effects of lambing year and ewe age at lambing were significant (P < 0.05).

  17. Estimating Phenomenological Parameters in Multi-Assets Markets

    Science.gov (United States)

    Raffaelli, Giacomo; Marsili, Matteo

    Financial correlations exhibit a non-trivial dynamic behavior. This is reproduced by a simple phenomenological model of a multi-asset financial market, which takes into account the impact of portfolio investment on price dynamics. This captures the fact that correlations determine the optimal portfolio but are affected by investment based on it. Such a feedback on correlations gives rise to an instability when the volume of investment exceeds a critical value. Close to the critical point the model exhibits dynamical correlations very similar to those observed in real markets. We discuss how the model's parameters can be estimated from real market data with a maximum likelihood principle. This confirms the main conclusion that real markets operate close to a dynamically unstable point.

  18. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  19. Cosmological Parameter Estimation with Large Scale Structure Observations

    CERN Document Server

    Di Dio, Enea; Durrer, Ruth; Lesgourgues, Julien

    2014-01-01

    We estimate the sensitivity of future galaxy surveys to cosmological parameters, using the redshift dependent angular power spectra of galaxy number counts, $C_\ell(z_1,z_2)$, calculated with all relativistic corrections at first order in perturbation theory. We pay special attention to the redshift dependence of the non-linearity scale and present Fisher matrix forecasts for Euclid-like and DES-like galaxy surveys. We compare the standard $P(k)$ analysis with the new $C_\ell(z_1,z_2)$ method. We show that for surveys with photometric redshifts the new analysis performs significantly better than the $P(k)$ analysis. For spectroscopic redshifts, however, the large number of redshift bins which would be needed to fully profit from the redshift information is severely limited by shot noise. We also identify surveys which can measure the lensing contribution and we study the monopole, $C_0(z_1,z_2)$.
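
    A Fisher matrix forecast of the kind mentioned in this abstract can be sketched in a few lines. The example below uses a toy two-parameter power-law model for binned angular-power-spectrum-like observables with an assumed Gaussian covariance; the model, fiducial values and covariance are illustrative assumptions, not the survey configurations studied in the paper.

        import numpy as np

        # Toy observable: C_l = A * (l / 100) ** n over a range of multipoles.
        ells = np.arange(10, 500)
        A_fid, n_fid = 1.0, -1.2                 # assumed fiducial parameter values

        def model(A, n):
            return A * (ells / 100.0) ** n

        # Assumed cosmic-variance-like diagonal covariance of the binned observables.
        cov_diag = 2.0 * model(A_fid, n_fid) ** 2 / (2 * ells + 1)

        def derivative(index, eps=1e-4):
            p_hi = np.array([A_fid, n_fid])
            p_lo = np.array([A_fid, n_fid])
            p_hi[index] += eps
            p_lo[index] -= eps
            return (model(*p_hi) - model(*p_lo)) / (2 * eps)

        derivs = [derivative(0), derivative(1)]
        fisher = np.array([[np.sum(derivs[i] * derivs[j] / cov_diag)
                            for j in range(2)] for i in range(2)])

        # Forecast 1-sigma marginalised errors: square roots of the diagonal
        # of the inverse Fisher matrix.
        print("sigma(A), sigma(n):", np.sqrt(np.diag(np.linalg.inv(fisher))))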

  20. Review of methods for level density estimation from resonance parameters

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1983-01-01

    A number of methods are available for statistical analysis of resonance parameter sets, i.e. for estimation of level densities and average widths with account of missing levels. The main categories are (i) methods based on theories of level spacings (orthogonal-ensemble theory, Dyson-Mehta statistics), (ii) methods based on comparison with simulated cross section curves (Monte Carlo simulation, Garrison's autocorrelation method), (iii) methods exploiting the observed neutron width distribution by means of Bayesian or more approximate procedures such as maximum-likelihood, least-squares or moment methods, with various recipes for the treatment of detection thresholds and resolution effects. The present review will concentrate on (iii) with the aim of clarifying the basic mathematical concepts and the relationship between the various techniques. Recent theoretical progress in the treatment of resolution effects, detectability thresholds and p-wave admixture is described. (Auth.)

  1. MANOVA, LDA, and FA criteria in clusters parameter estimation

    Directory of Open Access Journals (Sweden)

    Stan Lipovetsky

    2015-12-01

    Multivariate analysis of variance (MANOVA) and linear discriminant analysis (LDA) apply such well-known criteria as the Wilks’ lambda, Lawley–Hotelling trace, and Pillai’s trace test for checking the quality of the solutions. The current paper suggests using these criteria for building objectives for finding cluster parameters, because optimizing such objectives corresponds to the best distinguishing between the clusters. The relation to Joreskog’s classification for factor analysis (FA) techniques is also considered. The problem can be reduced to a multinomial parameterization, and a solution can be found in a nonlinear optimization procedure which yields the estimates for the cluster centers and sizes. This approach to clustering works with data compressed into a covariance matrix, so it can be especially useful for big data.
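
    The criteria named here are simple functions of the within-group and between-group scatter matrices. The sketch below computes Wilks' lambda, the Lawley-Hotelling trace and Pillai's trace for a labelled dataset; it is a generic illustration of the criteria themselves, not the author's cluster-fitting optimization, and the synthetic two-cluster data are an assumption.

        import numpy as np

        def manova_criteria(X, labels):
            """Return Wilks' lambda, Lawley-Hotelling trace and Pillai's trace."""
            X = np.asarray(X, float)
            grand_mean = X.mean(axis=0)
            W = np.zeros((X.shape[1], X.shape[1]))   # within-group scatter
            B = np.zeros_like(W)                     # between-group scatter
            for g in np.unique(labels):
                Xg = X[labels == g]
                mg = Xg.mean(axis=0)
                W += (Xg - mg).T @ (Xg - mg)
                B += len(Xg) * np.outer(mg - grand_mean, mg - grand_mean)
            wilks = np.linalg.det(W) / np.linalg.det(W + B)
            lawley_hotelling = np.trace(np.linalg.solve(W, B))
            pillai = np.trace(np.linalg.solve(W + B, B))
            return wilks, lawley_hotelling, pillai

        # Two well-separated synthetic clusters in two dimensions.
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
        labels = np.repeat([0, 1], 50)
        print(manova_criteria(X, labels))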

  2. Transient analysis of intercalation electrodes for parameter estimation

    Science.gov (United States)

    Devan, Sheba

    An essential part of integrating batteries as power sources in any application, be it a large scale automotive application or a small scale portable application, is an efficient Battery Management System (BMS). The combination of a battery with the microprocessor based BMS (called "smart battery") helps prolong the life of the battery by operating in the optimal regime and provides accurate information regarding the battery to the end user. The main purposes of BMS are cell protection, monitoring and control, and communication between different components. These purposes are fulfilled by tracking the change in the parameters of the intercalation electrodes in the batteries. Consequently, the functions of the BMS should be prompt, which requires the methodology of extracting the parameters to be efficient in time. The traditional transient techniques applied so far may not be suitable due to reasons such as the inability to apply these techniques when the battery is under operation, long experimental time, etc. The primary aim of this research work is to design a fast, accurate and reliable technique that can be used to extract parameter values of the intercalation electrodes. A methodology based on analysis of the short time response to a sinusoidal input perturbation, in the time domain is demonstrated using a porous electrode model for an intercalation electrode. It is shown that the parameters associated with the interfacial processes occurring in the electrode can be determined rapidly, within a few milliseconds, by measuring the response in the transient region. The short time analysis in the time domain is then extended to a single particle model that involves bulk diffusion in the solid phase in addition to interfacial processes. A systematic procedure for sequential parameter estimation using sensitivity analysis is described. Further, the short time response and the input perturbation are transformed into the frequency domain using Fast Fourier Transform

  3. Smoothing of, and parameter estimation from, noisy biophysical recordings.

    Directory of Open Access Journals (Sweden)

    Quentin J M Huys

    2009-05-01

    Biophysically detailed models of single cells are difficult to fit to real data. Recent advances in imaging techniques allow simultaneous access to various intracellular variables, and these data can be used to significantly facilitate the modelling task. These data, however, are noisy, and current approaches to building biophysically detailed models are not designed to deal with this. We extend previous techniques to take the noisy nature of the measurements into account. Sequential Monte Carlo ("particle filtering") methods, in combination with a detailed biophysical description of a cell, are used for principled, model-based smoothing of noisy recording data. We also provide an alternative formulation of smoothing where the neural nonlinearities are estimated in a non-parametric manner. Biophysically important parameters of detailed models (such as channel densities, intercompartmental conductances, input resistances, and observation noise) are inferred automatically from noisy data via expectation-maximization. Overall, we find that model-based smoothing is a powerful, robust technique for smoothing of noisy biophysical data and for inference of biophysical parameters in the face of recording noise.
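
    As a minimal illustration of the sequential Monte Carlo machinery referred to above, the sketch below runs a bootstrap particle filter on a one-dimensional toy state-space model with Gaussian noise. The dynamics, noise levels and particle count are arbitrary assumptions chosen for brevity, not the detailed biophysical cell model used in the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy model: x_t = 0.9 x_{t-1} + 0.1 sin(x_{t-1}) + w_t,  y_t = x_t + v_t.
        T, n_particles = 100, 500
        q_std, r_std = 0.3, 0.5

        x_true, y = np.zeros(T), np.zeros(T)
        for t in range(1, T):
            x_true[t] = 0.9 * x_true[t - 1] + 0.1 * np.sin(x_true[t - 1]) + rng.normal(0, q_std)
            y[t] = x_true[t] + rng.normal(0, r_std)

        particles = rng.normal(0, 1, n_particles)
        estimates = np.zeros(T)
        for t in range(1, T):
            # Propagate particles through the dynamics (bootstrap proposal).
            particles = 0.9 * particles + 0.1 * np.sin(particles) + rng.normal(0, q_std, n_particles)
            # Weight by the observation likelihood, estimate, then resample.
            weights = np.exp(-0.5 * ((y[t] - particles) / r_std) ** 2)
            weights /= weights.sum()
            estimates[t] = np.sum(weights * particles)
            particles = rng.choice(particles, size=n_particles, replace=True, p=weights)

        print("RMSE of the filtered state estimate:", np.sqrt(np.mean((estimates - x_true) ** 2)))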

  4. Project Parameter Estimation on the Basis of an Erp Database

    Directory of Open Access Journals (Sweden)

    Relich Marcin

    2013-12-01

    Nowadays, more and more enterprises are using Enterprise Resource Planning (ERP) systems, which can also be used to plan and control the development of new products. In order to obtain a project schedule, certain parameters (e.g. duration) have to be specified in an ERP system. These parameters can be defined by the employees according to their knowledge, or can be estimated on the basis of data from previously completed projects. This paper investigates using an ERP database to identify those variables that have a significant influence on the duration of a project phase. In the paper, a model of knowledge discovery from an ERP database is proposed. The presented method contains four stages of the knowledge discovery process: data selection, data transformation, data mining and interpretation of patterns in the context of new product development. Among data mining techniques, a fuzzy neural system is chosen to seek relationships on the basis of data from completed projects stored in an ERP system.

  5. Bayesian parameter estimation for stochastic models of biological cell migration

    Science.gov (United States)

    Dieterich, Peter; Preuss, Roland

    2013-08-01

    Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors or the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically stochastic models are applied where parameters are extracted by fitting models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure directly relying on the positions of the cell's trajectory and the covariance matrix of the positions. It is shown that the covariance is identical with the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data showing a reliable parameter estimation from single cell paths.
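
    For the simplest of the models mentioned (Brownian motion with drift), the parameters can be read off directly from the increments of a single trajectory. The sketch below simulates such a path and recovers the drift and diffusion coefficient from the mean and variance of the displacement increments; it is a generic illustration, not the authors' procedure, and the parameter values are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        # Simulate a 2-D Brownian path with drift: dx = v dt + sqrt(2 D dt) * noise.
        dt, n_steps = 0.5, 2000
        v_true = np.array([0.2, -0.1])        # drift
        D_true = 0.05                         # diffusion coefficient

        steps = v_true * dt + np.sqrt(2 * D_true * dt) * rng.normal(size=(n_steps, 2))
        path = np.cumsum(steps, axis=0)

        # Drift from the mean displacement per unit time; diffusion coefficient from
        # the variance of the increments (variance = 2 D dt per coordinate).
        increments = np.diff(path, axis=0)
        v_hat = increments.mean(axis=0) / dt
        D_hat = increments.var(axis=0, ddof=1).mean() / (2 * dt)
        print("drift estimate:", v_hat, " diffusion estimate:", D_hat)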

  6. Estimation of fracture parameters using elastic full-waveform inversion

    KAUST Repository

    Zhang, Zhendong

    2017-08-17

    Current methodologies to characterize fractures at the reservoir scale have serious limitations in spatial resolution and suffer from uncertainties in the inverted parameters. Here, we propose to estimate the spatial distribution and physical properties of fractures using full-waveform inversion (FWI) of multicomponent surface seismic data. An effective orthorhombic medium with five clusters of vertical fractures distributed in a checkerboard fashion is used to test the algorithm. A shape regularization term is added to the objective function to improve the estimation of the fracture azimuth, which is otherwise poorly constrained. The cracks are assumed to be penny-shaped to reduce the nonuniqueness in the inverted fracture weaknesses and achieve a faster convergence. To better understand the inversion results, we analyze the radiation patterns induced by the perturbations in the fracture weaknesses and orientation. Due to the high-resolution potential of elastic FWI, the developed algorithm can recover the spatial fracture distribution and identify localized “sweet spots” of intense fracturing. However, the fracture azimuth can be resolved only using long-offset data.

  7. Estimation of Multiple Pitches in Stereophonic Mixtures using a Codebook-based Approach

    DEFF Research Database (Denmark)

    Hansen, Martin Weiss; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2017-01-01

    In this paper, a method for multi-pitch estimation of stereophonic mixtures of multiple harmonic signals is presented. The method is based on a signal model which takes the amplitude and delay panning parameters of the sources in a stereophonic mixture into account. Furthermore, the method is based...... on the extended invariance principle (EXIP), and a codebook of realistic amplitude vectors. For each fundamental frequency candidate in each of the sources, the amplitude estimates are mapped to entries in the codebook, and the pitch and model order are estimated jointly. The performance of the proposed method...

  8. Estimation of genetic parameters for reproductive traits in alpacas.

    Science.gov (United States)

    Cruz, A; Cervantes, I; Burgos, A; Morante, R; Gutiérrez, J P

    2015-12-01

    One of the main deficiencies affecting animal breeding programs in Peruvian alpacas is the low reproductive performance, leading to a low number of animals available to select from and strongly decreasing the selection intensity. Some reproductive traits could be improved by artificial selection, but very little information about genetic parameters exists for these traits in this species. The aim of this study was to estimate genetic parameters for six reproductive traits in alpacas of both the Suri (SU) and Huacaya (HU) ecotypes, as well as their genetic relationship with fiber and morphological traits. A dataset from the Pacomarca experimental farm collected between 2000 and 2014 was used. The numbers of records for age at first service (AFS), age at first calving (AFC), copulation time (CT), pregnancy diagnosis (PD), gestation length (GL), and calving interval (CI) were, respectively, 1,704, 854, 19,770, 5,874, 4,290 and 934. The pedigree consisted of 7,742 animals. For the reproductive traits, the model of analysis included additive and residual random effects for all traits, and also a permanent environmental effect for the CT, PD, GL and CI traits, with color and year of recording as fixed effects for all reproductive traits and also age at mating and sex of calf for the GL trait. Estimated heritabilities, respectively for HU and SU, were 0.19 and 0.09 for AFS, 0.45 and 0.59 for AFC, 0.04 and 0.05 for CT, 0.07 and 0.05 for PD, 0.12 and 0.20 for GL, and 0.14 and 0.09 for CI. Genetic correlations between them ranged from -0.96 to 0.70. No important genetic correlations were found between reproductive traits and fiber or morphological traits in HU. However, some moderately favorable genetic correlations were found between reproductive traits and both fiber and morphological traits in SU. According to the estimated genetic correlations, some reproductive traits might be included as additional selection criteria in HU. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Choice of the parameters of the cusum algorithms for parameter estimation in the markov modulated poisson process

    OpenAIRE

    Burkatovskaya, Yuliya Borisovna; Kabanova, T.; Khaustov, Pavel Aleksandrovich

    2016-01-01

    The CUSUM algorithm for controlling chain state switching in the Markov modulated Poisson process was investigated via simulation. Recommendations concerning the parameter choice were given subject to the characteristics of the process. A procedure for estimating the process parameters was described.
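
    A bare-bones CUSUM detector of the kind discussed can be written in a few lines. The example below monitors a Poisson count stream for an upward rate shift using log-likelihood-ratio increments; the pre- and post-change rates, the change point and the decision threshold are arbitrary illustrative choices.

        import numpy as np

        rng = np.random.default_rng(4)

        lambda0, lambda1, threshold = 2.0, 4.0, 8.0     # assumed rates and threshold

        # Poisson counts whose rate switches from lambda0 to lambda1 at t = 150.
        counts = np.concatenate([rng.poisson(lambda0, 150), rng.poisson(lambda1, 100)])

        # CUSUM statistic built from Poisson log-likelihood-ratio increments:
        # llr_t = k_t * log(lambda1 / lambda0) - (lambda1 - lambda0).
        s, alarm_time = 0.0, None
        for t, k in enumerate(counts):
            s = max(0.0, s + k * np.log(lambda1 / lambda0) - (lambda1 - lambda0))
            if s > threshold:
                alarm_time = t
                break

        print("alarm raised at sample:", alarm_time)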

  10. PAPIRUS, a parallel computing framework for sensitivity analysis, uncertainty propagation, and estimation of parameter distribution

    International Nuclear Information System (INIS)

    Heo, Jaeseok; Kim, Kyung Doo

    2015-01-01

    Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of the sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform the model calibration, uncertainty propagation, Chi-square linearity test, and sensitivity analysis for both linear and nonlinear problems. The PAPIRUS was developed by implementing multiple packages of methodologies, and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in the PAPIRUS with multiple computing resources and proper communications between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description about the PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity test, and sensitivity analysis implemented in the toolkit with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper

  11. PAPIRUS, a parallel computing framework for sensitivity analysis, uncertainty propagation, and estimation of parameter distribution

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Jaeseok, E-mail: jheo@kaeri.re.kr; Kim, Kyung Doo, E-mail: kdkim@kaeri.re.kr

    2015-10-15

    Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of the sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform the model calibration, uncertainty propagation, Chi-square linearity test, and sensitivity analysis for both linear and nonlinear problems. The PAPIRUS was developed by implementing multiple packages of methodologies, and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in the PAPIRUS with multiple computing resources and proper communications between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description about the PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity test, and sensitivity analysis implemented in the toolkit with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.

  12. An optimal estimation algorithm to derive Ice and Ocean parameters from AMSR Microwave radiometer observations

    DEFF Research Database (Denmark)

    Pedersen, Leif Toudal; Tonboe, Rasmus T.; Høyer, Jacob

    Global multispectral microwave radiometer measurements have been available for several decades. However, most current sea ice concentration algorithms still only take advantage of a very limited subset of the available channels as well as the combination of data from multiple sources such as microwave radiometry, scatterometry and numerical weather prediction. Optimal estimation is data assimilation without a numerical model for retrieving physical parameters from remote sensing using a multitude of available information ... The methodology is observation driven and model innovation is limited to the translation between observation space and physical parameter space. Over open water we use a semi-empirical radiative transfer model developed by Meissner & Wentz that estimates the multispectral AMSR brightness temperatures, i...

  13. Geometric Parameters Estimation and Calibration in Cone-Beam Micro-CT

    Directory of Open Access Journals (Sweden)

    Jintao Zhao

    2015-09-01

    The quality of Computed Tomography (CT) images crucially depends on the precise knowledge of the scanner geometry. Therefore, it is necessary to estimate and calibrate the misalignments before image acquisition. In this paper, a Two-Piece-Ball (TPB) phantom is used to estimate a set of parameters that describe the geometry of a cone-beam CT system. Only multiple projections of the TPB phantom at one position are required, which can avoid the rotation errors when acquiring multi-angle projections. Also, a corresponding algorithm is derived. The performance of the method is evaluated through simulation and experimental data. The results demonstrated that the proposed method is valid and easy to implement. Furthermore, the experimental results from the Micro-CT system demonstrate the ability to reduce artifacts and improve image quality through geometric parameter calibration.

  14. Multiple Illuminant Colour Estimation via Statistical Inference on Factor Graphs.

    Science.gov (United States)

    Mutimbu, Lawrence; Robles-Kelly, Antonio

    2016-08-31

    This paper presents a method to recover a spatially varying illuminant colour estimate from scenes lit by multiple light sources. Starting with the image formation process, we formulate the illuminant recovery problem in a statistically data-driven setting. To do this, we use a factor graph defined across the scale space of the input image. In the graph, we utilise a set of illuminant prototypes computed using a data-driven approach. As a result, our method delivers a pixelwise illuminant colour estimate without requiring illuminant libraries or user input. The use of a factor graph also allows for the illuminant estimates to be recovered making use of a maximum a posteriori (MAP) inference process. Moreover, we compute the probability marginals by performing a Delaunay triangulation on our factor graph. We illustrate the utility of our method for pixelwise illuminant colour recovery on widely available datasets and compare against a number of alternatives. We also show sample colour correction results on real-world images.

  15. Estimating demographic parameters from large-scale population genomic data using Approximate Bayesian Computation

    Directory of Open Access Journals (Sweden)

    Li Sen

    2012-03-01

    Background: The Approximate Bayesian Computation (ABC) approach has been used to infer demographic parameters for numerous species, including humans. However, most applications of ABC still use limited amounts of data, from a small number of loci, compared to the large amount of genome-wide population-genetic data which have become available in the last few years. Results: We evaluated the performance of the ABC approach for three 'population divergence' models - similar to the 'isolation with migration' model - when the data consist of several hundred thousand SNPs typed for multiple individuals, by simulating data from known demographic models. The ABC approach was used to infer demographic parameters of interest, and we compared the inferred values to the true parameter values that were used to generate hypothetical "observed" data. For all three case models, the ABC approach inferred most demographic parameters quite well with narrow credible intervals, for example, population divergence times and past population sizes, but some parameters were more difficult to infer, such as population sizes at present and migration rates. We compared the ability of different summary statistics to infer demographic parameters, including haplotype and LD based statistics, and found that the accuracy of the parameter estimates can be improved by combining summary statistics that capture different parts of the information in the data. Furthermore, our results suggest that poor choices of prior distributions can in some circumstances be detected using ABC. Finally, increasing the amount of data beyond some hundred loci will substantially improve the accuracy of many parameter estimates using ABC. Conclusions: We conclude that the ABC approach can accommodate realistic genome-wide population genetic data, which may be difficult to analyze with full likelihood approaches, and that ABC can provide accurate and precise inference of demographic parameters from
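
    The core of the ABC approach is easy to illustrate with a rejection sampler. The sketch below infers a single parameter of a toy Poisson model of per-locus counts by keeping only the prior draws whose simulated summary statistic lies close to the observed one; the toy model, prior and tolerance are assumptions for illustration, not the population-divergence models analysed in the paper.

        import numpy as np

        rng = np.random.default_rng(5)

        # "Observed" summary statistic: mean count over loci, generated with theta = 5.
        theta_true, n_loci = 5.0, 100
        observed = rng.poisson(theta_true, n_loci).mean()

        # ABC rejection: draw theta from a uniform prior, simulate the same summary
        # statistic, and accept the draw if it falls within the tolerance.
        n_draws, tolerance = 20_000, 0.2
        theta_prior = rng.uniform(0, 20, n_draws)
        simulated = rng.poisson(theta_prior[:, None], (n_draws, n_loci)).mean(axis=1)
        accepted = theta_prior[np.abs(simulated - observed) < tolerance]

        print("approximate posterior mean:", accepted.mean())
        print("95% credible interval:", np.percentile(accepted, [2.5, 97.5]))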

  16. Online state of charge and model parameter co-estimation based on a novel multi-timescale estimator for vanadium redox flow battery

    International Nuclear Information System (INIS)

    Wei, Zhongbao; Lim, Tuti Mariana; Skyllas-Kazacos, Maria; Wai, Nyunt; Tseng, King Jet

    2016-01-01

    Highlights: • Battery model parameters and SOC co-estimation is investigated. • The model parameters and OCV are decoupled and estimated independently. • Multiple timescales are adopted to improve precision and stability. • SOC is estimated online without using the open-circuit cell. • The method is robust to aging levels, flow rates, and battery chemistries. - Abstract: A key function of a battery management system (BMS) is to provide accurate information on the state of charge (SOC) in real time, and this depends directly on precise model parameterization. In this paper, a novel multi-timescale estimator is proposed to estimate the model parameters and SOC for a vanadium redox flow battery (VRB) in real time. The model parameters and OCV are decoupled and estimated independently, effectively avoiding the possibility of cross interference between them. The analysis of model sensitivity, stability, and precision suggests the necessity of adopting different timescales for each estimator independently. Experiments are conducted to assess the performance of the proposed method. The results reveal that the model parameters are adapted online accurately, so periodic calibration can be avoided. The online estimated terminal voltage and SOC are both benchmarked against reference values. The proposed multi-timescale estimator has the merits of fast convergence, high precision, and good robustness against initialization uncertainty, aging states, flow rates, and also battery chemistries.

  17. Maximum likelihood estimation of biophysical parameters of synaptic receptors from macroscopic currents

    Directory of Open Access Journals (Sweden)

    Andrey eStepanyuk

    2014-10-01

    Dendritic integration and neuronal firing patterns strongly depend on the biophysical properties of synaptic ligand-gated channels. However, precise estimation of the biophysical parameters of these channels in their intrinsic environment is a complicated and still unresolved problem. Here we describe a novel method based on a maximum likelihood approach that allows estimation not only of the unitary current of synaptic receptor channels but also of their multiple conductance levels, kinetic constants, the number of receptors bound with a neurotransmitter and the peak open probability from an experimentally feasible number of postsynaptic currents. The new method also improves the accuracy of evaluation of the unitary current as compared to the peak-scaled non-stationary fluctuation analysis, making it possible to precisely estimate this important parameter from a few postsynaptic currents recorded in steady-state conditions. Estimation of the unitary current with this method is robust even if postsynaptic currents are generated by receptors having different kinetic parameters, a case where peak-scaled non-stationary fluctuation analysis is not applicable. Thus, with the new method, routinely recorded postsynaptic currents could be used to study the properties of synaptic receptors in their native biochemical environment.

  18. Recursive parameter estimation for Hammerstein-Wiener systems using modified EKF algorithm.

    Science.gov (United States)

    Yu, Feng; Mao, Zhizhong; Yuan, Ping; He, Dakuo; Jia, Mingxing

    2017-09-01

    This paper focuses on recursive parameter estimation for the single input single output Hammerstein-Wiener system model, and the study is then extended to a rarely mentioned multiple input single output Hammerstein-Wiener system. Inspired by the extended Kalman filter algorithm, two basic recursive algorithms are derived from the first and the second order Taylor approximations. Based on the form of the first order approximation algorithm, a modified algorithm with a larger parameter convergence domain is proposed to cope with the small parameter convergence domain of the first order algorithm and the application limits of the second order one. The validity of the modification in expanding the convergence domain is shown through convergence analysis and is demonstrated with two simulation cases. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
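
    To make EKF-based recursive parameter estimation concrete, the sketch below augments the state of a toy single input single output Hammerstein-type system (a static polynomial input nonlinearity followed by first-order linear dynamics) with its two unknown linear parameters and runs a standard joint extended Kalman filter. This is a generic first-order illustration under assumed dynamics and noise levels, not the modified algorithm proposed in the paper.

        import numpy as np

        rng = np.random.default_rng(6)

        # True system: x_{k+1} = a x_k + b f(u_k),  y_k = x_k + v_k,  f(u) = u + 0.5 u^2.
        a_true, b_true, r_std = 0.8, 1.5, 0.1
        f = lambda u: u + 0.5 * u ** 2

        T = 400
        u = rng.uniform(-1, 1, T)
        x, y = 0.0, np.zeros(T)
        for k in range(T):
            x = a_true * x + b_true * f(u[k])
            y[k] = x + rng.normal(0, r_std)

        # Joint EKF on the augmented state z = [x, a, b].
        z = np.array([0.0, 0.5, 0.5])                    # rough initial guesses
        P = np.eye(3)
        Q = np.diag([1e-4, 1e-6, 1e-6])                  # small random walk on the parameters
        R = r_std ** 2
        H = np.array([[1.0, 0.0, 0.0]])

        for k in range(T):
            xk, ak, bk = z
            # Prediction with the Jacobian of the augmented dynamics.
            z = np.array([ak * xk + bk * f(u[k]), ak, bk])
            F = np.array([[ak, xk, f(u[k])],
                          [0.0, 1.0, 0.0],
                          [0.0, 0.0, 1.0]])
            P = F @ P @ F.T + Q
            # Measurement update with the scalar observation y_k.
            S = H @ P @ H.T + R
            K = P @ H.T / S
            z = z + (K * (y[k] - z[0])).ravel()
            P = (np.eye(3) - K @ H) @ P

        print("estimated a, b:", z[1], z[2])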

  19. AUTOMATIC ESTIMATION OF SIZE PARAMETERS USING VERIFIED COMPUTERIZED STEREOANALYSIS

    Directory of Open Access Journals (Sweden)

    Peter R Mouton

    2011-05-01

    State-of-the-art computerized stereology systems combine high-resolution video microscopy and hardware-software integration with stereological methods to assist users in quantifying multidimensional parameters of importance to biomedical research, including volume, surface area, length, number, their variation and spatial distribution. The requirement for constant interactions between a trained, non-expert user and the targeted features of interest currently limits the throughput efficiency of these systems. To address this issue we developed a novel approach for automatic stereological analysis of 2-D images, Verified Computerized Stereoanalysis (VCS). The VCS approach minimizes the need for user interactions with high-contrast [high signal-to-noise ratio (S:N)] biological objects of interest. Performance testing of the VCS approach confirmed dramatic increases in the efficiency of total object volume (size) estimation, without a loss of accuracy or precision compared to conventional computerized stereology. The broad application of high-efficiency VCS to high-contrast biological objects on tissue sections could reduce labor costs, enhance hypothesis testing, and accelerate the progress of biomedical research focused on improvements in health and the management of disease.

  20. Estimation of Key Parameters of the Coupled Energy and Water Model by Assimilating Land Surface Data

    Science.gov (United States)

    Abdolghafoorian, A.; Farhadi, L.

    2017-12-01

    Accurate estimation of land surface heat and moisture fluxes, as well as root zone soil moisture, is crucial in various hydrological, meteorological, and agricultural applications. Field measurements of these fluxes are costly and cannot be readily scaled to the large areas relevant to weather and climate studies. Therefore, there is a need for techniques to make quantitative estimates of heat and moisture fluxes using land surface state observations that are widely available from remote sensing across a range of scales. In this work, we apply the variational data assimilation approach to estimate land surface fluxes and the soil moisture profile from the implicit information contained in Land Surface Temperature (LST) and Soil Moisture (SM) (hereafter the VDA model). The VDA model is focused on the estimation of three key parameters: 1- neutral bulk heat transfer coefficient (CHN), 2- evaporative fraction from soil and canopy (EF), and 3- saturated hydraulic conductivity (Ksat). CHN and EF regulate the partitioning of available energy between sensible and latent heat fluxes. Ksat is one of the main parameters used in determining infiltration, runoff, and groundwater recharge, and in simulating hydrological processes. In this study, a system of coupled parsimonious energy and water models constrains the estimation of the three unknown parameters in the VDA model. The profile of SM (LST) at multiple depths is estimated using the moisture diffusion (heat diffusion) equation. The uncertainties of the retrieved unknown parameters and fluxes are estimated from the inverse of the Hessian matrix of the cost function, which is computed using the Lagrangian methodology. Analysis of uncertainty provides valuable information about the accuracy of the estimated parameters and their correlation, and guides the formulation of a well-posed estimation problem. The results of the proposed algorithm are validated with a series of experiments using a synthetic data set generated by the simultaneous heat and

  1. Sensitivity of probability-of-failure estimates with respect to probability of detection curve parameters

    Energy Technology Data Exchange (ETDEWEB)

    Garza, J. [University of Texas at San Antonio, Mechanical Engineering, 1 UTSA circle, EB 3.04.50, San Antonio, TX 78249 (United States); Millwater, H., E-mail: harry.millwater@utsa.edu [University of Texas at San Antonio, Mechanical Engineering, 1 UTSA circle, EB 3.04.50, San Antonio, TX 78249 (United States)

    2012-04-15

    A methodology has been developed and demonstrated that can be used to compute the sensitivity of the probability-of-failure (POF) with respect to the parameters of inspection processes that are simulated using probability of detection (POD) curves. The formulation is such that the probabilistic sensitivities can be obtained at negligible cost using sampling methods by reusing the samples used to compute the POF. As a result, the methodology can be implemented for negligible cost in a post-processing non-intrusive manner thereby facilitating implementation with existing or commercial codes. The formulation is generic and not limited to any specific random variables, fracture mechanics formulation, or any specific POD curve as long as the POD is modeled parametrically. Sensitivity estimates for the cases of different POD curves at multiple inspections, and the same POD curves at multiple inspections have been derived. Several numerical examples are presented and show excellent agreement with finite difference estimates with significant computational savings. - Highlights: ► Sensitivity of the probability-of-failure with respect to the probability-of-detection curve. ► The sensitivities are computed with negligible cost using Monte Carlo sampling. ► The change in the POF due to a change in the POD curve parameters can be easily estimated.

  2. Comparison of sampling methodologies and estimation of population parameters for a temporary fish ectoparasite

    Directory of Open Access Journals (Sweden)

    J.M. Artim

    2016-08-01

    Characterizing spatio-temporal variation in the density of organisms in a community is a crucial part of ecological study. However, doing so for small, motile, cryptic species presents multiple challenges, especially where multiple life history stages are involved. Gnathiid isopods are ecologically important marine ectoparasites, micropredators that live in the substrate for most of their lives, emerging only once during each juvenile stage to feed on fish blood. Many gnathiid species are nocturnal and most have distinct substrate preferences. Studies of gnathiid use of habitat, exploitation of hosts, and population dynamics have used various trap designs to estimate rates of gnathiid emergence, study sensory ecology, and identify host susceptibility. In the studies reported here, we compare and contrast the performance of emergence, fish-baited and light trap designs, outline the key features of these traps, and determine some life cycle parameters derived from trap counts for the Eastern Caribbean coral-reef gnathiid, Gnathia marleyi. We also used counts from large emergence traps and light traps to estimate additional life cycle parameters, emergence rates, and total gnathiid density on substrate, and to calibrate the light trap design to provide estimates of rate of emergence and total gnathiid density in habitat not amenable to emergence trap deployment.

  3. Sensitivity of probability-of-failure estimates with respect to probability of detection curve parameters

    International Nuclear Information System (INIS)

    Garza, J.; Millwater, H.

    2012-01-01

    A methodology has been developed and demonstrated that can be used to compute the sensitivity of the probability-of-failure (POF) with respect to the parameters of inspection processes that are simulated using probability of detection (POD) curves. The formulation is such that the probabilistic sensitivities can be obtained at negligible cost using sampling methods by reusing the samples used to compute the POF. As a result, the methodology can be implemented for negligible cost in a post-processing non-intrusive manner thereby facilitating implementation with existing or commercial codes. The formulation is generic and not limited to any specific random variables, fracture mechanics formulation, or any specific POD curve as long as the POD is modeled parametrically. Sensitivity estimates for the cases of different POD curves at multiple inspections, and the same POD curves at multiple inspections have been derived. Several numerical examples are presented and show excellent agreement with finite difference estimates with significant computational savings. - Highlights: ► Sensitivity of the probability-of-failure with respect to the probability-of-detection curve. ► The sensitivities are computed with negligible cost using Monte Carlo sampling. ► The change in the POF due to a change in the POD curve parameters can be easily estimated.

  4. The PolyMAX Frequency-Domain Method: A New Standard for Modal Parameter Estimation?

    Directory of Open Access Journals (Sweden)

    Bart Peeters

    2004-01-01

    Recently, a new non-iterative frequency-domain parameter estimation method was proposed. It is based on a (weighted) least-squares approach and uses multiple-input-multiple-output frequency response functions as primary data. This so-called “PolyMAX” or polyreference least-squares complex frequency-domain method can be implemented in a very similar way as the industry-standard polyreference (time-domain) least-squares complex exponential method: in a first step a stabilisation diagram is constructed containing frequency, damping and participation information. Next, the mode shapes are found in a second least-squares step, based on the user selection of stable poles. One of the specific advantages of the technique lies in the very stable identification of the system poles and participation factors as a function of the specified system order, leading to easy-to-interpret stabilisation diagrams. This implies a potential for automating the method and for applying it to “difficult” estimation cases such as high-order and/or highly damped systems with large modal overlap. Some real-life automotive and aerospace case studies are discussed. PolyMAX is compared with classical methods concerning stability, accuracy of the estimated modal parameters and quality of the frequency response function synthesis.

  5. Joint estimation over multiple individuals improves behavioural state inference from animal movement data.

    Science.gov (United States)

    Jonsen, Ian

    2016-02-08

    State-space models provide a powerful way to scale up inference of movement behaviours from individuals to populations when the inference is made across multiple individuals. Here, I show how a joint estimation approach that assumes individuals share identical movement parameters can lead to improved inference of behavioural states associated with different movement processes. I use simulated movement paths with known behavioural states to compare estimation error between nonhierarchical and joint estimation formulations of an otherwise identical state-space model. Behavioural state estimation error was strongly affected by the degree of similarity between movement patterns characterising the behavioural states, with less error when movements were strongly dissimilar between states. The joint estimation model improved behavioural state estimation relative to the nonhierarchical model for simulated data with heavy-tailed Argos location errors. When applied to Argos telemetry datasets from 10 Weddell seals, the nonhierarchical model estimated highly uncertain behavioural state switching probabilities for most individuals whereas the joint estimation model yielded substantially less uncertainty. The joint estimation model better resolved the behavioural state sequences across all seals. Hierarchical or joint estimation models should be the preferred choice for estimating behavioural states from animal movement data, especially when location data are error-prone.

  6. Automated Modal Parameter Estimation of Civil Engineering Structures

    DEFF Research Database (Denmark)

    Andersen, Palle; Brincker, Rune; Goursat, Maurice

    In this paper, the problem of automatic modal parameter extraction for ambient-excited civil engineering structures is considered. Two different approaches for obtaining the modal parameters automatically are presented: the Frequency Domain Decomposition (FDD) technique and a correlation

  7. Estimation of uranium migration parameters in sandstone aquifers.

    Science.gov (United States)

    Malov, A I

    2016-03-01

    The chemical composition and isotopes of carbon and uranium were investigated in groundwater samples that were collected from 16 wells and 2 sources in the Northern Dvina Basin, Northwest Russia. Across the dataset, the temperatures in the groundwater ranged from 3.6 to 6.9 °C, the pH ranged from 7.6 to 9.0, the Eh ranged from -137 to +128 mV, the total dissolved solids (TDS) ranged from 209 to 22,000 mg L(-1), and the dissolved oxygen (DO) ranged from 0 to 9.9 ppm. The (14)C activity ranged from 0 to 69.96 ± 0.69 percent modern carbon (pmC). The uranium content in the groundwater ranged from 0.006 to 16 ppb, and the (234)U:(238)U activity ratio ranged from 1.35 ± 0.21 to 8.61 ± 1.35. The uranium concentration and (234)U:(238)U activity ratio increased from the recharge area to the redox barrier; behind the barrier, the uranium content is minimal. The results were systematized by creating a conceptual model of the Northern Dvina Basin's hydrogeological system. The use of uranium isotope dating in conjunction with radiocarbon dating allowed the determination of important water-rock interaction parameters, such as the dissolution rate:recoil loss factor ratio Rd:p (a(-1)) and the uranium retardation factor:recoil loss factor ratio R:p in the aquifer. The (14)C age of the water was estimated to be between modern and >35,000 years. The (234)U-(238)U age of the water was estimated to be between 260 and 582,000 years. The Rd:p ratio decreases with increasing groundwater residence time in the aquifer from n × 10(-5) to n × 10(-7) a(-1). This finding is observed because the TDS increases in that direction from 0.2 to 9 g L(-1), and accordingly, the mineral saturation indices increase. Relatively high values of R:p (200-1000) characterize aquifers in sandy-clayey sediments from the Late Pleistocene and the deepest parts of the Vendian strata. In samples from the sandstones of the upper part of the Vendian strata, the R:p value is ∼ 24, i.e., sorption processes are

  8. Dual-Layer Density Estimation for Multiple Object Instance Detection

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2016-01-01

    This paper introduces a dual-layer density estimation-based architecture for multiple object instance detection in robot inventory management applications. The approach consists of raw scale-invariant feature transform (SIFT) feature matching and key point projection. The dominant scale ratio and a reference clustering threshold are estimated using the first layer of the density estimation. A cascade of filters is applied after feature template reconstruction and refined feature matching to eliminate false matches. Before the second layer of density estimation, the adaptive threshold is finalized by multiplying the reference value by an empirical coefficient. The coefficient is identified experimentally. Adaptive threshold-based grid voting is applied to find all candidate object instances. Erroneous detections are eliminated using a final geometric verification in accordance with Random Sample Consensus (RANSAC). The detection results of the proposed approach are evaluated on a self-built dataset collected in a supermarket. The results demonstrate that the approach provides high robustness and low latency for the inventory management application.

  9. Negative binomial models for abundance estimation of multiple closed populations

    Science.gov (United States)

    Boyce, Mark S.; MacKenzie, Darry I.; Manly, Bryan F.J.; Haroldson, Mark A.; Moody, David W.

    2001-01-01

    Counts of uniquely identified individuals in a population offer opportunities to estimate abundance. However, for various reasons such counts may be burdened by heterogeneity in the probability of being detected. Theoretical arguments and empirical evidence demonstrate that the negative binomial distribution (NBD) is a useful characterization for counts from biological populations with heterogeneity. We propose a method that focuses on estimating multiple populations by simultaneously using a suite of models derived from the NBD. We used this approach to estimate the number of female grizzly bears (Ursus arctos) with cubs-of-the-year in the Yellowstone ecosystem, for each year, 1986-1998. Akaike's Information Criteria (AIC) indicated that a negative binomial model with a constant level of heterogeneity across all years was best for characterizing the sighting frequencies of female grizzly bears. A lack-of-fit test indicated the model adequately described the collected data. Bootstrap techniques were used to estimate standard errors and 95% confidence intervals. We provide a Monte Carlo technique, which confirms that the Yellowstone ecosystem grizzly bear population increased during the period 1986-1998.
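
    The negative binomial fitting step described here can be reproduced with a few lines of maximum-likelihood code. The sketch below fits the NB mean and dispersion to a vector of sighting frequencies by direct optimisation; the synthetic counts and the simple mean/dispersion parameterisation are illustrative assumptions, not the Yellowstone data or the authors' multi-model suite.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import nbinom

        # Synthetic per-individual sighting counts with heterogeneity (overdispersion).
        true_mean, true_size = 3.0, 1.5                  # NB mean and "size" (dispersion)
        p_true = true_size / (true_size + true_mean)
        counts = nbinom.rvs(true_size, p_true, size=200, random_state=7)

        def neg_log_lik(params):
            mean, size = np.exp(params)                  # optimise on the log scale
            prob = size / (size + mean)
            return -np.sum(nbinom.logpmf(counts, size, prob))

        result = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
        mean_hat, size_hat = np.exp(result.x)
        print("estimated mean:", mean_hat, " estimated dispersion (size):", size_hat)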

  10. A new method to estimate heat source parameters in gas metal arc welding simulation process

    International Nuclear Information System (INIS)

    Jia, Xiaolei; Xu, Jie; Liu, Zhaoheng; Huang, Shaojie; Fan, Yu; Sun, Zhi

    2014-01-01

    Highlights: • A new method for accurate estimation of heat source parameters was presented. • The partial least-squares regression analysis was recommended in the method. • The welding experiment results verified the accuracy of the proposed method. -- Abstract: Heat source parameters were usually recommended from experience in the welding simulation process, which induced errors in simulation results (e.g. temperature distribution and residual stress). In this paper, a new method was developed to accurately estimate heat source parameters in welding simulation. In order to reduce the simulation complexity, a sensitivity analysis of heat source parameters was carried out. The relationships between heat source parameters and welding pool characteristics (fusion width (W), penetration depth (D) and peak temperature (T_p)) were obtained with both the multiple regression analysis (MRA) and the partial least-squares regression analysis (PLSRA). Different regression models were employed in each regression method. Comparisons of both methods were performed. A welding experiment was carried out to verify the method. The results showed that both the MRA and the PLSRA were feasible and accurate for prediction of heat source parameters in welding simulation. However, the PLSRA was recommended for its advantages of requiring less simulation data.
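
    The partial least-squares step recommended in this abstract maps weld-pool characteristics back to heat source parameters. The sketch below shows the generic pattern with scikit-learn's PLSRegression on synthetic data; the toy linear relationship between two hypothetical heat source parameters and (W, D, T_p) is an assumption, not the paper's simulation results.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(8)

        # Hypothetical "simulation" data: weld pool characteristics (W, D, Tp) generated
        # from two made-up heat source parameters, plus noise.
        n = 200
        heat_params = rng.uniform([2.0, 1.0], [6.0, 4.0], size=(n, 2))
        pool = np.column_stack([
            1.2 * heat_params[:, 0] + 0.3 * heat_params[:, 1],        # fusion width W
            0.4 * heat_params[:, 0] + 0.9 * heat_params[:, 1],        # penetration depth D
            300 * heat_params[:, 0] - 50 * heat_params[:, 1] + 1500,  # peak temperature Tp
        ]) + rng.normal(0, 0.05, (n, 3)) * [1.0, 1.0, 20.0]

        # Inverse model: predict the heat source parameters from pool characteristics.
        X_train, X_test, y_train, y_test = train_test_split(pool, heat_params, random_state=0)
        pls = PLSRegression(n_components=2).fit(X_train, y_train)
        print("R^2 on held-out simulated welds:", pls.score(X_test, y_test))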

  11. Counting and confusion: Bayesian rate estimation with multiple populations

    Science.gov (United States)

    Farr, Will M.; Gair, Jonathan R.; Mandel, Ilya; Cutler, Curt

    2015-01-01

    We show how to obtain a Bayesian estimate of the rates or numbers of signal and background events from a set of events when the shapes of the signal and background distributions are known, can be estimated, or approximated; our method works well even if the foreground and background event distributions overlap significantly and the nature of any individual event cannot be determined with any certainty. We give examples of determining the rates of gravitational-wave events in the presence of background triggers from a template bank when noise parameters are known and/or can be fit from the trigger data. We also give an example of determining globular-cluster shape, location, and density from an observation of a stellar field that contains a nonuniform background density of stars superimposed on the cluster stars.

  12. Estimating Parameters for the Earth-Ionosphere Waveguide Using VLF Narrowband Transmitters

    Science.gov (United States)

    Gross, N. C.; Cohen, M.

    2017-12-01

    Estimating the D-region (60 to 90 km altitude) ionospheric electron density profile has always been a challenge. The D-region's altitude is too high for aircraft and balloons to reach but too low for satellites to orbit at. Sounding rocket measurements have been a useful tool for directly measuring the ionosphere; however, these measurements are infrequent and costly. A more sustainable type of measurement for characterizing the D-region is remote sensing with very low frequency (VLF) waves. Both the lower ionosphere and Earth's ground strongly reflect VLF waves. These two spherical reflectors form what is known as the Earth-ionosphere waveguide. As VLF waves propagate within the waveguide, they interact with the D-region ionosphere, causing amplitude and phase changes that are polarization dependent. These changes can be monitored with a spatially distributed array of receivers, and D-region properties can be inferred from these measurements. Researchers have previously used VLF remote sensing techniques, based on either narrowband transmitters or sferics, to estimate the density profile, but these estimates typically cover a short time frame and a narrow propagation region. We report on an effort to improve the understanding of VLF wave propagation by estimating the commonly used two-parameter (h' and beta) exponential electron density profile. Measurements from multiple narrowband transmitters at multiple receivers are taken concurrently and input into an algorithm. The cornerstone of the algorithm is an artificial neural network (ANN), whose inputs are the received narrowband amplitudes and phases and whose outputs are the estimated h' and beta parameters. Training data for the ANN are generated using the Navy's Long-Wavelength Propagation Capability (LWPC) model. Emphasis is placed on profiling the daytime ionosphere, which has a more stable and predictable profile than the nighttime ionosphere. Daytime ionospheric disturbances, from high solar
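
    The neural-network stage of such an algorithm is straightforward to prototype once training pairs are available. Because the LWPC propagation model is not reproduced here, the sketch below trains a small scikit-learn MLP on a purely synthetic, made-up mapping from (amplitude, phase) features to (h', beta), only to show the input/output structure of this kind of estimator; none of the numbers correspond to real VLF paths.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(9)

        # Stand-in for model-generated training data: for random (h', beta) profiles,
        # fabricate amplitude/phase "observations" on three transmitter-receiver paths.
        n = 5000
        h_prime = rng.uniform(65, 78, n)          # km
        beta = rng.uniform(0.3, 0.6, n)           # km^-1
        features = np.column_stack([
            np.sin(0.2 * h_prime) + 2 * beta,
            np.cos(0.15 * h_prime) - beta,
            0.05 * h_prime * beta,
            np.sin(h_prime * beta / 10.0),
            h_prime / 80.0 - beta,
            np.cos(0.3 * h_prime) + 0.5 * beta,
        ]) + rng.normal(0, 0.01, (n, 6))
        targets = np.column_stack([h_prime, beta])

        X_train, X_test, y_train, y_test = train_test_split(features, targets, random_state=0)
        ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
        ann.fit(X_train, y_train)
        print("R^2 of (h', beta) recovery on held-out samples:", ann.score(X_test, y_test))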

  13. State estimation of chemical engineering systems tending to multiple solutions

    Directory of Open Access Journals (Sweden)

    N. P. G. Salau

    2014-09-01

    A well-evaluated state covariance matrix avoids error propagation due to divergence issues and is therefore crucial for a successful state estimator design. In this paper we investigate the performance of the state covariance matrices used in three unconstrained Extended Kalman Filter (EKF) formulations and one constrained EKF formulation (CEKF). As benchmark case studies we have chosen: (a) a batch chemical reactor with reversible reactions whose system model and measurement are such that multiple states satisfy the equilibrium condition, and (b) a CSTR with exothermic irreversible reactions and a cooling jacket energy balance whose nonlinear behavior includes multiple steady states and limit cycles. The results have shown that CEKF is in general the best choice of the EKF formulations (even if they are constrained with an ad hoc clipping strategy which avoids undesired states) for such case studies. Contrary to a clipped EKF formulation, CEKF incorporates constraints into an optimization problem, which minimizes the noise in a least-squares sense, preventing a bad noise distribution. It is also shown that, although Moving Horizon Estimation (MHE) provides greater robustness to a poor guess of the initial state, converging in fewer steps to the actual states, it is not justified for our examples due to the high additional computational effort.

  14. Estimation of The Physico-Chemical Parameters in Marine Environment (Yumurtalik Bight- Iskenderun Bay

    Directory of Open Access Journals (Sweden)

    Gökhan Tamer Kayaalp

    2016-02-01

    The study was carried out to estimate temperature, light intensity, salinity, dissolved O2 (DO), pH and the biotic parameter chlorophyll-a in the water column in relation to depth, because these physico-chemical parameters greatly affect both primary and secondary producers in marine life. For this purpose the physico-chemical properties were determined day and night down to 40 m depth over eight days. The means were compared using the analysis of variance method and Duncan’s Multiple Comparison Test. The physico-chemical parameters were also estimated using regression and correlation analysis. The effects of temperature and salinity were found to be significant according to the analysis of variance during the day, and similar results were found for the night. While the effect of depth on chlorophyll-a was significant at night, the effect of depth on DO was not significant during either day or night. The correlations between depth and the parameters were determined: a negative correlation was found between depth and both temperature and light intensity. The coefficient of determination of the model for salinity was also found to differ for the daytime.

  15. Bayesian inference in genetic parameter estimation of visual scores in Nellore beef-cattle

    Science.gov (United States)

    2009-01-01

    The aim of this study was to estimate the components of variance and genetic parameters for the visual scores which constitute the Morphological Evaluation System (MES), such as body structure (S), precocity (P) and musculature (M) in Nellore beef-cattle at the weaning and yearling stages, by using threshold Bayesian models. The information used for this was gleaned from visual scores of 5,407 animals evaluated at the weaning and 2,649 at the yearling stages. The genetic parameters for visual score traits were estimated through two-trait analysis, using the threshold animal model, with Bayesian statistics methodology and MTGSAM (Multiple Trait Gibbs Sampler for Animal Models) threshold software. Heritability estimates for S, P and M were 0.68, 0.65 and 0.62 (at weaning) and 0.44, 0.38 and 0.32 (at the yearling stage), respectively. Heritability estimates for S, P and M were found to be high, and so it is expected that these traits should respond favorably to direct selection. The visual scores evaluated at the weaning and yearling stages might be used in the composition of new selection indexes, as they presented sufficient genetic variability to promote genetic progress in such morphological traits. PMID:21637450

  16. The Exponent of High-frequency Source Spectral Falloff and Contribution to Source Parameter Estimates

    Science.gov (United States)

    Kiuchi, R.; Mori, J. J.

    2015-12-01

    As a way to understand the characteristics of the earthquake source, studies of source parameters (such as radiated energy and stress drop) and their scaling are important. In order to estimate source parameters reliably, we often must use appropriate source spectrum models, and the omega-square model is most frequently used. In this model, the spectrum is flat at low frequencies and falls off in proportion to the inverse square of the angular frequency at high frequencies. However, some studies (e.g. Allmann and Shearer, 2009; Yagi et al., 2012) reported that the exponent of the high-frequency falloff is other than -2. Therefore, in this study we estimate the source parameters using a spectral model in which the falloff exponent is not fixed. We analyze the mainshock and larger aftershocks of the 2008 Iwate-Miyagi Nairiku earthquake. First, we calculate the P wave and SH wave spectra using empirical Green functions (EGF) to remove path effects (such as attenuation) and site effects. For the EGF event, we select a smaller earthquake that is highly correlated with the target event. In order to obtain stable results, we calculate the spectral ratios using a multitaper spectrum analysis (Prieto et al., 2009) and then take a geometric mean across multiple stations. Finally, using the obtained spectral ratios, we perform a grid search to determine the high-frequency falloff exponents as well as the corner frequencies of both events. Our results indicate that the high-frequency falloff exponent is often less than 2.0. We do not observe any regional, focal mechanism, or depth dependence of the falloff exponent. In addition, our estimated corner frequencies and falloff exponents are consistent between the P wave and SH wave analyses. In our presentation, we show differences in the estimated source parameters between a fixed omega-square model and a model allowing variable high-frequency falloff.
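
    The grid search described can be mimicked with a generalised spectral-ratio model in which the high-frequency falloff exponent is left free. The sketch below fits the corner frequencies of a target/EGF event pair and the exponent n to a synthetic spectral ratio; the frequency band, noise level and true parameter values are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(10)

        freqs = np.logspace(-1, 1.3, 200)       # roughly 0.1 - 20 Hz

        def spectral_ratio(moment_ratio, fc1, fc2, n):
            """Source spectral ratio of target / EGF event with variable falloff exponent n."""
            return moment_ratio * (1 + (freqs / fc2) ** n) / (1 + (freqs / fc1) ** n)

        # Synthetic "observed" ratio with multiplicative log-normal noise.
        true = dict(moment_ratio=100.0, fc1=0.5, fc2=4.0, n=1.8)
        observed = spectral_ratio(**true) * np.exp(rng.normal(0, 0.05, freqs.size))

        # Grid search over corner frequencies and falloff exponent (moment ratio held fixed).
        best, best_misfit = None, np.inf
        for fc1 in np.linspace(0.2, 1.0, 17):
            for fc2 in np.linspace(2.0, 8.0, 25):
                for n in np.linspace(1.0, 3.0, 21):
                    model = spectral_ratio(100.0, fc1, fc2, n)
                    misfit = np.sum((np.log(observed) - np.log(model)) ** 2)
                    if misfit < best_misfit:
                        best, best_misfit = (fc1, fc2, n), misfit

        print("best-fit (fc1, fc2, n):", best)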

  17. Recursive Parameter Identification for Estimating and Displaying Maneuvering Vessel Path

    National Research Council Canada - National Science Library

    Pullard, Stephen

    2003-01-01

    ...). The extended least squares (ELS) parameter identification approach allows the system to be installed on most platforms without prior knowledge of system dynamics provided vessel states are available...

  18. Global High Resolution Sea Surface Flux Parameters From Multiple Satellites

    Science.gov (United States)

    Zhang, H.; Reynolds, R. W.; Shi, L.; Bates, J. J.

    2007-05-01

    Advances in understanding the coupled air-sea system and modeling of the ocean and atmosphere demand increasingly higher resolution data, such as air-sea fluxes of up to 3 hourly and every 50 km. These observational requirements can only be met by utilizing multiple satellite observations. Generation of such high resolution products from multiple-satellite and in-situ observations on an operational basis has been started at the U.S. National Oceanic and Atmospheric Administration (NOAA) National Climatic Data Center. Here we describe a few products that are directly related to the computation of turbulent air-sea fluxes. Sea surface wind speed has been observed from in-situ instruments and multiple satellites, with long-term observations ranging from one satellite in mid-1987 to six or more satellites since mid-2002. A blended product with a global 0.25° grid and four snapshots per day has been produced for July 1987 to present, using a near-Gaussian 3-D (x, y, t) interpolation to minimize aliases. Wind direction has been observed from fewer satellites; thus, for the blended high resolution vector winds and wind stresses, the directions are taken from the NCEP Reanalysis 2 (run operationally in near real time) for climate consistency. The widely used Reynolds Optimum Interpolation SST analysis has been improved with higher resolutions (daily and 0.25°). The improvements use both infrared and microwave satellite data that are bias-corrected by in-situ observations for the period 1985 to present. The new versions provide very significant improvements in terms of resolving ocean features such as the meandering of the Gulf Stream, the Agulhas Current, the equatorial jets and other fronts. The Ta and Qa retrievals are based on measurements from the AMSU sounder onboard the NOAA satellites. Ta retrieval uses AMSU-A data, while Qa retrieval uses both AMSU-A and AMSU-B observations. The retrieval algorithms are developed using the neural network approach. Training

  19. TRITOX: a multiple parameter evaluation of tritium toxicity

    International Nuclear Information System (INIS)

    Carsten, A.L.

    1982-01-01

    The increased use of nuclear reactors for power generation will lead to the introduction of tritium into the environment. The need for assessing possible immediate and long-term effects of exposure to this tritium led to the development of a broad program directed towards evaluating the possible somatic and genetic effects of continuous exposure to tritiated water (HTO). Among the parameters measured are the genetic, cytogenetic, reproductive efficiency, growth, nonspecific lifetime shortening, bone marrow cellularity and stem cell content, relative biological effectiveness as compared to cesium-137 gamma exposure, and related biochemical and microdosimetric evaluations. These parameters have been evaluated on animals maintained on HTO at 10 to 100 times the maximum permissible concentration (0.03 - 3.0 μCi/ml) for HTO. Dominant lethal mutations, chromosome aberrations in regenerating liver, increased sister chromatid exchanges in bone marrow and reduction in bone marrow stem cell content have been observed at the higher concentrations. The relative biological effectiveness for HTO ingestion as compared to external cesium-137 gamma exposures has been found to be between 1 and 2

  20. Single-Channel Blind Estimation of Reverberation Parameters

    DEFF Research Database (Denmark)

    Doire, C.S.J.; Brookes, M. D.; Naylor, P. A.

    2015-01-01

    The reverberation of an acoustic channel can be characterised by two frequency-dependent parameters: the reverberation time and the direct-to-reverberant energy ratio. This paper presents an algorithm for blindly determining these parameters from a single-channel speech signal. The algorithm uses...

  1. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jakob Laigaard; Brincker, Rune; Rytter, Anders

    In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the param...

  2. Estimation of source parameters of Chamoli Earthquake, India

    Indian Academy of Sciences (India)

    R. Narasimhan, Krishtel eMaging Solutions

    meter studies, in different parts of the world. Singh et al (1979) and Sharma and Wason (1994, 1995) have calculated source parameters for Himalayan and nearby regions. To the best of this author's knowledge, the source parameter studies using strong motion data have not been carried out in India so far, though similar ...

  3. Estimating reliability of degraded system based on the probability density evolution with multi-parameter

    Directory of Open Access Journals (Sweden)

    Jiang Ge

    2017-01-01

    Full Text Available System degradation is usually caused by the degradation of multiple parameters. Assessing the reliability of such a system with the universal generating function is of low accuracy compared with Monte Carlo simulation, and the probability density function of the system output performance cannot be obtained. Therefore, a reliability assessment method based on probability density evolution with multiple parameters is presented for complexly degraded systems. Firstly, the system output function is formulated from the transitive relation between the component parameters and the system output performance. Then, the probability density evolution equation is established based on the probability conservation principle and the system output function. Furthermore, the probability distribution characteristics of the system output performance are obtained by solving the differential equation. Finally, the reliability of the degraded system is estimated. This method does not need to discretize the performance parameters and can establish a continuous probability density function of the system output performance with high computational efficiency and low cost. A numerical example shows that the method is applicable to evaluating the reliability of multi-parameter degraded systems.

  4. A novel optimization approach to estimating kinetic parameters of the enzymatic hydrolysis of corn stover

    Directory of Open Access Journals (Sweden)

    Fenglei Qi

    2016-01-01

    Full Text Available Enzymatic hydrolysis is an integral step in the conversion of lignocellulosic biomass to ethanol. The conversion of cellulose to fermentable sugars in the presence of inhibitors is a complex kinetic problem. In this study, we describe a novel approach to estimating the kinetic parameters underlying this process. The study employs experimental data covering a range of substrate and enzyme loadings and sugar and acid inhibition levels for the production of glucose. Multiple objectives that minimize the difference between model predictions and experimental observations are formulated and optimized with a multi-objective particle swarm optimization method. Model reliability is assessed by exploring the likelihood profile in each parameter dimension. Compared to previous studies, this approach improved the prediction of sugar yields by reducing the mean squared errors by 34% for glucose and 2.7% for cellobiose, indicating improved agreement between model predictions and the experimental data. Furthermore, kinetic parameters such as K2IG2, K1IG, K2IG, K1IA, and K3IA are identified as contributors to model non-identifiability and wide parameter confidence intervals. The model reliability analysis indicates possible ways to reduce model non-identifiability and tighten parameter confidence intervals. These results could help improve the design of lignocellulosic biorefineries by providing higher fidelity predictions of fermentable sugars under inhibitory conditions.
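
    The optimization step can be illustrated with a minimal single-objective particle swarm sketch; the study itself uses a multi-objective PSO and a full hydrolysis model with several inhibition constants. The rate law, data points and parameter names (k, Km, Kp) below are hypothetical placeholders, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate(S, P, k, Km, Kp):
    """Hypothetical hydrolysis rate with simple product (sugar) inhibition."""
    return k * S / (Km + S) / (1.0 + P / Kp)

# Synthetic "measurements" of the rate at several substrate/product loadings
S_obs = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
P_obs = np.array([0.0, 2.0, 5.0, 10.0, 20.0])
true_theta = np.array([1.2, 15.0, 8.0])            # k, Km, Kp
r_obs = rate(S_obs, P_obs, *true_theta)

def objective(theta):
    """Sum-of-squares mismatch between model and data for one parameter set."""
    return np.sum((rate(S_obs, P_obs, *theta) - r_obs) ** 2)

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Plain single-objective particle swarm optimizer with box constraints."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g

bounds = np.array([[0.1, 5.0], [1.0, 50.0], [1.0, 50.0]])
print(pso(objective, bounds))   # should approach (k, Km, Kp)
```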

  5. Estimation of the petrophysical parameters of sediments from Chad ...

    African Journals Online (AJOL)

    Porosity was estimated using three methods, and polynomial trends with fits ranging between 0.0604 and 0.478 describe the depth-porosity variations. Interpretation of the trends revealed a lithology trend that agrees with the trend of shaliness. Estimates of the average effective porosities of the formations compared favorably with ...

  6. Approximate effect of parameter pseudonoise intensity on rate of convergence for EKF parameter estimators. [Extended Kalman Filter

    Science.gov (United States)

    Hill, Bryon K.; Walker, Bruce K.

    1991-01-01

    When using parameter estimation methods based on extended Kalman filter (EKF) theory, it is common practice to assume that the unknown parameter values behave like a random process, such as a random walk, in order to guarantee their identifiability by the filter. The present work is the result of an ongoing effort to quantitatively describe the effect that the assumption of a fictitious noise (called pseudonoise) driving the unknown parameter values has on the parameter estimate convergence rate in filter-based parameter estimators. The initial approach is to examine a first-order system described by one state variable with one parameter to be estimated. The intent is to derive analytical results for this simple system that might offer insight into the effect of the pseudonoise assumption for more complex systems. Such results would make it possible to predict the estimator error convergence behavior as a function of the assumed pseudonoise intensity, and this leads to the natural application of the results to the design of filter-based parameter estimators. The results obtained show that the analytical description of the convergence behavior is very difficult.
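
    A sketch of the setting described above, under stated assumptions: a first-order system with one state and one unknown parameter, estimated with an EKF on the augmented state [x, a], where the parameter is driven by a fictitious random walk of intensity q_a (the pseudonoise). All numerical values are illustrative.

```python
import numpy as np

# First-order system x_dot = -a*x + u with unknown parameter a.
# The parameter is modeled as a random walk driven by pseudonoise of intensity
# q_a; a larger q_a speeds convergence but raises the steady-state variance.
dt, a_true, q_a = 0.01, 2.0, 1e-4
rng = np.random.default_rng(1)

def f(z, u):
    """Discretized augmented dynamics for z = [x, a]."""
    x, a = z
    return np.array([x + dt * (-a * x + u), a])

def jacobian(z, u):
    """Jacobian of f with respect to the augmented state."""
    x, a = z
    return np.array([[1.0 - dt * a, -dt * x],
                     [0.0,           1.0   ]])

R = 0.01 ** 2                      # measurement noise variance
Q = np.diag([1e-6, q_a])           # process noise + parameter pseudonoise
z_hat = np.array([0.0, 0.5])       # initial state and parameter guess
P = np.diag([1.0, 1.0])
H = np.array([[1.0, 0.0]])         # only x is measured
x = 0.0

for k in range(5000):
    u = np.sin(0.5 * k * dt)                  # persistently exciting input
    x = x + dt * (-a_true * x + u)            # simulate the true plant
    y = x + rng.normal(0.0, np.sqrt(R))       # noisy measurement
    # EKF prediction
    F = jacobian(z_hat, u)
    z_hat = f(z_hat, u)
    P = F @ P @ F.T + Q
    # EKF update
    S = H @ P @ H.T + R
    K = P @ H.T / S
    z_hat = z_hat + (K * (y - z_hat[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(z_hat[1])   # parameter estimate, should approach a_true = 2.0
```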

  7. ASTROPHYSICAL PRIOR INFORMATION AND GRAVITATIONAL-WAVE PARAMETER ESTIMATION

    International Nuclear Information System (INIS)

    Pankow, Chris; Sampson, Laura; Perri, Leah; Chase, Eve; Coughlin, Scott; Zevin, Michael; Kalogera, Vassiliki

    2017-01-01

    The detection of electromagnetic counterparts to gravitational waves (GWs) has great promise for the investigation of many scientific questions. While it is well known that certain orientation parameters can reduce uncertainty in other related parameters, it was also hoped that the detection of an electromagnetic signal in conjunction with a GW could augment the measurement precision of the mass and spin from the gravitational signal itself. That is, knowledge of the sky location, inclination, and redshift of a binary could break degeneracies between these extrinsic, coordinate-dependent parameters and the physical parameters that are intrinsic to the binary. In this paper, we investigate this issue by assuming perfect knowledge of extrinsic parameters, and assessing the maximal impact of this knowledge on our ability to extract intrinsic parameters. We recover similar gains in extrinsic recovery to earlier work; however, we find only modest improvements in a few intrinsic parameters—namely the primary component’s spin. We thus conclude that, even in the best case, the use of additional information from electromagnetic observations does not improve the measurement of the intrinsic parameters significantly.

  8. ASTROPHYSICAL PRIOR INFORMATION AND GRAVITATIONAL-WAVE PARAMETER ESTIMATION

    Energy Technology Data Exchange (ETDEWEB)

    Pankow, Chris; Sampson, Laura; Perri, Leah; Chase, Eve; Coughlin, Scott; Zevin, Michael; Kalogera, Vassiliki [Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) and Department of Physics and Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208 (United States)

    2017-01-10

    The detection of electromagnetic counterparts to gravitational waves (GWs) has great promise for the investigation of many scientific questions. While it is well known that certain orientation parameters can reduce uncertainty in other related parameters, it was also hoped that the detection of an electromagnetic signal in conjunction with a GW could augment the measurement precision of the mass and spin from the gravitational signal itself. That is, knowledge of the sky location, inclination, and redshift of a binary could break degeneracies between these extrinsic, coordinate-dependent parameters and the physical parameters that are intrinsic to the binary. In this paper, we investigate this issue by assuming perfect knowledge of extrinsic parameters, and assessing the maximal impact of this knowledge on our ability to extract intrinsic parameters. We recover similar gains in extrinsic recovery to earlier work; however, we find only modest improvements in a few intrinsic parameters—namely the primary component’s spin. We thus conclude that, even in the best case, the use of additional information from electromagnetic observations does not improve the measurement of the intrinsic parameters significantly.

  9. Hierarchical parameter estimation of DFIG and drive train system in a wind turbine generator

    Institute of Scientific and Technical Information of China (English)

    Xueping PAN; Ping JU; Feng WU; Yuqing JIN

    2017-01-01

    A new hierarchical parameter estimation method for the doubly fed induction generator (DFIG) and drive train system in a wind turbine generator (WTG) is proposed in this paper. Firstly, the parameters of the DFIG and the drive train are estimated locally under different types of disturbances. Secondly, a coordinated estimation method is then applied to identify the parameters of the DFIG and the drive train simultaneously, with the purpose of attaining globally optimal estimation results. The main benefit of the proposed scheme is the improved estimation accuracy. Estimation results confirm the applicability of the proposed estimation technique.

  10. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jacob Laigaard; Brincker, Rune; Rytter, Anders

    1990-01-01

    In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the parameters...... by a simulation study of a lightly damped single degree of freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and the number of sampled points may play a significant role with respect to the statistical errors. Furthermore......, it is shown that the model errors may also contribute significantly to the uncertainty....

  11. Distributed Dynamic State Estimator, Generator Parameter Estimation and Stability Monitoring Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Meliopoulos, Sakis [Georgia Inst. of Technology, Atlanta, GA (United States); Cokkinides, George [Georgia Inst. of Technology, Atlanta, GA (United States); Fardanesh, Bruce [New York Power Authority, NY (United States); Hedrington, Clinton [U.S. Virgin Islands Water and Power Authority (WAPA), St. Croix (U.S. Virgin Islands)

    2013-12-31

    This is the final report for this project, which was performed in the period October 1, 2009 to June 30, 2013. In this project, a fully distributed high-fidelity dynamic state estimator (DSE) that continuously tracks the real-time dynamic model of a wide-area system with update rates better than 60 times per second was achieved. The proposed technology is based on GPS-synchronized measurements but also utilizes data from all available Intelligent Electronic Devices in the system (numerical relays, digital fault recorders, digital meters, etc.). The distributed state estimator provides the real-time model of the system, not only the voltage phasors. The proposed system provides the infrastructure for a variety of applications, including two very important ones: (a) high-fidelity estimation of generating-unit parameters and (b) energy-function-based transient stability monitoring of a wide-area electric power system with predictive capability. In addition, the dynamic distributed state estimation results are stored (the storage scheme includes data and the coincident model), enabling an automatic reconstruction and “play back” of a system-wide disturbance. This approach enables complete play-back capability with fidelity equal to that of real time, with the advantage of playing back at a user-selected speed. The proposed technologies were developed and tested in the lab during the first 18 months of the project and then demonstrated on two actual systems, the USVI Water and Power Administration system and the New York Power Authority’s Blenheim-Gilboa pumped hydro plant, in the last 18 months of the project. The four main thrusts of this project, mentioned above, are extremely important to the industry. The DSE with the achieved update rates (more than 60 times per second) provides a superior solution to the “grid visibility” question. The generator parameter identification method fills an important and practical need of the industry. The “energy function” based

  12. Stability Properties of Network Diversity Multiple Access with Multiple-Antenna Reception and Imperfect Collision Multiplicity Estimation

    Directory of Open Access Journals (Sweden)

    Ramiro Samano-Robles

    2013-01-01

    Full Text Available In NDMA (network diversity multiple access), protocol-controlled retransmissions are used to create a virtual MIMO (multiple-input multiple-output) system, where collisions can be resolved via source separation. By using this retransmission diversity approach for collision resolution, NDMA is the family of random access protocols with the highest potential throughput. However, several issues remain open today in the modeling and design of this type of protocol, particularly in terms of dynamic stable performance and backlog delay. This paper attempts to partially fill this gap by proposing a Markov model for the study of the dynamic-stable performance of a symmetrical and non-blind NDMA protocol assisted by a multiple-antenna receiver. The model is useful in the study of stability aspects in terms of the backlog-user distribution and average backlog delay. It also allows for the investigation of the different states of the system and the transition probabilities between them. Unlike previous works, the proposed approach considers the imperfect estimation of the collision multiplicity, which is a crucial process to the performance of NDMA. The results suggest that NDMA improves not only the throughput performance over previous solutions, but also the average number of backlogged users, the average backlog delay and, in general, the stability of random access protocols. It is also shown that when multiuser detection conditions degrade, ALOHA-type backlog retransmission becomes relevant to the stable operation of NDMA.

  13. Identifiability measures to select the parameters to be estimated in a solid-state fermentation distributed parameter model.

    Science.gov (United States)

    da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G

    2016-07-08

    Process modeling can lead to advantages such as improved process control, reduced process costs and better product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. In addition, parameter estimation with a parameter identifiability analysis (PIA) is performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters under different assumptions. The results show that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were non-identifiable, and better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful in the estimation procedure, since it may reduce the number of parameters to be evaluated. Further, PIA improved the model results, showing it to be an important step to take. © 2016 American Institute of Chemical Engineers. Biotechnol. Prog., 32:905-917, 2016.

  14. Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea

    KAUST Repository

    Sawlan, Zaid A

    2012-01-01

    parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction while
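
    For context on the filter component mentioned above, a minimal stochastic (perturbed-observation) ensemble Kalman filter analysis step might look like the sketch below; the toy state, observation operator and noise levels are placeholders, not the thesis's tsunami model.

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_operator, obs_cov, rng):
    """Stochastic (perturbed-observation) EnKF analysis step.

    ensemble     : (n_members, n_state) forecast ensemble
    obs          : (n_obs,) observation vector
    obs_operator : callable mapping a state vector to observation space
    obs_cov      : (n_obs, n_obs) observation error covariance
    """
    n_members = ensemble.shape[0]
    Hx = np.array([obs_operator(m) for m in ensemble])   # (n_members, n_obs)
    X = ensemble - ensemble.mean(axis=0)                  # state anomalies
    Y = Hx - Hx.mean(axis=0)                              # predicted-obs anomalies
    Pxy = X.T @ Y / (n_members - 1)
    Pyy = Y.T @ Y / (n_members - 1) + obs_cov
    K = Pxy @ np.linalg.inv(Pyy)                          # Kalman gain
    perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov, n_members)
    return ensemble + (perturbed - Hx) @ K.T

# Toy usage: update a 2-element state from a noisy observation of its first element
rng = np.random.default_rng(0)
ens = rng.normal([1.0, 0.0], 1.0, size=(100, 2))
updated = enkf_analysis(ens, np.array([2.0]), lambda x: x[:1],
                        np.array([[0.1]]), rng)
print(updated.mean(axis=0))
```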

  15. Estimation of parameter sensitivities for stochastic reaction networks

    KAUST Repository

    Gupta, Ankit

    2016-01-01

    Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a

  16. A novel parameter estimation method for metal oxide surge arrester ...

    Indian Academy of Sciences (India)

    the program, which is based on the MAPSO algorithm, can determine the fitness and parameters ... to solve many optimization problems (Kennedy & Eberhart 1995; Eberhart & Shi 2001; Gaing 2003) ... describe the content of this concept.

  17. BIASED BEARINGS-ONLY PARAMETER ESTIMATION FOR BISTATIC SYSTEM

    Institute of Scientific and Technical Information of China (English)

    Xu Benlian; Wang Zhiquan

    2007-01-01

    According to the biased angles provided by the bistatic sensors, the necessary condition for observability and the Cramer-Rao lower bounds for the bistatic system are derived and analyzed, respectively. Additionally, a dual Kalman filter method is presented with the purpose of eliminating the effect of biased angles on the state variable estimation. Finally, Monte-Carlo simulations are conducted in the observable scenario. Simulation results show that the proposed theory holds true, and that the dual Kalman filter method can estimate the state variables and biased angles simultaneously. Furthermore, the estimates can achieve their Cramer-Rao lower bounds.

  18. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull...
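
    As an illustration of the final fitting step, the sketch below draws simulated cycles-to-failure and fits a two-parameter Weibull distribution by maximum likelihood; in the approach summarized above, those failure times would instead come from the physics-of-failure damage model reaching the accumulated-damage threshold. All numbers are placeholders.

```python
from scipy import stats

# Simulated cycles-to-failure for a batch of solder joints (placeholder values;
# in the cited approach these would come from the damage model reaching a
# threshold of unity).
cycles_to_failure = stats.weibull_min.rvs(c=2.5, scale=12000.0, size=200,
                                          random_state=0)

# Maximum-likelihood fit of the two-parameter Weibull (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(cycles_to_failure, floc=0.0)
print(f"shape (beta) = {shape:.2f}, characteristic life (eta) = {scale:.0f} cycles")

# Reliability at a given number of cycles follows directly from the fitted model.
print("R(8000 cycles) =", stats.weibull_min.sf(8000.0, shape, loc=0.0, scale=scale))
```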

  19. Muon energy estimate through multiple scattering with the MACRO detector

    Energy Technology Data Exchange (ETDEWEB)

    Ambrosio, M.; Antolini, R.; Auriemma, G.; Bakari, D.; Baldini, A.; Barbarino, G.C.; Barish, B.C.; Battistoni, G.; Becherini, Y.; Bellotti, R.; Bemporad, C.; Bernardini, P.; Bilokon, H.; Bloise, C.; Bower, C.; Brigida, M.; Bussino, S.; Cafagna, F.; Calicchio, M.; Campana, D.; Candela, A.; Carboni, M.; Caruso, R.; Cassese, F.; Cecchini, S.; Cei, F.; Chiarella, V.; Choudhary, B.C.; Coutu, S.; Cozzi, M.; De Cataldo, G.; De Deo, M.; Dekhissi, H.; De Marzo, C.; De Mitri, I.; Derkaoui, J.; De Vincenzi, M.; Di Credico, A.; Dincecco, M.; Erriquez, O.; Favuzzi, C.; Forti, C.; Fusco, P.; Giacomelli, G.; Giannini, G.; Giglietto, N.; Giorgini, M.; Grassi, M.; Gray, L.; Grillo, A.; Guarino, F.; Gustavino, C.; Habig, A.; Hanson, K.; Heinz, R.; Iarocci, E.; Katsavounidis, E.; Katsavounidis, I.; Kearns, E.; Kim, H.; Kyriazopoulou, S.; Lamanna, E.; Lane, C.; Levin, D.S.; Lindozzi, M.; Lipari, P.; Longley, N.P.; Longo, M.J.; Loparco, F.; Maaroufi, F.; Mancarella, G.; Mandrioli, G.; Margiotta, A.; Marini, A.; Martello, D.; Marzari-Chiesa, A.; Mazziotta, M.N.; Michael, D.G.; Monacelli, P.; Montaruli, T.; Monteno, M.; Mufson, S.; Musser, J.; Nicolo, D.; Nolty, R.; Orth, C.; Osteria, G.; Palamara, O.; Patera, V.; Patrizii, L.; Pazzi, R.; Peck, C.W.; Perrone, L.; Petrera, S.; Pistilli, P.; Popa, V.; Raino, A.; Reynoldson, J.; Ronga, F.; Rrhioua, A.; Satriano, C.; Scapparone, E. E-mail: eugenio.scapparone@bo.infn.it; Scholberg, K.; Sciubba, A.; Serra, P.; Sioli, M. E-mail: maximiliano.sioli@bo.infn.it; Sirri, G.; Sitta, M.; Spinelli, P.; Spinetti, M.; Spurio, M.; Steinberg, R.; Stone, J.L.; Sulak, L.R.; Surdo, A.; Tarle, G.; Tatananni, E.; Togo, V.; Vakili, M.; Walter, C.W.; Webb, R

    2002-10-21

    Muon energy measurement represents an important issue for any experiment addressing neutrino-induced up-going muon studies. Since the neutrino oscillation probability depends on the neutrino energy, a measurement of the muon energy adds an important piece of information concerning the neutrino system. We show in this paper how the MACRO limited streamer tube system can be operated in drift mode by using the TDCs included in the QTPs, an electronics designed for magnetic monopole search. An improvement of the space resolution is obtained, through an analysis of the multiple scattering of muon tracks as they pass through our detector. This information can be used further to obtain an estimate of the energy of muons crossing the detector. Here we present the results of two dedicated tests, performed at CERN PS-T9 and SPS-X7 beam lines, to provide a full check of the electronics and to exploit the feasibility of such a multiple scattering analysis. We show that by using a neural network approach, we are able to reconstruct the muon energy for Eμ < 40 GeV. The test beam data provide an absolute energy calibration, which allows us to apply this method to MACRO data.

  20. Muon energy estimate through multiple scattering with the MACRO detector

    CERN Document Server

    Ambrosio, M; Auriemma, G; Bakari, D; Baldini, A; Barbarino, G C; Barish, B C; Battistoni, G; Becherini, Y; Bellotti, R; Bemporad, C; Bernardini, P; Bilokon, H; Bloise, C; Bower, C; Brigida, M; Bussino, S; Cafagna, F; Calicchio, M; Campana, D; Candela, A; Carboni, M; Caruso, R; Cassese, F; Cecchini, S; Cei, F; Chiarella, V; Choudhary, B C; Coutu, S; Cozzi, M; De Cataldo, G; De Deo, M; Dekhissi, H; De Marzo, C; De Mitri, I; Derkaoui, J; De Vincenzi, M; Di Credico, A; Dincecco, M; Erriquez, O; Favuzzi, C; Forti, C; Fusco, P; Giacomelli, G; Giannini, G; Giglietto, N; Giorgini, M; Grassi, M; Gray, L; Grillo, A; Guarino, F; Gustavino, C; Habig, A; Hanson, K; Heinz, R; Iarocci, E; Katsavounidis, E; Katsavounidis, I; Kearns, E; Kim, H; Kyriazopoulou, S; Lamanna, E; Lane, C; Levin, D S; Lindozzi, M; Lipari, P; Longley, N P; Longo, M J; Loparco, F; Maaroufi, F; Mancarella, G; Mandrioli, G; Margiotta, A; Marini, A; Martello, D; Marzari-Chiesa, A; Mazziotta, M N; Michael, D G; Monacelli, P; Montaruli, T; Monteno, M; Mufson, S; Musser, J; Nicolò, D; Nolty, R; Orth, C; Osteria, G; Palamara, O; Patera, V; Patrizii, L; Pazzi, R; Peck, C W; Perrone, L; Petrera, S; Pistilli, P; Popa, V; Rainó, A; Reynoldson, J; Ronga, F; Rrhioua, A; Satriano, C; Scapparone, E; Scholberg, K; Sciubba, A; Serra, P; Sioli, M; Sirri, G; Sitta, M; Spinelli, P; Spinetti, M; Spurio, M; Steinberg, R; Stone, J L; Sulak, L R; Surdo, A; Tarle, G; Tatananni, E; Togo, V; Vakili, M; Walter, C W; Webb, R

    2002-01-01

    Muon energy measurement represents an important issue for any experiment addressing neutrino-induced up-going muon studies. Since the neutrino oscillation probability depends on the neutrino energy, a measurement of the muon energy adds an important piece of information concerning the neutrino system. We show in this paper how the MACRO limited streamer tube system can be operated in drift mode by using the TDCs included in the QTPs, an electronics designed for magnetic monopole search. An improvement of the space resolution is obtained, through an analysis of the multiple scattering of muon tracks as they pass through our detector. This information can be used further to obtain an estimate of the energy of muons crossing the detector. Here we present the results of two dedicated tests, performed at CERN PS-T9 and SPS-X7 beam lines, to provide a full check of the electronics and to exploit the feasibility of such a multiple scattering analysis. We show that by using a neural network approach, we are able to r...

  1. Estimating kinetic mechanisms with prior knowledge I: Linear parameter constraints.

    Science.gov (United States)

    Salari, Autoosa; Navarro, Marco A; Milescu, Mirela; Milescu, Lorin S

    2018-02-05

    To understand how ion channels and other proteins function at the molecular and cellular levels, one must decrypt their kinetic mechanisms. Sophisticated algorithms have been developed that can be used to extract kinetic parameters from a variety of experimental data types. However, formulating models that not only explain new data, but are also consistent with existing knowledge, remains a challenge. Here, we present a two-part study describing a mathematical and computational formalism that can be used to enforce prior knowledge into the model using constraints. In this first part, we focus on constraints that enforce explicit linear relationships involving rate constants or other model parameters. We develop a simple, linear algebra-based transformation that can be applied to enforce many types of model properties and assumptions, such as microscopic reversibility, allosteric gating, and equality and inequality parameter relationships. This transformation converts the set of linearly interdependent model parameters into a reduced set of independent parameters, which can be passed to an automated search engine for model optimization. In the companion article, we introduce a complementary method that can be used to enforce arbitrary parameter relationships and any constraints that quantify the behavior of the model under certain conditions. The procedures described in this study can, in principle, be coupled to any of the existing methods for solving molecular kinetics for ion channels or other proteins. These concepts can be used not only to enforce existing knowledge but also to formulate and test new hypotheses. © 2018 Salari et al.
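
    A minimal sketch of the kind of linear-algebra transformation described above: linear constraints on the (log-)rate constants, written as A x = b, are eliminated by reparameterizing with a particular solution plus a null-space basis, so that an optimizer only ever sees the reduced set of independent parameters. The specific constraints and rate values below are hypothetical and are not taken from the article.

```python
import numpy as np
from scipy.linalg import null_space

# Example: a 4-rate model with two linear constraints on the log-rates x_i = log k_i,
#   k1 * k2 = k3 * k4   (microscopic reversibility:  x1 + x2 - x3 - x4 = 0)
#   k1 = 2 * k2         (equality relationship:      x1 - x2 = log 2)
# written compactly as A x = b.
A = np.array([[1.0,  1.0, -1.0, -1.0],
              [1.0, -1.0,  0.0,  0.0]])
b = np.array([0.0, np.log(2.0)])

x_particular = np.linalg.lstsq(A, b, rcond=None)[0]   # one solution of A x = b
N = null_space(A)                                     # basis of the free directions

def expand(z):
    """Map the reduced, independent parameters z to a full parameter vector x
    that satisfies the constraints exactly."""
    return x_particular + N @ z

# Any z handed to an automated search engine now yields admissible rate constants:
z = np.array([0.3, -0.1])
x = expand(z)
print(np.exp(x))        # rate constants
print(A @ x - b)        # ~0: constraints hold to machine precision
```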

  2. Stellar atmospheric parameter estimation using Gaussian process regression

    Science.gov (United States)

    Bu, Yude; Pan, Jingchang

    2015-02-01

    As is well known, it is necessary to derive stellar parameters from massive amounts of spectral data automatically and efficiently. However, in traditional automatic methods such as artificial neural networks (ANNs) and kernel regression (KR), it is often difficult to optimize the algorithm structure and determine the optimal algorithm parameters. Gaussian process regression (GPR) is a recently developed method that has been proven to be capable of overcoming these difficulties. Here we apply GPR to derive stellar atmospheric parameters from spectra. Through evaluating the performance of GPR on Sloan Digital Sky Survey (SDSS) spectra, Medium resolution Isaac Newton Telescope Library of Empirical Spectra (MILES) spectra, ELODIE spectra and the spectra of member stars of galactic globular clusters, we conclude that GPR can derive stellar parameters accurately and precisely, especially when we use data preprocessed with principal component analysis (PCA). We then compare the performance of GPR with that of several widely used regression methods (ANNs, support-vector regression and KR) and find that with GPR it is easier to optimize structures and parameters and more efficient and accurate to extract atmospheric parameters.
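
    A short sketch of the PCA-plus-GPR pipeline described above, using scikit-learn; the spectra and effective temperatures are random placeholders standing in for SDSS/MILES/ELODIE data, and the kernel choice is an assumption rather than the authors' configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Placeholder data: rows are continuum-normalized spectra, targets are Teff [K].
rng = np.random.default_rng(0)
spectra = rng.normal(size=(300, 1000))
teff = rng.uniform(4000.0, 7000.0, size=300)

# Compress the spectra with PCA before regression, as the study recommends.
pca = PCA(n_components=20).fit(spectra[:250])
X_train, X_test = pca.transform(spectra[:250]), pca.transform(spectra[250:])

gpr = GaussianProcessRegressor(
    kernel=RBF(length_scale=10.0) + WhiteKernel(noise_level=1.0),
    normalize_y=True)
gpr.fit(X_train, teff[:250])

# GPR returns predictions together with their uncertainties.
pred, std = gpr.predict(X_test, return_std=True)
print(pred[:5], std[:5])
```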

  3. Retrospective forecast of ETAS model with daily parameters estimate

    Science.gov (United States)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

    We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on the daily updating of the free parameters during the background, learning and test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that actually occurred. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecasted events, due to model parameters that were kept fixed during the test. Moreover, the absence in the learning catalog of an event with a magnitude similar to that of the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model in which the parameters remain fixed during the test period.
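
    For reference, the conditional intensity that such 1-day ETAS forecasts are built on has the standard form λ(t) = μ + Σ_{t_i < t} K exp[α(m_i − m0)] (t − t_i + c)^(−p). The sketch below evaluates it for a toy catalog under a "fixed" and a "daily-updated" parameter set; the parameter values are placeholders, not those estimated in the study.

```python
import numpy as np

def etas_intensity(t, catalog_times, catalog_mags, mu, K, alpha, c, p, m0):
    """ETAS conditional intensity at time t given the past events (t_i < t):
    lambda(t) = mu + sum_i K * exp(alpha*(m_i - m0)) * (t - t_i + c)**(-p)."""
    past = catalog_times < t
    dt = t - catalog_times[past]
    return mu + np.sum(K * np.exp(alpha * (catalog_mags[past] - m0))
                       * (dt + c) ** (-p))

# Toy catalog (days, magnitudes) and two illustrative parameter sets standing in
# for "fixed learning-phase" versus "daily re-estimated" values.
times = np.array([0.0, 0.5, 0.8, 2.0])
mags = np.array([5.9, 4.1, 4.4, 3.8])
for label, params in [("fixed", dict(mu=0.2, K=0.05, alpha=1.6, c=0.01, p=1.10)),
                      ("daily-updated", dict(mu=0.2, K=0.08, alpha=1.8, c=0.01, p=1.05))]:
    print(label, etas_intensity(3.0, times, mags, m0=3.0, **params))
```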

  4. Rotating Parabolic-Reflector Antenna Target in SAR Data: Model, Characteristics, and Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Bin Deng

    2013-01-01

    Full Text Available Parabolic-reflector antennas (PRAs), usually rotating, are a particular type of target of potential interest to the synthetic aperture radar (SAR) community. This paper aims to investigate PRAs' scattering characteristics and then to extract PRA parameters from SAR returns, to support image interpretation and target recognition. We first obtain closed-form and numerical solutions to PRA backscattering by geometrical optics (GO), physical optics, and graphical electromagnetic computation, respectively. Based on the GO solution, a migratory scattering center model is first presented to represent the movement of the specular point with aspect angle, and then a hybrid model, named the migratory/micromotion scattering center (MMSC) model, is proposed for characterizing a rotating PRA in the SAR geometry, which incorporates the PRA's rotation into its migratory scattering center model. Additionally, we analyze in detail PRA radar characteristics in terms of radar cross-section, high-resolution range profiles, time-frequency distributions, and 2D images, which also confirm the proposed models. A maximum-likelihood estimator is developed to jointly solve the MMSC model for the PRA's multiple parameters by optimization. By exploiting the aforementioned characteristics, the coarse parameter estimation guarantees convergence to the global minimum. The recovered signatures can be favorably utilized for SAR image interpretation and target recognition.

  5. Simultaneous identification of unknown groundwater pollution sources and estimation of aquifer parameters

    Science.gov (United States)

    Datta, Bithin; Chakrabarty, Dibakar; Dhar, Anirban

    2009-09-01

    Pollution source identification is a frequently encountered problem. In the absence of prior information about flow and transport parameters, the performance of source identification models depends on the accuracy with which these parameters are estimated. A methodology is developed for simultaneous pollution source identification and parameter estimation in groundwater systems. The groundwater flow and transport simulator is linked to the nonlinear optimization model as an external module. The simulator defines the flow and transport processes and serves as a binding equality constraint. The Jacobian matrix, which determines the search direction in the nonlinear optimization model, links the groundwater flow-transport simulator and the optimization method. The performance of the proposed methodology using spatiotemporal hydraulic head values and pollutant concentration measurements is evaluated by solving illustrative problems. Two different decision model formulations are developed, and their computational efficiency is compared using two nonlinear optimization algorithms. The proposed methodology addresses some of the computational limitations of the embedded optimization technique, which embeds the discretized flow and transport equations as equality constraints of the optimization. The solution results obtained are also better than those obtained using the embedded optimization technique. The performance evaluations reported here demonstrate the potential applicability of the developed methodology to a fairly large aquifer study area with multiple unknown pollution sources.
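
    A heavily simplified sketch of the simulation-optimization idea: here the "simulator" is a 1-D analytical solution for an instantaneous point source in uniform flow, and a nonlinear least-squares solver jointly recovers the source characteristics (mass, release location) and two aquifer parameters (velocity, dispersion) from synthetic monitoring-well data. All names and values are illustrative; the models in the paper are far more general.

```python
import numpy as np
from scipy.optimize import least_squares

def concentration(x, t, mass, x0, velocity, dispersion):
    """1-D analytical solution for an instantaneous point source of strength
    `mass` (per unit cross-sectional area) released at x0 in a uniform flow."""
    spread = 4.0 * dispersion * t
    return mass / np.sqrt(np.pi * spread) * np.exp(-(x - x0 - velocity * t) ** 2 / spread)

# Synthetic "monitoring well" data: concentrations at 3 wells and 4 times.
wells = np.array([40.0, 60.0, 80.0])               # m
times = np.array([50.0, 100.0, 150.0, 200.0])      # days
X, T = np.meshgrid(wells, times)
true = dict(mass=500.0, x0=10.0, velocity=0.3, dispersion=1.5)
rng = np.random.default_rng(0)
obs = concentration(X, T, **true) * (1.0 + 0.02 * rng.normal(size=X.shape))

def residuals(theta):
    """Mismatch between simulated and observed concentrations for a candidate
    source (mass, location) and aquifer parameters (velocity, dispersion)."""
    mass, source_x, velocity, dispersion = theta
    return (concentration(X, T, mass, source_x, velocity, dispersion) - obs).ravel()

fit = least_squares(residuals, x0=[300.0, 0.0, 0.2, 1.0],
                    bounds=([0.0, -50.0, 0.01, 0.1], [5e3, 50.0, 2.0, 10.0]))
print(fit.x)   # should be close to (500, 10, 0.3, 1.5)
```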

  6. Development of simple kinetic models and parameter estimation for ...

    African Journals Online (AJOL)

    PANCHIGA

    2016-09-28

    Sep 28, 2016 ... estimation for simulation of recombinant human serum albumin ... and recombinant protein production by P. pastoris without requiring complex models. Key words: ..... SDS-PAGE and showed the same molecular size as.

  7. The effect of selection on genetic parameter estimates

    African Journals Online (AJOL)

    Unknown

    The South African Journal of Animal Science is available online at ... A simulation study was carried out to investigate the effect of selection on the estimation of genetic ... The model contained a fixed effect, random genetic and random.

  8. (Co) variance Components and Genetic Parameter Estimates for Re

    African Journals Online (AJOL)

    Mapula

    The magnitude of heritability estimates obtained in the current study ... traits were recently introduced to supplement progeny testing programmes or for usage as sole source of ..... VCE-5 User's Guide and Reference Manual Version 5.1.

  9. Empirical estimation of school siting parameter towards improving children's safety

    Science.gov (United States)

    Aziz, I. S.; Yusoff, Z. M.; Rasam, A. R. A.; Rahman, A. N. N. A.; Omar, D.

    2014-02-01

    Distance from school to home is a key determinant in ensuring the safety of children. School siting parameters are intended to ensure that a particular school is located in a safe environment. They are set by the Department of Town and Country Planning Malaysia (DTCP), and the latest review was in June 2012. These school siting parameters are crucially important as they can affect the safety and reputation of the school, as well as the perceptions of pupils and parents. There have been many studies reviewing school siting parameters, since these change along with an ever-changing world. In this study, the focus is the impact of school siting parameters on people with low income living in urban areas, specifically in Johor Bahru, Malaysia. To achieve this, the study uses two methods, on-site and off-site. The on-site method consists of questionnaires given to people, and the off-site method uses a Geographic Information System (GIS) and Statistical Product and Service Solutions (SPSS) to analyse the results obtained from the questionnaires. The output is a map of suitable safe distances from school to home. The results of this study will be useful to people with low income, as their children tend to walk to school rather than use transportation.

  10. Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly

    Science.gov (United States)

    Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.

    2013-01-01

    Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…

  11. Efficient estimates of cochlear hearing loss parameters in individual listeners

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten

    2013-01-01

    It has been suggested that the level corresponding to the knee-point of the basilar membrane (BM) input/output (I/O) function can be used to estimate the amount of inner- and outer hair-cell loss (IHL, OHL) in listeners with a moderate cochlear hearing impairment (Plack et al., 2004). According...... to Jepsen and Dau (2011), IHL + OHL = HLT [dB], where HLT stands for total hearing loss. Hence, having estimates of the total hearing loss and OHC loss, one can estimate the IHL. In the present study, results from forward masking experiments based on temporal masking curves (TMC; Nelson et al., 2001... estimates of the knee-point level. Further, it is explored whether it is possible to estimate the compression ratio using only on-frequency TMCs. 10 normal-hearing and 10 hearing-impaired listeners (with mild-to-moderate sensorineural hearing loss) were tested at 1, 2 and 4 kHz. The results showed...

  12. Sugarcane maturity estimation through edaphic-climatic parameters

    Directory of Open Access Journals (Sweden)

    Scarpari Maximiliano Salles

    2004-01-01

    Full Text Available Sugarcane (Saccharum officinarum L.) grows under different weather conditions that directly affect crop maturation. Models that predict raw material quality are important tools in sugarcane crop management; their goal is to provide productivity estimates during harvesting, increasing the efficiency of strategic and administrative decisions. The objective of this work was to develop a model to predict Total Recoverable Sugars (TRS) during harvesting, using data related to production factors such as soil water storage and negative degree-days. The database of a sugar mill for the crop seasons 1999/2000, 2000/2001 and 2001/2002 was analyzed, and statistical models were tested to estimate raw material quality. The maturity model for one-year-old sugarcane proved to be significant, with a coefficient of determination (R²) of 0.7049*. No differences were detected between measured and estimated data in the simulation (P < 0.05).

  13. An Introduction to Goodness of Fit for PMU Parameter Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Riepnieks, Artis; Kirkham, Harold

    2017-10-01

    New results of measurements of phasor-like signals are presented, based on our previous work on the topic. In this document an improved estimation method is described. The algorithm (which is realized in MATLAB software) is discussed. We examine the effect of noisy and distorted signals on the Goodness of Fit metric. The estimation method is shown to perform very well with clean data, with a measurement window as short as half a cycle and as few as 5 samples per cycle. The Goodness of Fit decreases predictably with added phase noise, and seems to be acceptable even with visible distortion in the signal. While the exact results we obtain are specific to our method of estimation, the Goodness of Fit method could be implemented in any phasor measurement unit.
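
    A minimal sketch of the idea, under stated assumptions: the in-phase and quadrature components of a sinusoid at the nominal frequency are fitted by linear least squares, and a residual-based goodness-of-fit figure is reported. The signal-to-residual ratio used here is an illustrative metric and is not necessarily the Goodness of Fit defined by the authors.

```python
import numpy as np

def estimate_phasor(samples, t, f0):
    """Least-squares fit of A*cos(2*pi*f0*t + phi) at a known nominal frequency.
    Returns amplitude, phase and a residual-based goodness-of-fit figure
    (signal-to-residual ratio in dB)."""
    # Linear model: y = a*cos(w t) + b*sin(w t), with A = hypot(a, b), phi = atan2(-b, a)
    w = 2.0 * np.pi * f0
    H = np.column_stack([np.cos(w * t), np.sin(w * t)])
    coef, *_ = np.linalg.lstsq(H, samples, rcond=None)
    fit = H @ coef
    residual = samples - fit
    gof_db = 10.0 * np.log10(np.sum(fit ** 2) / np.sum(residual ** 2))
    amplitude = np.hypot(coef[0], coef[1])
    phase = np.arctan2(-coef[1], coef[0])
    return amplitude, phase, gof_db

# 5 samples per cycle over one cycle of a 50 Hz signal, with a little noise.
f0, fs = 50.0, 250.0
t = np.arange(5) / fs
rng = np.random.default_rng(0)
samples = 1.0 * np.cos(2 * np.pi * f0 * t + 0.3) + 1e-3 * rng.normal(size=t.size)
print(estimate_phasor(samples, t, f0))
```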

  14. Response-Based Estimation of Sea State Parameters

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2007-01-01

    of measured ship responses. It is therefore interesting to investigate how the filtering aspect, introduced by FRF, affects the final outcome of the estimation procedures. The paper contains a study based on numerical generated time series, and the study shows that filtering has an influence...... calculated by a 3-D time domain code and by closed-form (analytical) expressions, respectively. Based on comparisons with wave radar measurements and satellite measurements it is seen that the wave estimations based on closedform expressions exhibit a reasonable energy content, but the distribution of energy...

  15. Application of Parameter Estimation for Diffusions and Mixture Models

    DEFF Research Database (Denmark)

    Nolsøe, Kim

    The first part of this thesis proposes a method to determine the preferred number of structures, their proportions and the corresponding geometrical shapes of an m-membered ring molecule. This is obtained by formulating a statistical model for the data and constructing an algorithm which samples...... with the posterior score function. From an application point of view this methodology is easy to apply, since the optimal estimating function G(θ; Xt1, ..., Xtn) is equal to the classical optimal estimating function, plus a correction term which takes into account the prior information. The methodology is particularly...

  16. Estimation of beech pyrolysis kinetic parameters by Shuffled Complex Evolution.

    Science.gov (United States)

    Ding, Yanming; Wang, Changjian; Chaos, Marcos; Chen, Ruiyu; Lu, Shouxiang

    2016-01-01

    The pyrolysis kinetics of a typical biomass energy feedstock, beech, was investigated based on thermogravimetric analysis over a wide heating rate range from 5K/min to 80K/min. A three-component (corresponding to hemicellulose, cellulose and lignin) parallel decomposition reaction scheme was applied to describe the experimental data. The resulting kinetic reaction model was coupled to an evolutionary optimization algorithm (Shuffled Complex Evolution, SCE) to obtain model parameters. To the authors' knowledge, this is the first study in which SCE has been used in the context of thermogravimetry. The kinetic parameters were simultaneously optimized against data for 10, 20 and 60K/min heating rates, providing excellent fits to experimental data. Furthermore, it was shown that the optimized parameters were applicable to heating rates (5 and 80K/min) beyond those used to generate them. Finally, the predicted results based on optimized parameters were contrasted with those based on the literature. Copyright © 2015 Elsevier Ltd. All rights reserved.
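
    A compact sketch of the model-fitting problem: a three-component parallel first-order decomposition scheme integrated over a constant heating rate, with kinetic parameters recovered by a global optimizer. SciPy's differential_evolution is used here as a stand-in for the Shuffled Complex Evolution algorithm of the study, and all kinetic values, bounds and the use of a single heating rate are illustrative (the study fits several heating rates simultaneously).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

R_GAS = 8.314  # J/(mol K)

def residual_mass(T_grid, beta, params):
    """Residual mass fraction for three parallel first-order reactions
    (nominally hemicellulose, cellulose, lignin) under constant heating rate
    beta [K/s].  params = (logA1, E1, logA2, E2, logA3, E3, c1, c2), with E in
    J/mol and the third mass fraction c3 = 1 - c1 - c2."""
    logA = np.asarray(params[0:6:2])
    E = np.asarray(params[1:6:2])
    c = np.array([params[6], params[7], 1.0 - params[6] - params[7]])

    def rhs(T, alpha):
        # dalpha/dT = (A / beta) * exp(-E / (R T)) * (1 - alpha)
        return (10.0 ** logA / beta) * np.exp(-E / (R_GAS * T)) * (1.0 - alpha)

    sol = solve_ivp(rhs, (T_grid[0], T_grid[-1]), np.zeros(3),
                    t_eval=T_grid, method="LSODA")
    return 1.0 - sol.y.T @ c

# Synthetic TGA curve at 20 K/min played back as the "experimental" data.
T = np.linspace(400.0, 900.0, 200)                  # K
beta = 20.0 / 60.0                                  # 20 K/min in K/s
true = (8.0, 100e3, 15.0, 200e3, 2.0, 50e3, 0.3, 0.45)
data = residual_mass(T, beta, true)

def objective(p):
    return np.sum((residual_mass(T, beta, p) - data) ** 2)

bounds = [(6, 10), (80e3, 130e3), (12, 18), (170e3, 230e3),
          (0, 4), (30e3, 80e3), (0.1, 0.5), (0.2, 0.6)]
# Small iteration budget for illustration only; a real fit would use a larger
# budget and several heating rates at once.
result = differential_evolution(objective, bounds, maxiter=30, seed=0, tol=1e-8)
print(result.x)
```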

  17. LIKELIHOOD ESTIMATION OF PARAMETERS USING SIMULTANEOUSLY MONITORED PROCESSES

    DEFF Research Database (Denmark)

    Friis-Hansen, Peter; Ditlevsen, Ove Dalager

    2004-01-01

    The topic is maximum likelihood inference from several simultaneously monitored response processes of a structure, to obtain knowledge about the parameters of other important but unmonitored response processes when the structure is subject to some Gaussian load field in space and time. The considered example is a ship sailing with a given speed through a Gaussian wave field.

  18. parameter extraction and estimation based on the pv panel outdoor

    African Journals Online (AJOL)

    userpc

    The five parameters in Equation (1) depend on the incident solar irradiance, the cell temperature, and on their reference values. These reference values are generally provided by manufacturers of PV modules for a specified operating condition such as STC (Standard Test Conditions), for which the irradiance is 1000 W/m² and the.

  19. Unconstrained parameter estimation for assessment of dynamic cerebral autoregulation

    International Nuclear Information System (INIS)

    Chacón, M; Nuñez, N; Henríquez, C; Panerai, R B

    2008-01-01

    Measurement of dynamic cerebral autoregulation (CA), the transient response of cerebral blood flow (CBF) to changes in arterial blood pressure (ABP), has been performed with an index of autoregulation (ARI), related to the parameters of a second-order differential equation model, namely gain (K), damping factor (D) and time constant (T). Limitations of the ARI were addressed by increasing its numerical resolution and generalizing the parameter space. In 16 healthy subjects, recordings of ABP (Finapres) and CBF velocity (ultrasound Doppler) were performed at rest, before, during and after 5% CO2 breathing, and for six repeated thigh cuff maneuvers. The unconstrained model produced lower predictive error (p < 0.001) than the original model. Unconstrained parameters (K'–D'–T') were significantly different from K–D–T but were still sensitive to different measurement conditions, such as the under-regulation induced by hypercapnia. The intra-subject variability of K' was significantly lower than that of the ARI and this parameter did not show the unexpected occurrences of zero values as observed with the ARI and the classical value of K. These results suggest that K' could be considered as a more stable and reliable index of dynamic autoregulation than ARI. Further studies are needed to validate this new index under different clinical conditions

  20. Measurement Error Estimation for Capacitive Voltage Transformer by Insulation Parameters

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2017-03-01

    Full Text Available The measurement errors of a capacitive voltage transformer (CVT) are related to its equivalent parameters, to which its capacitive divider contributes the most. In daily operation, dielectric aging, moisture, dielectric breakdown and similar effects act together on the capacitive divider's insulation characteristics, leading to fluctuations in the equivalent parameters, which result in measurement errors. This paper proposes an equivalent circuit model to represent a CVT that incorporates the insulation characteristics of the capacitive divider. Through software simulation and laboratory experiments, the relationship between measurement errors and insulation parameters is obtained. It indicates that variation of the insulation parameters of a CVT will cause an appreciable measurement error. From field tests and calculation, equivalent capacitance mainly affects the magnitude error, while dielectric loss mainly affects the phase error. As the capacitance changes by 0.2%, the magnitude error can reach −0.2%. As the dielectric loss factor changes by 0.2%, the phase error can reach 5′. An increase of the equivalent capacitance and dielectric loss factor in the high-voltage capacitor will cause a positive real power measurement error, whereas an increase in the low-voltage capacitor will cause a negative real power measurement error.

  1. Estimation of Aerodynamic Parameters in Conditions of Measurement

    Directory of Open Access Journals (Sweden)

    Htang Om Moung

    2017-01-01

    Full Text Available The paper discusses the problem of aircraft parameter identification in the presence of measurement noise. It is assumed that all the signals involved in the identification process are subject to measurement noise, that is, normally distributed random measurement errors. Simulation results are presented which show the relation between the noise standard deviations and the accuracy of identification.

  2. A general method of estimating stellar astrophysical parameters from photometry

    NARCIS (Netherlands)

    Belikov, A. N.; Roeser, S.

    2008-01-01

    Context. Applying photometric catalogs to the study of the population of the Galaxy is hampered by the impossibility of mapping photometric colors directly onto astrophysical parameters. Most all-sky catalogs like ASCC or 2MASS are based upon broad-band photometric systems, and the use of broad

  3. Hierarchical Bayesian parameter estimation for cumulative prospect theory

    NARCIS (Netherlands)

    Nilsson, H.; Rieskamp, J.; Wagenmakers, E.-J.

    2011-01-01

    Cumulative prospect theory (CPT; Tversky & Kahneman, 1992) has provided one of the most influential accounts of how people make decisions under risk. CPT is a formal model with parameters that quantify psychological processes such as loss aversion, subjective values of gains and losses, and

  4. Efficient three-dimensional reconstruction of aquatic vegetation geometry: Estimating morphological parameters influencing hydrodynamic drag

    Science.gov (United States)

    Liénard, Jean; Lynn, Kendra; Strigul, Nikolay; Norris, Benjamin K.; Gatziolis, Demetrios; Mullarney, Julia C.; Bryan, Karin, R.; Henderson, Stephen M.

    2016-09-01

    Aquatic vegetation can shelter coastlines from energetic waves and tidal currents, sometimes enabling accretion of fine sediments. Simulation of flow and sediment transport within submerged canopies requires quantification of vegetation geometry. However, field surveys used to determine vegetation geometry can be limited by the time required to obtain conventional caliper and ruler measurements. Building on recent progress in photogrammetry and computer vision, we present a method for reconstructing three-dimensional canopy geometry. The method was used to survey a dense canopy of aerial mangrove roots, called pneumatophores, in Vietnam's Mekong River Delta. Photogrammetric estimation of geometry required 1) taking numerous photographs at low tide from multiple viewpoints around 1 m² quadrats, 2) computing relative camera locations and orientations by triangulation of key features present in multiple images and reconstructing a dense 3D point cloud, and 3) extracting pneumatophore locations and diameters from the point cloud data. Step 3) was accomplished by a new 'sector-slice' algorithm, yielding geometric parameters every 5 mm along a vertical profile. Photogrammetric analysis was compared with manual caliper measurements. In all 5 quadrats considered, agreement was found between manual and photogrammetric estimates of stem number, and of number × mean diameter, which is a key parameter appearing in hydrodynamic models. In two quadrats, pneumatophores were encrusted with numerous barnacles, generating a complex geometry not resolved by hand measurements. In remaining cases, moderate agreement between manual and photogrammetric estimates of stem diameter and solid volume fraction was found. By substantially reducing measurement time in the field while capturing in greater detail the 3D structure, photogrammetry has potential to improve input to hydrodynamic models, particularly for simulations of flow through large-scale, heterogeneous canopies.

  5. Parameter estimation in stochastic mammogram model by heuristic optimization techniques.

    NARCIS (Netherlands)

    Selvan, S.E.; Xavier, C.C.; Karssemeijer, N.; Sequeira, J.; Cherian, R.A.; Dhala, B.Y.

    2006-01-01

    The appearance of disproportionately large amounts of high-density breast parenchyma in mammograms has been found to be a strong indicator of the risk of developing breast cancer. Hence, the breast density model is popular for risk estimation or for monitoring breast density change in prevention or

  6. EVALUATING SOIL EROSION PARAMETER ESTIMATES FROM DIFFERENT DATA SOURCES

    Science.gov (United States)

    Topographic factors and soil loss estimates that were derived from three data sources (STATSGO, 30-m DEM, and 3-arc second DEM) were compared. Slope magnitudes derived from the three data sources were consistently different. Slopes from the DEMs tended to provide a flattened sur...

  7. Online Parameter Estimation for a Centrifugal Decanter System

    DEFF Research Database (Denmark)

    Larsen, Jesper Abildgaard; Alstrøm, Preben

    2014-01-01

    In many processing plants decanter systems are used for the separation of heterogeneous mixtures, and even though they account for a large fraction of the energy consumption, most decanters just run at a fixed setpoint. Here, multi-model estimation is applied to a waste water treatment plant, and it...

  8. Estimates of selection parameters in protein mutants of spring barley

    International Nuclear Information System (INIS)

    Gaul, H.; Walther, H.; Seibold, K.H.; Brunner, H.; Mikaelsen, K.

    1976-01-01

    Detailed studies have been made with induced protein mutants regarding a possible genetic advance in selection, including the estimation of the genetic variation and heritability coefficients. Estimates were obtained for protein content and protein yield. The variation of mutant lines in different environments was found to be many times as large as the variation of the line means. The detection of improved protein mutants therefore seems possible only in trials with more than one environment. The heritability of protein content and protein yield was estimated in different sets of environments and was found to be low. However, higher values were found with an increasing number of environments. At least four environments seem to be necessary to obtain reliable heritability estimates. The genetic component of the variation between lines was significant for protein content in all environmental combinations. For protein yield, only some environmental combinations showed significant differences. The expected genetic advance with one selection step was small for both protein traits. Genetically significant differences between protein micromutants give, however, a first indication that selection among protein mutants with small differences also seems possible. (author)

  9. On Structure, Family and Parameter Estimation of Hierarchical Archimedean Copulas

    Czech Academy of Sciences Publication Activity Database

    Górecki, J.; Hofert, M.; Holeňa, Martin

    2017-01-01

    Roč. 87, č. 17 (2017), s. 3261-3324 ISSN 0094-9655 R&D Projects: GA ČR GA17-01251S Institutional support: RVO:67985807 Keywords : copula estimation * goodness-of-fit * Hierarchical Archimedean copula * structure determination Subject RIV: IN - Informatics, Computer Science OBOR OECD: Statistics and probability Impact factor: 0.757, year: 2016

  10. Estimation of reservoir parameter using a hybrid neural network

    Energy Technology Data Exchange (ETDEWEB)

    Aminzadeh, F. [FACT, Suite 201-225, 1401 S.W. FWY Sugarland, TX (United States); Barhen, J.; Glover, C.W. [Center for Engineering Systems Advanced Research, Oak Ridge National Laboratory, Oak Ridge, TN (United States); Toomarian, N.B. [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA (United States)

    1999-11-01

    Estimation of an oil field's reservoir properties using seismic data is a crucial issue. The accuracy of those estimates and the associated uncertainty are also important information. This paper demonstrates the use of the k-fold cross validation technique to obtain confidence bounds on an Artificial Neural Network's (ANN) accuracy statistic from a finite sample set. In addition, we also show that an ANN's classification accuracy is dramatically improved by transforming the ANN's input feature space to a dimensionally smaller, new input space. The new input space represents a feature space that maximizes the linear separation between classes. Thus, the ANN's convergence time and accuracy are improved because the ANN must merely find nonlinear perturbations to the starting linear decision boundaries. These techniques for estimating ANN accuracy bounds and feature space transformations are demonstrated on the problem of estimating the sand thickness in an oil field reservoir based only on remotely sensed seismic data.

  11. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficul...

  12. Parameters estimation for X-ray sources: positions

    International Nuclear Information System (INIS)

    Avni, Y.

    1977-01-01

    It is shown that the sizes of the positional error boxes for x-ray sources can be determined by using an estimation method which we have previously formulated generally and applied in spectral analyses. It is explained how this method can be used by scanning x-ray telescopes, by rotating modulation collimators, and by HEAO-A (author)

  13. Parameter estimation of electricity spot models from futures prices

    NARCIS (Netherlands)

    Aihara, ShinIchi; Bagchi, Arunabha; Imreizeeq, E.S.N.; Walter, E.

    We consider a slight perturbation of the Schwartz-Smith model for the electricity futures prices and the resulting modified spot model. Using the martingale property of the modified price under the risk neutral measure, we derive the arbitrage free model for the spot and futures prices. We estimate

  14. Estimation of fracture parameters using elastic full-waveform inversion

    KAUST Repository

    Zhang, Zhendong; Alkhalifah, Tariq Ali; Oh, Juwon; Tsvankin, Ilya

    2017-01-01

    A regularization term is added to the objective function to improve the estimation of the fracture azimuth, which is otherwise poorly constrained. The cracks are assumed to be penny-shaped to reduce the nonuniqueness in the inverted fracture weaknesses and achieve

  15. Estimating parameters of speciation models based on refined summaries of the joint site-frequency spectrum.

    Directory of Open Access Journals (Sweden)

    Aurélien Tellier

    Full Text Available Understanding the processes and conditions under which populations diverge to give rise to distinct species is a central question in evolutionary biology. Since recently diverged populations have high levels of shared polymorphisms, it is challenging to distinguish between recent divergence with no (or very low) inter-population gene flow and older splitting events with subsequent gene flow. Recently published methods to infer speciation parameters under the isolation-migration framework are based on summarizing polymorphism data at multiple loci in two species using the joint site-frequency spectrum (JSFS). We have developed two improvements of these methods based on a more extensive use of the JSFS classes of polymorphisms for species with high intra-locus recombination rates. First, using a likelihood based method, we demonstrate that taking into account low-frequency polymorphisms shared between species significantly improves the joint estimation of the divergence time and gene flow between species. Second, we introduce a local linear regression algorithm that considerably reduces the computational time and allows for the estimation of unequal rates of gene flow between species. We also investigate which summary statistics from the JSFS allow the greatest estimation accuracy for divergence time and migration rates for low (around 10) and high (around 100) numbers of loci. Focusing on cases with low numbers of loci and high intra-locus recombination rates we show that our methods for the estimation of divergence time and migration rates are more precise than existing approaches.

  16. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    Science.gov (United States)

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415

  17. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    Science.gov (United States)

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.

  18. Lithium-Ion Battery Online Rapid State-of-Power Estimation under Multiple Constraints

    Directory of Open Access Journals (Sweden)

    Shun Xiang

    2018-01-01

    Full Text Available The paper aims to realize a rapid online estimation of the state-of-power (SOP) with multiple constraints of a lithium-ion battery. Firstly, based on the improved first-order resistance-capacitance (RC) model with one-state hysteresis, a linear state-space battery model is built; then, using the dual extended Kalman filtering (DEKF) method, the battery parameters and states, including open-circuit voltage (OCV), are estimated. Secondly, by employing the estimated OCV as the observed value to build the second dual Kalman filters, the battery SOC is estimated. Thirdly, a novel rapid-calculating peak power/SOP method with multiple constraints is proposed in which, according to the bisection judgment method, the battery's peak state is determined; then, one or two instantaneous peak powers are used to determine the peak power during T seconds. In addition, in the battery operating process, the actual constraint that the battery is under is analyzed specifically. Finally, three simplified versions of the Federal Urban Driving Schedule (SFUDS) with inserted pulse experiments are conducted to verify the effectiveness and accuracy of the proposed online SOP estimation method.
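
    As a rough, hedged illustration of the bisection idea for a T-second peak power under multiple constraints (not the paper's DEKF-based implementation), the sketch below searches for the largest constant discharge current that keeps a simple OCV-plus-resistance battery model above its voltage and SOC limits; all parameter values and the OCV curve are invented placeholders.

```python
# Hedged sketch: T-second peak discharge current/power under voltage and SOC
# constraints for a simple OCV + internal-resistance model.  All numbers are
# invented placeholders, not parameters identified in the paper.
import numpy as np

Q_AH, R_OHM, V_MIN, SOC_MIN, T = 40.0, 0.002, 3.0, 0.1, 10.0
ocv = lambda soc: 3.2 + 0.9 * soc              # invented OCV(SOC) curve

def feasible(i_dis, soc0):
    """Does a constant discharge current i_dis over T seconds respect all limits?"""
    soc_end = soc0 - i_dis * T / (3600.0 * Q_AH)
    v_end = ocv(soc_end) - i_dis * R_OHM       # terminal voltage at end of pulse
    return soc_end >= SOC_MIN and v_end >= V_MIN

def peak_discharge_power(soc0, i_hi=2000.0, tol=1e-3):
    lo, hi = 0.0, i_hi
    while hi - lo > tol:                       # bisection on the peak current
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid, soc0) else (lo, mid)
    v = ocv(soc0 - lo * T / (3600.0 * Q_AH)) - lo * R_OHM
    return lo * v                              # T-second peak power in W

print(peak_discharge_power(soc0=0.8))
```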

  19. Asymptotic analysis of the role of spatial sampling for covariance parameter estimation of Gaussian processes

    International Nuclear Information System (INIS)

    Bachoc, Francois

    2014-01-01

    Covariance parameter estimation of Gaussian processes is analyzed in an asymptotic framework. The spatial sampling is a randomly perturbed regular grid and its deviation from the perfect regular grid is controlled by a single scalar regularity parameter. Consistency and asymptotic normality are proved for the Maximum Likelihood and Cross Validation estimators of the covariance parameters. The asymptotic covariance matrices of the covariance parameter estimators are deterministic functions of the regularity parameter. By means of an exhaustive study of the asymptotic covariance matrices, it is shown that the estimation is improved when the regular grid is strongly perturbed. Hence, an asymptotic confirmation is given to the commonly admitted fact that using groups of observation points with small spacing is beneficial to covariance function estimation. Finally, the prediction error, using a consistent estimator of the covariance parameters, is analyzed in detail. (authors)
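
    As a toy companion to this analysis (the estimation itself, not the asymptotics), the sketch below computes the Maximum Likelihood estimate of the variance and range of an exponential covariance from observations on a randomly perturbed one-dimensional grid; the sampling design and true parameter values are invented.

```python
# Toy sketch: Maximum Likelihood estimation of exponential-covariance
# parameters (variance s2, range ell) from data on a perturbed 1-D grid.
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.arange(100) + 0.4 * rng.uniform(-1, 1, 100)     # randomly perturbed grid
d = np.abs(x[:, None] - x[None, :])
cov = lambda s2, ell: s2 * np.exp(-d / ell)
y = rng.multivariate_normal(np.zeros(100), cov(2.0, 5.0))   # truth: (2.0, 5.0)

def neg_log_lik(theta):
    s2, ell = np.exp(theta)                    # log scale keeps parameters positive
    C = cov(s2, ell) + 1e-10 * np.eye(len(x))
    L, low = cho_factor(C, lower=True)
    return 0.5 * y @ cho_solve((L, low), y) + np.sum(np.log(np.diag(L)))

res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("ML estimates (s2, ell):", np.exp(res.x))
```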

  20. Estimation of Soil Moisture Under Vegetation Cover at Multiple Frequencies

    Science.gov (United States)

    Jadghuber, Thomas; Hajnsek, Irena; Weiß, Thomas; Papathanassiou, Konstantinos P.

    2015-04-01

    Soil moisture under vegetation cover was estimated by a polarimetric, iterative, generalized, hybrid decomposition and inversion approach at multiple frequencies (X-, C- and L-band). Therefore the algorithm, originally designed for longer wavelength (L-band), was adapted to deal with the short wavelength scattering scenarios of X- and C-band. The Integral Equation Method (IEM) was incorporated together with a pedo-transfer function of Dobson et al. to account for the peculiarities of short wavelength scattering at X- and C-band. DLR's F-SAR system acquired fully polarimetric SAR data in X-, C- and L-band over the Wallerfing test site in Lower Bavaria, Germany in 2014. Simultaneously, soil and vegetation measurements were conducted on different agricultural test fields. The results indicate a spatially continuous inversion of soil moisture in all three frequencies (inversion rates >92%), mainly due to the careful adaption of the vegetation volume removal including a physical constraining of the decomposition algorithm. However, for X- and C-band the inversion results reveal moisture pattern inconsistencies and in some cases an incorrectly high inversion of soil moisture at X-band. The validation with in situ measurements states a stable performance of 2.1- 7.6vol.% at L-band for the entire growing period. At C- and X-band a reliable performance of 3.7-13.4vol.% in RMSE can only be achieved after distinct filtering (X- band) leading to a loss of almost 60% in spatial inversion rate. Hence, a robust inversion for soil moisture estimation under vegetation cover can only be conducted at L-band due to a constant availability of the soil signal in contrast to higher frequencies (X- and C-band).

  1. Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix

    International Nuclear Information System (INIS)

    Yamamoto, A.; Yasue, Y.; Endo, T.; Kodama, Y.; Ohoka, Y.; Tatsumi, M.

    2012-01-01

    An uncertainty estimation method for core safety parameters, for which measurement values are not obtained, is proposed. We empirically recognize correlations among the prediction errors of core safety parameters, e.g., a correlation between the control rod worth and the assembly relative power at the corresponding position. Correlations of uncertainties among core safety parameters are theoretically estimated using the covariance of cross sections and the sensitivity coefficients of the core parameters. The estimated correlations among core safety parameters are verified through the direct Monte-Carlo sampling method. Once the correlation of uncertainties among core safety parameters is known, we can estimate the uncertainty of a safety parameter for which no measurement value is obtained. Furthermore, the correlations can also be used for the reduction of uncertainties of core safety parameters. (authors)
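
    The propagation step described here can be written as the standard 'sandwich' rule: if S collects the sensitivity coefficients of the core parameters to the cross sections and Sigma is the cross-section covariance matrix, the covariance of the core parameters is S Sigma S^T, from which their correlations follow. A minimal numeric sketch with invented numbers:

```python
# Sandwich-rule sketch: covariance and correlation of core safety parameters
# from a cross-section covariance matrix and sensitivity coefficients.
# All numbers are invented for illustration only.
import numpy as np

cov_xs = np.array([[4.0e-4, 1.0e-4],        # cross-section covariance matrix
                   [1.0e-4, 9.0e-4]])
S = np.array([[0.8, 0.3],                   # d(rod worth)/d(cross sections)
              [0.5, 0.6]])                  # d(assembly power)/d(cross sections)

cov_params = S @ cov_xs @ S.T               # covariance of the core parameters
std = np.sqrt(np.diag(cov_params))
corr = cov_params / np.outer(std, std)      # correlation of their uncertainties
print("relative std devs:", std)
print("correlation matrix:\n", corr)
```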

  2. Nonparametric Estimation of Regression Parameters in Measurement Error Models

    Czech Academy of Sciences Publication Activity Database

    Ehsanes Saleh, A.K.M.D.; Picek, J.; Kalina, Jan

    2009-01-01

    Roč. 67, č. 2 (2009), s. 177-200 ISSN 0026-1424 Grant - others:GA AV ČR(CZ) IAA101120801; GA MŠk(CZ) LC06024 Institutional research plan: CEZ:AV0Z10300504 Keywords: asymptotic relative efficiency (ARE) * asymptotic theory * emaculate mode * ME model * R-estimation * Reliability ratio (RR) Subject RIV: BB - Applied Statistics, Operational Research

  3. Measurement-Based Transmission Line Parameter Estimation with Adaptive Data Selection Scheme

    DEFF Research Database (Denmark)

    Li, Changgang; Zhang, Yaping; Zhang, Hengxu

    2017-01-01

    Accurate parameters of transmission lines are critical for power system operation and control decision making. Transmission line parameter estimation based on measured data is an effective way to enhance the validity of the parameters. This paper proposes a multi-point transmission line parameter...

  4. Marker-based estimation of genetic parameters in genomics.

    Directory of Open Access Journals (Sweden)

    Zhiqiu Hu

    Full Text Available Linear mixed model (LMM) analysis has been recently used extensively for estimating additive genetic variances and narrow-sense heritability in many genomic studies. While the LMM analysis is computationally less intensive than the Bayesian algorithms, it remains infeasible for large-scale genomic data sets. In this paper, we advocate the use of a statistical procedure known as symmetric differences squared (SDS) as it may serve as a viable alternative when the LMM methods have difficulty or fail to work with large datasets. The SDS procedure is a general and computationally simple method based only on the least squares regression analysis. We carry out computer simulations and empirical analyses to compare the SDS procedure with two commonly used LMM-based procedures. Our results show that the SDS method is not as good as the LMM methods for small data sets, but it becomes progressively better and can match well with the precision of estimation by the LMM methods for data sets with large sample sizes. Its major advantage is that with larger and larger samples, it continues to work with the increasing precision of estimation while the commonly used LMM methods are no longer able to work under our current typical computing capacity. Thus, these results suggest that the SDS method can serve as a viable alternative particularly when analyzing 'big' genomic data sets.

  5. Reservoir parameter estimation using a hybrid neural network

    Energy Technology Data Exchange (ETDEWEB)

    Aminzadeh, F. [DGB USA and FACT Inc., Sugarland, TX (United States); Barhen, J.; Glover, C.W. [Oak Ridge National Laboratory (United States). Center for Engineering Systems Advanced Resesarch; Toomarian, N.B. [California Institute of Technology (United States). Jet Propulsion Laboratory

    2000-10-01

    The accuracy of an artificial neural network (ANN) algorithm is a crucial issue in the estimation of an oil field's reservoir properties from the log and seismic data. This paper demonstrates the use of the k-fold cross validation technique to obtain confidence bounds on an ANN's accuracy statistic from a finite sample set. In addition, we also show that an ANN's classification accuracy is dramatically improved by transforming the ANN's input feature space to a dimensionally smaller new input space. The new input space represents a feature space that maximizes the linear separation between classes. Thus, the ANN's convergence time and accuracy are improved because the ANN must merely find nonlinear perturbations to the starting linear decision boundaries. These techniques for estimating ANN accuracy bounds and feature space transformations are demonstrated on the problem of estimating the sand thickness in an oil field reservoir based only on remotely sensed seismic data. (author)
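
    A generic sketch of the two ingredients mentioned, k-fold cross validation to put a confidence bound on classification accuracy and a linear class-separating projection of the input features, is shown below using scikit-learn on synthetic data; the data, network size and fold count are placeholders, not the seismic attributes or ANN of the paper.

```python
# Generic sketch: 10-fold CV accuracy bound and an LDA feature transform ahead
# of a small neural-network classifier (synthetic data, not seismic attributes).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=600, n_features=20, n_informative=6,
                           n_classes=3, random_state=0)

model = make_pipeline(
    LinearDiscriminantAnalysis(n_components=2),   # maximize linear class separation
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
scores = cross_val_score(model, X, y, cv=10)      # k-fold cross validation
half_width = 1.96 * scores.std(ddof=1) / np.sqrt(len(scores))
print(f"accuracy = {scores.mean():.3f} +/- {half_width:.3f} (approx. 95% bound)")
```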

  6. Parameter estimation in a simple stochastic differential equation for phytoplankton modelling

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Madsen, Henrik; Carstensen, Jacob

    2011-01-01

    The use of stochastic differential equations (SDEs) for simulation of aquatic ecosystems has attracted increasing attention in recent years. The SDE setting also provides the opportunity for statistical estimation of ecosystem parameters. We present an estimation procedure, based on Kalman...

  7. Time-course window estimator for ordinary differential equations linear in the parameters

    NARCIS (Netherlands)

    Vujacic, Ivan; Dattner, Itai; Gonzalez, Javier; Wit, Ernst

    In many applications obtaining ordinary differential equation descriptions of dynamic processes is scientifically important. In both, Bayesian and likelihood approaches for estimating parameters of ordinary differential equations, the speed and the convergence of the estimation procedure may

  8. A new unbiased stochastic derivative estimator for discontinuous sample performances with structural parameters

    NARCIS (Netherlands)

    Peng, Yijie; Fu, Michael C.; Hu, Jian Qiang; Heidergott, Bernd

    In this paper, we propose a new unbiased stochastic derivative estimator in a framework that can handle discontinuous sample performances with structural parameters. This work extends the three most popular unbiased stochastic derivative estimators: (1) infinitesimal perturbation analysis (IPA), (2)

  9. ADAPTIVE PARAMETER ESTIMATION OF PERSON RECOGNITION MODEL IN A STOCHASTIC HUMAN TRACKING PROCESS

    OpenAIRE

    W. Nakanishi; T. Fuse; T. Ishikawa

    2015-01-01

    This paper aims at the estimation of the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations these parameters may change according to the observation conditions and the difficulty of predicting human positions. Thus in this paper we formulate an adaptive parameter estimation ...

  10. On the estimation of water pure compound parameters in association theories

    DEFF Research Database (Denmark)

    Grenner, Andreas; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2007-01-01

    Determination of the appropriate number of association sites and estimation of parameters for association (SAFT-type) theories is not a trivial matter. Building further on a recently published manuscript by Clark et al., this work investigates aspects of the parameter estimation for water using two different association theories. Their performance for various properties as well as against the results presented earlier is demonstrated.

  11. Using multiplicity as a fractional cross-section estimation for centrality in PHOBOS

    Science.gov (United States)

    Hollis, Richard S.; Back, B. B.; Baker, M. D.; Ballintijn, M.; Barton, D. S.; Betts, R. R.; Bickley, A. A.; Bindel, R.; Busza, W.; Carroll, A.; Chai, Z.; Decowski, M. P.; García, E.; Gburek, T.; George, N.; Gulbrandsen, K.; Halliwell, C.; Hamblen, J.; Hauer, M.; Henderson, C.; Hofman, D. J.; Hollis, R. S.; Holylnski, R.; Holzman, B.; Iordanova, A.; Johnson, E.; Kane, J. L.; Khan, N.; Kulinich, P.; Kuo, C. M.; Lin, W. T.; Manly, S.; Mignerey, A. C.; Nouicer, R.; Olszewski, A.; Pak, R.; Reed, C.; Roland, C.; Roland, G.; Sagerer, J.; Seals, H.; Sedykh, I.; Smith, C. E.; Stankiewicz, M. A.; Steinberg, P.; Stephans, G. S. F.; Sukhanov, A.; Tonjes, M. B.; Trzupek, A.; Vale, C.; van Nieuwenhuizen, G. J.; Vaurynovich, S. S.; Verdier, R.; Veres, G. I.; Wenger, E.; Wolfs, F. L. H.; Wosiek, B.; Wozniak, K.; Wyslouch, B.; PHOBOS Collaboration

    2005-01-01

    Collision centrality is a valuable parameter used in relativistic nuclear physics which relates to geometrical quantities such as the number of participating nucleons. PHOBOS utilizes a multiplicity measurement as a means to estimate the fractional cross-section of a collision event-by-event. From this, the centrality of the collision can be deduced. The details of the centrality determination depend on both the collision system and the collision energy. Presented here are the techniques developed over the course of the RHIC program that are used by PHOBOS to extract the centrality. Possible biases that have to be overcome before a final measurement can be interpreted are discussed.
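
    The central step, cutting the measured multiplicity distribution into fractional cross-section (centrality) bins, can be sketched with simple quantile cuts; the multiplicity sample below is synthetic and ignores the trigger-efficiency and vertexing corrections that a real analysis must apply.

```python
# Sketch: centrality classes as fractional cross-section bins of the
# event-by-event multiplicity (synthetic events, no efficiency corrections).
import numpy as np

rng = np.random.default_rng(2)
multiplicity = rng.gamma(shape=2.0, scale=150.0, size=100_000)   # toy events

# 0-10% = most central (highest multiplicity), ..., 90-100% = most peripheral
edges = np.quantile(multiplicity, np.linspace(1.0, 0.0, 11))
for i, (hi, lo) in enumerate(zip(edges[:-1], edges[1:])):
    print(f"{10*i:3d}-{10*(i+1):3d}%  multiplicity in [{lo:7.1f}, {hi:7.1f}]")

# Classify one event by the fraction of the (toy) cross-section above it
m_event = 650.0
print("centrality ~", round(100 * np.mean(multiplicity >= m_event), 1), "%")
```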

  12. Estimation of atomic interaction parameters by photon counting

    DEFF Research Database (Denmark)

    Kiilerich, Alexander Holm; Mølmer, Klaus

    2014-01-01

    Detection of radiation signals is at the heart of precision metrology and sensing. In this article we show how the fluctuations in photon counting signals can be exploited to optimally extract information about the physical parameters that govern the dynamics of the emitter. For a simple two-level emitter subject to photon counting, we show that the Fisher information and the Cramér-Rao sensitivity bound based on the full detection record can be evaluated from the waiting time distribution in the fluorescence signal which can, in turn, be calculated for both perfect and imperfect detectors...

  13. Parameter estimation via conditional expectation: a Bayesian inversion

    KAUST Repository

    Matthies, Hermann G.; Zander, Elmar; Rosić, Bojana V.; Litvinenko, Alexander

    2016-01-01

    When a mathematical or computational model is used to analyse some system, it is usual that some parameters resp. functions or fields in the model are not known, and hence uncertain. These parametric quantities are then identified by actual observations of the response of the real system. In a probabilistic setting, Bayes’s theory is the proper mathematical background for this identification process. The possibility of being able to compute a conditional expectation turns out to be crucial for this purpose. We show how this theoretical background can be used in an actual numerical procedure, and shortly discuss various numerical approximations.

  14. Parameter estimation via conditional expectation: a Bayesian inversion

    KAUST Repository

    Matthies, Hermann G.

    2016-08-11

    When a mathematical or computational model is used to analyse some system, it is usual that some parameters resp. functions or fields in the model are not known, and hence uncertain. These parametric quantities are then identified by actual observations of the response of the real system. In a probabilistic setting, Bayes’s theory is the proper mathematical background for this identification process. The possibility of being able to compute a conditional expectation turns out to be crucial for this purpose. We show how this theoretical background can be used in an actual numerical procedure, and shortly discuss various numerical approximations.

  15. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    Full Text Available This paper is concerned with modifications of the maximum likelihood, moment and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is examined by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moment and percentile estimators with respect to bias, mean square error and total deviation.
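
    For reference, the unmodified maximum likelihood estimators of the two-parameter power function distribution f(x) = c x^(c-1) / theta^c on (0, theta] are theta_hat = max(x_i) and c_hat = n / sum ln(theta_hat / x_i). The sketch below checks their sampling behaviour by Monte Carlo simulation; the paper's modified estimators are not reproduced.

```python
# Monte Carlo check of the plain MLEs for the two-parameter power function
# distribution f(x) = c * x**(c - 1) / theta**c on (0, theta].
# The modified estimators studied in the paper are not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
c_true, theta_true, n, reps = 2.5, 4.0, 50, 2000

c_hat, theta_hat = np.empty(reps), np.empty(reps)
for r in range(reps):
    # Inverse CDF: if U ~ Uniform(0,1) then theta * U**(1/c) has this law
    x = theta_true * rng.uniform(size=n) ** (1.0 / c_true)
    theta_hat[r] = x.max()                            # MLE of theta
    c_hat[r] = n / np.sum(np.log(theta_hat[r] / x))   # MLE of c

print("bias(theta_hat) =", theta_hat.mean() - theta_true)
print("bias(c_hat)     =", c_hat.mean() - c_true)
print("MSE(c_hat)      =", np.mean((c_hat - c_true) ** 2))
```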

  16. Complex mode indication function and its applications to spatial domain parameter estimation

    Science.gov (United States)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    This paper introduces the concept of the Complex Mode Indication Function (CMIF) and its application in spatial domain parameter estimation. The concept of CMIF is developed by performing singular value decomposition (SVD) of the Frequency Response Function (FRF) matrix at each spectral line. The CMIF is defined as the eigenvalues, which are the square of the singular values, solved from the normal matrix formed from the FRF matrix, [ H( jω)] H[ H( jω)], at each spectral line. The CMIF appears to be a simple and efficient method for identifying the modes of the complex system. The CMIF identifies modes by showing the physical magnitude of each mode and the damped natural frequency for each root. Since multiple reference data is applied in CMIF, repeated roots can be detected. The CMIF also gives global modal parameters, such as damped natural frequencies, mode shapes and modal participation vectors. Since CMIF works in the spatial domain, uneven frequency spacing data such as data from spatial sine testing can be used. A second-stage procedure for accurate damped natural frequency and damping estimation as well as mode shape scaling is also discussed in this paper.
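
    Numerically, the CMIF at each spectral line is just the set of squared singular values of the FRF matrix there. A minimal sketch, with a random complex array standing in for a measured multi-reference FRF matrix:

```python
# Minimal CMIF sketch: squared singular values of the FRF matrix [H(jw)] at
# every spectral line.  Random complex data stand in for measured FRFs.
import numpy as np

n_freq, n_out, n_ref = 512, 8, 3           # spectral lines, outputs, references
rng = np.random.default_rng(4)
H = (rng.normal(size=(n_freq, n_out, n_ref))
     + 1j * rng.normal(size=(n_freq, n_out, n_ref)))

# Eigenvalues of [H]^H [H] = squares of the singular values of H(jw)
cmif = np.linalg.svd(H, compute_uv=False) ** 2     # shape (n_freq, n_ref)

# Peaks of the largest CMIF curve indicate damped natural frequencies; peaks
# appearing in the lower curves flag repeated or closely spaced modes.
print(cmif.shape, cmif[0])
```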

  17. Parameters influencing deposit estimation when using water sensitive papers

    Directory of Open Access Journals (Sweden)

    Emanuele Cerruto

    2013-10-01

    Full Text Available The aim of the study was to assess the possibility of using water sensitive papers (WSP) to estimate the amount of deposit on the target when varying the spray characteristics. To identify the main quantities influencing the deposit, some simplifying hypotheses were applied to simulate WSP behaviour: log-normal distribution of the diameters of the drops and circular stains randomly placed on the images. A very large number (4704) of images of WSPs were produced by means of simulation. The images were obtained by simulating drops of different arithmetic mean diameter (40-300 μm), different coefficient of variation (0.1-1.5), and different percentage of covered surface (2-100%), not considering overlaps. These images were considered to be effective WSP images and then analysed using image processing software in order to measure the percentage of covered surface, the number of particles, and the area of each particle; the deposit was then calculated. These data were correlated with those used to produce the images, varying the spray characteristics. As far as the drop populations are concerned, a classification based on the volume median diameter only should be avoided, especially in case of high variability. This, in fact, results in classifying sprays with very low arithmetic mean diameter as extremely or ultra coarse. The WSP image analysis shows that the relation between simulated and computed percentage of covered surface is independent of the type of spray, whereas impact density and unitary deposit can be estimated from the computed percentage of covered surface only if the spray characteristics (arithmetic mean and coefficient of variation of the drop diameters) are known. These data can be estimated by analysing the particles on the WSP images. The results of a validation test show good agreement between simulated and computed deposits, testified by a high (0.93) coefficient of determination.
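
    A stripped-down version of the simulation described above, log-normally distributed drop diameters and circular stains placed at random on a raster, with the covered fraction measured on the binary image, might look like the following; the paper size, resolution and spread factor are invented placeholders.

```python
# Stripped-down WSP simulation: random circular stains with log-normal drop
# diameters on a raster; covered fraction measured on the binary image.
# Paper size, resolution and spread factor are invented placeholders.
import numpy as np

rng = np.random.default_rng(5)
px_per_mm, width_mm, height_mm = 20, 76, 26
ny, nx = height_mm * px_per_mm, width_mm * px_per_mm

mean_um, cv, n_drops, spread = 200.0, 0.6, 3000, 2.0
sigma = np.sqrt(np.log(1 + cv ** 2))           # log-normal parameters from mean and CV
mu = np.log(mean_um) - 0.5 * sigma ** 2
radius_px = 0.5 * spread * rng.lognormal(mu, sigma, n_drops) * 1e-3 * px_per_mm

image = np.zeros((ny, nx), dtype=bool)
cx, cy = rng.uniform(0, nx, n_drops), rng.uniform(0, ny, n_drops)
for x0, y0, r in zip(cx, cy, radius_px):       # stamp one circular stain at a time
    x_lo, x_hi = int(max(x0 - r, 0)), int(min(x0 + r + 1, nx))
    y_lo, y_hi = int(max(y0 - r, 0)), int(min(y0 + r + 1, ny))
    wy, wx = np.mgrid[y_lo:y_hi, x_lo:x_hi]
    image[y_lo:y_hi, x_lo:x_hi] |= (wx - x0) ** 2 + (wy - y0) ** 2 <= r ** 2

print("covered surface: %.1f%%" % (100 * image.mean()))
```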

  18. Estimation of atomic interaction parameters by quantum measurements

    DEFF Research Database (Denmark)

    Kiilerich, Alexander Holm; Mølmer, Klaus

    Quantum systems, ranging from atomic systems to field modes and mechanical devices are useful precision probes for a variety of physical properties and phenomena. Measurements by which we extract information about the evolution of single quantum systems yield random results and cause a back actio...... strategies, we address the Fisher information and the Cramér-Rao sensitivity bound. We investigate monitoring by photon counting, homodyne detection and frequent projective measurements respectively, and exemplify by Rabi frequency estimation in a driven two-level system....

  19. Estimation of common cause failure parameters for diesel generators

    International Nuclear Information System (INIS)

    Tirira, J.; Lanore, J.M.

    2002-10-01

    This paper presents a summary of some results concerning the feedback analysis of French emergency diesel generators (EDG). The database of common cause failures for EDG has been updated; the data collected cover a period of 10 years. Several tens of latent common cause failure (CCF) events were identified. Most of the events collected are potential CCFs; of the events identified, 15% are characterized as complete CCFs. The database is organised following the structure proposed by the 'International Common Cause Data Exchange' (ICDE) project. The events collected are analyzed by failure mode and degree of failure. Qualitative analyses of root causes, coupling factors and corrective actions are presented. A quantitative analysis is in progress for evaluating CCF parameters taking into account the average impact vector and the rate of independent failures. The interest of the average impact vector approach is that it makes it possible to take into account a wide experience feedback, not limited to complete CCFs but also including many events related to partial or potential CCFs. It has to be noted that no finalized quantitative conclusions have yet been drawn, and analysis is in progress for evaluating diesel CCF parameters. In fact, the numerical CCF coding of the events involves a degree of subjective analysis, which requires a complete and detailed examination of each event. (authors)

  20. Catalytic hydrolysis of ammonia borane: Intrinsic parameter estimation and validation

    Energy Technology Data Exchange (ETDEWEB)

    Basu, S.; Gore, J.P. [School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907-2088 (United States); School of Chemical Engineering, Purdue University, West Lafayette, IN 47907-2100 (United States); Energy Center in Discovery Park, Purdue University, West Lafayette, IN 47907-2022 (United States); Zheng, Y. [School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907-2088 (United States); Energy Center in Discovery Park, Purdue University, West Lafayette, IN 47907-2022 (United States); Varma, A.; Delgass, W.N. [School of Chemical Engineering, Purdue University, West Lafayette, IN 47907-2100 (United States); Energy Center in Discovery Park, Purdue University, West Lafayette, IN 47907-2022 (United States)

    2010-04-02

    Ammonia borane (AB) hydrolysis is a potential process for on-board hydrogen generation. This paper presents isothermal hydrogen release rate measurements of dilute AB (1 wt%) hydrolysis in the presence of a carbon supported ruthenium catalyst (Ru/C). The ranges of investigated catalyst particle sizes and temperature were 20-181 μm and 26-56 °C, respectively. The obtained rate data included both kinetic and diffusion-controlled regimes, where the latter was evaluated using the catalyst effectiveness approach. A Langmuir-Hinshelwood kinetic model was adopted to interpret the data, with intrinsic kinetic and diffusion parameters determined by a nonlinear fitting algorithm. The AB hydrolysis was found to have an activation energy of 60.4 kJ mol^-1, a pre-exponential factor of 1.36 x 10^10 mol (kg-cat)^-1 s^-1, an adsorption energy of -32.5 kJ mol^-1, and an effective mass diffusion coefficient of 2 x 10^-10 m^2 s^-1. These parameters, obtained under dilute AB conditions, were validated by comparing measurements with simulations of AB consumption rates during the hydrolysis of concentrated AB solutions (5-20 wt%), and also with the axial temperature distribution in a 0.5 kW continuous-flow packed-bed reactor. (author)
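
    As a quick numerical illustration of what the fitted kinetic parameters imply, the sketch below evaluates the Arrhenius rate constant at the two ends of the investigated temperature range; the Langmuir-Hinshelwood adsorption term is omitted, so this is not the full rate expression used in the paper.

```python
# Arrhenius evaluation of the intrinsic rate constant reported above.  The
# Langmuir-Hinshelwood adsorption term is omitted, so this is only a partial
# illustration of the fitted kinetics.
import numpy as np

R = 8.314            # J mol^-1 K^-1
A = 1.36e10          # mol (kg-cat)^-1 s^-1, pre-exponential factor
Ea = 60.4e3          # J mol^-1, activation energy

def k(T_celsius):
    T = T_celsius + 273.15
    return A * np.exp(-Ea / (R * T))

for T in (26.0, 56.0):                       # ends of the investigated range
    print(f"k({T:.0f} C) = {k(T):.3g} mol (kg-cat)^-1 s^-1")
print("rate ratio 56 C / 26 C =", round(k(56.0) / k(26.0), 1))
```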

  1. Estimating Stellar Parameters and Interstellar Extinction from Evolutionary Tracks

    Directory of Open Access Journals (Sweden)

    Sichevsky S.

    2016-03-01

    Full Text Available Developing methods for analyzing and extracting information from modern sky surveys is a challenging task in astrophysical studies. We study possibilities of parameterizing stars and the interstellar medium from multicolor photometry performed in three modern photometric surveys: GALEX, SDSS, and 2MASS. For this purpose, we have developed a method to estimate stellar radius from effective temperature and gravity with the help of evolutionary tracks and model stellar atmospheres. In accordance with the evolution rate at every point of the evolutionary track, the star formation rate, and the initial mass function, a weight is assigned to the resulting value of radius that allows us to estimate the radius more accurately. The method is verified for the most populated areas of the Hertzsprung-Russell diagram: main-sequence stars and red giants, and it was found to be rather precise (for main-sequence stars, the average relative error of radius and its standard deviation are 0.03% and 3.87%, respectively).

  2. A practical approach to parameter estimation applied to model predicting heart rate regulation

    DEFF Research Database (Denmark)

    Olufsen, Mette; Ottesen, Johnny T.

    2013-01-01

    Mathematical models have long been used for prediction of dynamics in biological systems. Recently, several efforts have been made to render these models patient-specific. One way to do so is to employ techniques to estimate parameters that enable model-based prediction of observed quantities... Knowledge of variation in parameters within and between groups of subjects has the potential to provide insight into biological function. Often it is not possible to estimate all parameters in a given model, in particular if the model is complex and the data is sparse. However, it may be possible to estimate... a subset of model parameters, reducing the complexity of the problem. In this study, we compare three methods that allow identification of parameter subsets that can be estimated given a model and a set of data. These methods will be used to estimate patient-specific parameters in a model predicting...

  3. Estimations of parameters in Pareto reliability model in the presence of masked data

    International Nuclear Information System (INIS)

    Sarhan, Ammar M.

    2003-01-01

    Estimation of the parameters included in the individual distributions of the lifetimes of system components in a series system is considered in this paper based on masked system life test data. We consider a series system of two independent components, each of which has a Pareto distributed lifetime. The maximum likelihood and Bayes estimators for the parameters and for the values of the reliability of the system's components at a specific time are obtained. Symmetrical triangular prior distributions are assumed for the unknown parameters to be estimated in obtaining the Bayes estimators of these parameters. Large simulation studies are carried out in order to: (i) explain how one can utilize the theoretical results obtained; (ii) compare the maximum likelihood and Bayes estimates of the underlying parameters; and (iii) study the influence of the masking level and the sample size on the accuracy of the estimates obtained.

  4. Improved Differential Evolution Algorithm for Parameter Estimation to Improve the Production of Biochemical Pathway

    Directory of Open Access Journals (Sweden)

    Chuii Khim Chong

    2012-06-01

    Full Text Available This paper introduces an improved Differential Evolution algorithm (IDE) which aims at improving performance in estimating the relevant parameters of metabolic pathway data in order to simulate the glycolysis pathway of yeast. Metabolic pathway data are expected to be of significant help in the development of efficient tools in kinetic modeling and parameter estimation platforms. Many computational algorithms face obstacles due to noisy data and the difficulty of estimating a myriad of parameters, and require longer computational time to estimate the relevant parameters. The proposed algorithm (IDE) is a hybrid of a Differential Evolution algorithm (DE) and a Kalman Filter (KF). The outcome of IDE is proven to be superior to the Genetic Algorithm (GA) and DE. The experimental results of IDE show estimated optimal kinetic parameter values, shorter computation time and increased accuracy of the simulated results compared with other estimation algorithms.
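
    A generic, hedged sketch of differential-evolution parameter estimation for a small kinetic model is given below; it uses SciPy's stock differential evolution on a toy Michaelis-Menten system, not the paper's IDE hybrid or the yeast glycolysis pathway.

```python
# Generic DE parameter-estimation sketch on a toy Michaelis-Menten model.
# Uses SciPy's stock differential evolution, not the paper's IDE/Kalman hybrid.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def simulate(vmax, km, t_eval, s0=10.0):
    rhs = lambda t, s: -vmax * s / (km + s)            # substrate consumption
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [s0], t_eval=t_eval)
    return sol.y[0]

t_obs = np.linspace(0, 10, 21)
rng = np.random.default_rng(6)
s_obs = simulate(1.2, 2.5, t_obs) + rng.normal(0, 0.05, t_obs.size)  # noisy "data"

def sse(theta):                                        # sum of squared errors
    return np.sum((simulate(theta[0], theta[1], t_obs) - s_obs) ** 2)

result = differential_evolution(sse, bounds=[(0.1, 5.0), (0.1, 10.0)],
                                seed=0, maxiter=200)
print("estimated (vmax, km):", result.x)               # truth: (1.2, 2.5)
```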

  5. Data adaptive control parameter estimation for scaling laws

    Energy Technology Data Exchange (ETDEWEB)

    Dinklage, Andreas [Max-Planck-Institut fuer Plasmaphysik, Teilinstitut Greifswald, Wendelsteinstrasse 1, D-17491 Greifswald (Germany); Dose, Volker [Max-Planck- Institut fuer Plasmaphysik, Boltzmannstrasse 2, D-85748 Garching (Germany)

    2007-07-01

    Bayesian experimental design quantifies the utility of data expressed by the information gain. Data adaptive exploration determines the expected utility of a single new measurement using existing data and a data descriptive model. In other words, the method can be used for experimental planning. As an example of a multivariate linear case, we apply this method to the construction of scaling laws for fusion devices. In detail, the scaling of the stellarator W7-AS is examined for a subset of ι = 1/3 data. The impact of the existing data on the scaling exponents is presented. Furthermore, regions of high utility in control parameter space are identified which improve the accuracy of the scaling law. This approach is not restricted to the presented example only, but can also be extended to non-linear models.

  6. Improving Distribution Resiliency with Microgrids and State and Parameter Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Tuffner, Francis K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Williams, Tess L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Schneider, Kevin P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elizondo, Marcelo A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sun, Yannan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Chen-Ching [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Yin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gourisetti, Sri Nikhil Gup [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-09-30

    Modern society relies on low-cost reliable electrical power, both to maintain industry, as well as provide basic social services to the populace. When major disturbances occur, such as Hurricane Katrina or Hurricane Sandy, the nation’s electrical infrastructure can experience significant outages. To help prevent the spread of these outages, as well as facilitating faster restoration after an outage, various aspects of improving the resiliency of the power system are needed. Two such approaches are breaking the system into smaller microgrid sections, and to have improved insight into the operations to detect failures or mis-operations before they become critical. Breaking the system into smaller sections of microgrid islands, power can be maintained in smaller areas where distribution generation and energy storage resources are still available, but bulk power generation is no longer connected. Additionally, microgrid systems can maintain service to local pockets of customers when there has been extensive damage to the local distribution system. However, microgrids are grid connected a majority of the time and implementing and operating a microgrid is much different than when islanded. This report discusses work conducted by the Pacific Northwest National Laboratory that developed improvements for simulation tools to capture the characteristics of microgrids and how they can be used to develop new operational strategies. These operational strategies reduce the cost of microgrid operation and increase the reliability and resilience of the nation’s electricity infrastructure. In addition to the ability to break the system into microgrids, improved observability into the state of the distribution grid can make the power system more resilient. State estimation on the transmission system already provides great insight into grid operations and detecting abnormal conditions by leveraging existing measurements. These transmission-level approaches are expanded to using

  7. Simultaneous Parameters Identifiability and Estimation of an E. coli Metabolic Network Model

    Directory of Open Access Journals (Sweden)

    Kese Pontes Freitas Alberton

    2015-01-01

    Full Text Available This work proposes a procedure for simultaneous parameter identifiability and estimation in metabolic networks in order to overcome difficulties associated with the lack of experimental data and the large number of parameters, a common scenario in the modeling of such systems. As a case study, the complex real problem of parameter identifiability of the Escherichia coli K-12 W3110 dynamic model was investigated, composed of 18 ordinary differential equations and 35 kinetic rate expressions, containing 125 parameters. With the procedure, the model fit was improved for most of the measured metabolites, with 58 parameters estimated, including 5 unknown initial conditions. The results indicate that the simultaneous parameter identifiability and estimation approach in metabolic networks is appealing, since a good model fit to most of the measured metabolites was possible even when important measurements of intracellular metabolites and good initial estimates of the parameters are not available.

  8. Probabilistic images (PBIS): A concise image representation technique for multiple parameters

    International Nuclear Information System (INIS)

    Wu, L.C.; Yeh, S.H.; Chen, Z.; Liu, R.S.

    1984-01-01

    Based on m parametric images (PIs) derived from a dynamic series (DS), each pixel of DS is regarded as an m-dimensional vector. Given one set of normal samples (pixels) N and another of abnormal samples A, probability density functions (pdfs) of both sets are estimated. Any unknown sample is classified into N or A by calculating the probability of its being in the abnormal set using the Bayes' theorem. Instead of estimating the multivariate pdfs, a distance ratio transformation is introduced to map the m-dimensional sample space to one dimensional Euclidean space. Consequently, the image that localizes the regional abnormalities is characterized by the probability of being abnormal. This leads to the new representation scheme of PBIs. Tc-99m HIDA study for detecting intrahepatic lithiasis (IL) was chosen as an example of constructing PBI from 3 parameters derived from DS and such a PBI was compared with those 3 PIs, namely, retention ratio image (RRI), peak time image (TNMAX) and excretion mean transit time image (EMTT). 32 normal subjects and 20 patients with proved IL were collected and analyzed. The resultant sensitivity and specificity of PBI were 97% and 98% respectively. They were superior to those of any of the 3 PIs: RRI (94/97), TMAX (86/88) and EMTT (94/97). Furthermore, the contrast of PBI was much better than that of any other image. This new image formation technique, based on multiple parameters, shows the functional abnormalities in a structural way. Its good contrast makes the interpretation easy. This technique is powerful compared to the existing parametric image method

  9. Computing ordinary least-squares parameter estimates for the National Descriptive Model of Mercury in Fish

    Science.gov (United States)

    Donato, David I.

    2013-01-01

    A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time using less computer memory than general methods. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p^2 + 16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
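
    The single-pass idea, accumulating X'X and X'y one observation at a time and then solving the normal equations without ever storing X, can be sketched as follows; this is a generic (weighted) least-squares example, not the NDMMF itself.

```python
# Single-pass accumulation of the (weighted) normal equations X'X and X'y,
# solved without ever materialising the design matrix X.  Generic example,
# not the NDMMF model itself; memory use is O(p^2) regardless of N.
import numpy as np

p = 4
XtX, Xty = np.zeros((p, p)), np.zeros(p)

rng = np.random.default_rng(7)
beta_true = np.array([2.0, -1.0, 0.5, 3.0])

for _ in range(100_000):                    # stream over the N observations
    x = rng.normal(size=p)                  # one row of X at a time
    y = x @ beta_true + rng.normal(0, 0.1)
    w = 1.0                                 # per-observation weight (WLS-ready)
    XtX += w * np.outer(x, x)
    Xty += w * y * x

beta_hat = np.linalg.solve(XtX, Xty)        # solve the normal equations
print(beta_hat)                             # close to beta_true
```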

  10. The concurrent multiplicative-additive approach for gauge-radar/satellite multisensor precipitation estimates

    Science.gov (United States)

    Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.

    2010-12-01

    Objective analysis schemes (OAS), also called 'successive correction methods' or 'observation nudging', have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based raingauge networks. However, in contrast to the more complex geostatistical approaches, the OAS techniques for this use are not optimized. On the other hand, geostatistical techniques ideally require, at the least, modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be soundly done. Here, we propose a new procedure (concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through a successive correction of residuals resulting from a Gaussian kernel smoother applied on spatial samples. The CMA-OAS, first, poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level. The approach considers radar estimates as background a priori information (first guess), so that nudging to observations (gauges) may be relaxed smoothly to the first guess, and the relaxation shape is obtained from the sequential

  11. ESTIMATION OF HUMAN BODY SHAPE PARAMETERS USING MICROSOFT KINECT SENSOR

    Directory of Open Access Journals (Sweden)

    D. M. Vasilkov

    2017-01-01

    Full Text Available In the paper a human body shape estimation technology based on scan data acquired from the Microsoft Kinect sensor controller is described. This device includes an RGB camera and a depth sensor that provides, for each pixel of the image, a distance from the camera focus to the object. A scan session produces a triangulated high-density surface corrupted by noise in the form of oscillations, isolated fragments and holes. When scanning a human, additional noise comes from garment folds and wrinkles. An algorithm for creating a sparse and regular 3D human body model (avatar) free of these defects, which approximates the shape, posture and basic metrics of the scanned body, is proposed. This solution finds application in the individual clothing industry and in computer games as well.

  12. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    Science.gov (United States)

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.

  13. Channel Parameter Estimation for Scatter Cluster Model Using Modified MUSIC Algorithm

    Directory of Open Access Journals (Sweden)

    Jinsheng Yang

    2012-01-01

    Full Text Available Recently, scatter cluster models which precisely evaluate the performance of the wireless communication system have been proposed in the literature. However, the conventional SAGE algorithm does not work for these scatter cluster-based models because it performs poorly when the transmit signals are highly correlated. In this paper, we estimate the time of arrival (TOA), the direction of arrival (DOA), and the Doppler frequency for the scatter cluster model by the modified multiple signal classification (MUSIC) algorithm. Using the space-time characteristics of the multiray channel, the proposed algorithm combines temporal filtering techniques and spatial smoothing techniques to isolate and estimate the incoming rays. The simulation results indicate that the proposed algorithm has lower complexity and is less time-consuming in the dense multipath environment than the SAGE algorithm. Furthermore, the estimation performance improves with the number of receive-array elements and the sample length. Thus, the problem of channel parameter estimation for the scatter cluster model can be effectively addressed with the proposed modified MUSIC algorithm.
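
    The core of any MUSIC-type estimator is the noise-subspace pseudospectrum. A bare-bones DOA-only example for a uniform linear array is sketched below; the temporal filtering, spatial smoothing and joint TOA/Doppler estimation of the proposed modification are not included, and all scenario parameters are invented.

```python
# Bare-bones MUSIC DOA sketch for a uniform linear array.  The temporal
# filtering / spatial smoothing of the modified algorithm is not included,
# and the scenario (8 sensors, 2 sources) is invented.
import numpy as np
from scipy.signal import find_peaks

M, d, n_snap = 8, 0.5, 400                 # sensors, spacing (wavelengths), snapshots
true_doas = np.deg2rad([-20.0, 35.0])
rng = np.random.default_rng(8)

steer = lambda th: np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(th))
S = rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))
noise = 0.1 * (rng.normal(size=(M, n_snap)) + 1j * rng.normal(size=(M, n_snap)))
X = steer(true_doas) @ S + noise

R = X @ X.conj().T / n_snap                # sample spatial covariance
_, vecs = np.linalg.eigh(R)                # eigenvalues in ascending order
En = vecs[:, : M - 2]                      # noise subspace (2 sources assumed)

grid = np.deg2rad(np.linspace(-90, 90, 721))
p_music = 1.0 / np.sum(np.abs(En.conj().T @ steer(grid)) ** 2, axis=0)
peaks, _ = find_peaks(p_music)
best = peaks[np.argsort(p_music[peaks])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[best])))
```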

  14. Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt

    2015-01-01

    We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non...... to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super...... interpolation increases the estimation precision....

  15. Mathematical properties and parameter estimation for transit compartment pharmacodynamic models.

    Science.gov (United States)

    Yates, James W T

    2008-07-03

    One feature of recent research in pharmacodynamic modelling has been the move towards more mechanistically based model structures. However, in all of these models there are common sub-systems, such as feedback loops and time-delays, whose properties and contribution to the model behaviour merit some mathematical analysis. In this paper a common pharmacodynamic model sub-structure is considered: the linear transit compartment. These models have a number of interesting properties as the length of the cascade chain is increased. In the limiting case a pure time-delay is achieved [Milsum, J.H., 1966. Biological Control Systems Analysis. McGraw-Hill Book Company, New York], with the initial behaviour becoming increasingly sensitive to parameter value perturbation. It is also shown that the modelled drug effect is attenuated, though the duration of action is longer. Through this analysis the range of behaviours that such models are capable of reproducing is characterised. The properties of these models and the experimental requirements are discussed in order to highlight how mathematical analysis prior to experimentation can enhance the utility of mathematical modelling.
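
    The sub-structure discussed here is the chain dA1/dt = ktr(u(t) - A1), dAi/dt = ktr(A(i-1) - Ai), with mean transit time n/ktr. The short simulation below illustrates the limiting behaviour mentioned above: holding the mean transit time fixed while lengthening the chain makes the output approach a pure time-delayed copy of the input (toy pulse input, not a specific drug model).

```python
# Transit chain dA1/dt = ktr*(u(t) - A1), dAi/dt = ktr*(A(i-1) - Ai).  With the
# mean transit time n/ktr held fixed, a longer chain approaches a pure delay.
import numpy as np
from scipy.integrate import solve_ivp

def transit_output(n, mtt, u, t_eval):
    ktr = n / mtt
    def rhs(t, a):
        da = np.empty(n)
        da[0] = ktr * (u(t) - a[0])
        da[1:] = ktr * (a[:-1] - a[1:])
        return da
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), np.zeros(n),
                    t_eval=t_eval, max_step=0.05)
    return sol.y[-1]                       # signal leaving the last compartment

u = lambda t: 1.0 if 1.0 <= t <= 2.0 else 0.0     # unit pulse input
t = np.linspace(0, 8, 161)
for n in (1, 3, 10, 50):
    out = transit_output(n, mtt=2.0, u=u, t_eval=t)
    print(f"n = {n:2d}: peak = {out.max():.2f} at t = {t[out.argmax()]:.2f}")
```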

  16. Estimation of cauliflower mass transfer parameters during convective drying

    Science.gov (United States)

    Sahin, Medine; Doymaz, İbrahim

    2017-02-01

    The study was conducted to evaluate the effect of pre-treatments such as citric acid and hot water blanching, and of air temperature, on the drying and rehydration characteristics of cauliflower slices. Experiments were carried out at four different drying air temperatures of 50, 60, 70 and 80 °C with an air velocity of 2.0 m/s. It was observed that the drying and rehydration characteristics of cauliflower slices were greatly influenced by air temperature and pre-treatment. Six commonly used mathematical models were evaluated to predict the drying kinetics of cauliflower slices. The Midilli et al. model described the drying behaviour of cauliflower slices at all temperatures better than the other models. The values of the effective moisture diffusivity (Deff) were determined using Fick's law of diffusion and were between 4.09 × 10⁻⁹ and 1.88 × 10⁻⁸ m²/s. The activation energy was estimated by an Arrhenius-type equation and was 23.40, 29.09 and 26.39 kJ/mol for the citric acid, blanched and control samples, respectively.
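    For reference, the two estimation steps named above are simple regressions. Under the common infinite-slab, first-term approximation of Fick's solution, Deff follows from the slope of ln(MR) versus time, and the activation energy follows from an Arrhenius fit of ln(Deff) versus 1/T. The sketch below uses hypothetical values (the slab geometry, half-thickness and sample numbers are assumptions) and is not the authors' calculation.

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def d_eff_from_slope(t, mr, half_thickness):
    """Effective moisture diffusivity from the slope of ln(MR) vs t, using the
    first term of Fick's series solution for an infinite slab:
        ln(MR) = ln(8/pi^2) - (pi^2 * D_eff / (4 L^2)) * t
    t in seconds, half-thickness L in metres."""
    slope = np.polyfit(t, np.log(mr), 1)[0]          # 1/s, negative
    return -slope * 4.0 * half_thickness**2 / np.pi**2

def activation_energy(temps_c, d_effs):
    """Arrhenius fit ln(D_eff) = ln(D0) - Ea/(R*T); returns Ea in kJ/mol."""
    inv_T = 1.0 / (np.array(temps_c) + 273.15)
    slope = np.polyfit(inv_T, np.log(d_effs), 1)[0]
    return -slope * R_GAS / 1000.0

# Purely illustrative (made-up) diffusivities at the four drying temperatures.
temps = [50, 60, 70, 80]
d_effs = [4.1e-9, 7.5e-9, 1.2e-8, 1.9e-8]
print(f"Ea = {activation_energy(temps, d_effs):.1f} kJ/mol")
```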

  17. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    Science.gov (United States)

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…

  18. The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.

    Science.gov (United States)

    Kaskowitz, Gary S.; De Ayala, R. J.

    2001-01-01

    Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…

  19. Maximum-likelihood estimation of the hyperbolic parameters from grouped observations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1988-01-01

    a least-squares problem. The second procedure Hypesti first approaches the maximum-likelihood estimate by iterating in the profile-log likelihood function for the scale parameter. Close to the maximum of the likelihood function, the estimation is brought to an end by iteration, using all four parameters...

  20. On the design of experimental separation processes for maximum accuracy in the estimation of their parameters

    International Nuclear Information System (INIS)

    Volkman, Y.

    1980-07-01

    The optimal design of experimental separation processes for maximum accuracy in the estimation of process parameters is discussed. The sensitivity factor correlates the inaccuracy of the analytical methods with the inaccuracy of the estimation of the enrichment ratio. It is minimized according to the design parameters of the experiment and the characteristics of the analytical method

  1. Application of isotopic information for estimating parameters in Philip infiltration model

    Directory of Open Access Journals (Sweden)

    Tao Wang

    2016-10-01

    Full Text Available Minimizing parameter uncertainty is crucial in the application of hydrologic models. Isotopic information in various hydrologic components of the water cycle can expand our knowledge of the dynamics of water flow in the system, provide additional information for parameter estimation, and improve parameter identifiability. This study combined the Philip infiltration model with an isotopic mixing model using an isotopic mass balance approach for estimating parameters in the Philip infiltration model. Two approaches to parameter estimation were compared: (a) using isotopic information to determine the soil water transmission and then hydrologic information to estimate the soil sorptivity, and (b) using hydrologic information to determine the soil water transmission and the soil sorptivity. Results of parameter estimation were verified through a rainfall infiltration experiment in a laboratory under rainfall with constant isotopic compositions and uniform initial soil water content conditions. Experimental results showed that approach (a), using isotopic and hydrologic information, estimated the soil water transmission in the Philip infiltration model in a manner that matched measured values well. The results of parameter estimation of approach (a) were better than those of approach (b). It was also found that the analytical precision of hydrogen and oxygen stable isotopes had a significant effect on parameter estimation using isotopic information.
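    For approach (b) above, estimating both Philip parameters from infiltration data alone reduces to a linear least-squares fit of cumulative infiltration against sqrt(t) and t. The sketch below uses hypothetical data and is not the isotopic mass-balance procedure of the study.

```python
import numpy as np

def fit_philip(t, I):
    """Least-squares fit of the Philip infiltration model
        I(t) = S * sqrt(t) + A * t
    to cumulative infiltration data (hydrologic information only).
    Returns (S, A): sorptivity and the transmission (steady) term."""
    t = np.asarray(t, dtype=float)
    X = np.column_stack([np.sqrt(t), t])
    coef, *_ = np.linalg.lstsq(X, np.asarray(I, dtype=float), rcond=None)
    return coef[0], coef[1]

# Hypothetical data: time in minutes, cumulative infiltration in cm.
t = np.array([5, 10, 20, 30, 45, 60.0])
I = np.array([1.1, 1.7, 2.6, 3.3, 4.2, 5.0])
S, A = fit_philip(t, I)
print(f"sorptivity S = {S:.3f} cm/min^0.5, transmission A = {A:.4f} cm/min")
```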

  2. Estimating 3D Object Parameters from 2D Grey-Level Images

    NARCIS (Netherlands)

    Houkes, Z.

    2000-01-01

    This thesis describes a general framework for parameter estimation, which is suitable for computer vision applications. The approach described combines 3D modelling, animation and estimation tools to determine parameters of objects in a scene from 2D grey-level images. The animation tool predicts

  3. Bridging the gaps between non-invasive genetic sampling and population parameter estimation

    Science.gov (United States)

    Francesca Marucco; Luigi Boitani; Daniel H. Pletscher; Michael K. Schwartz

    2011-01-01

    Reliable estimates of population parameters are necessary for effective management and conservation actions. The use of genetic data for capture-recapture (CR) analyses has become an important tool to estimate population parameters for elusive species. Strong emphasis has been placed on the genetic analysis of non-invasive samples, or on the CR analysis; however,...

  4. Leading-Edge Flow Sensing for Aerodynamic Parameter Estimation

    Science.gov (United States)

    Saini, Aditya

    The identification of inflow air data quantities such as airspeed, angle of attack, and local lift coefficient on various sections of a wing or rotor blade provides the capability for load monitoring, aerodynamic diagnostics, and control on devices ranging from air vehicles to wind turbines. Real-time measurement of aerodynamic parameters during flight provides the ability to enhance aircraft operating capabilities while preventing dangerous stall situations. This thesis presents a novel Leading-Edge Flow Sensing (LEFS) algorithm for the determination of the air-data parameters using discrete surface pressures measured at a few ports in the vicinity of the leading edge of a wing or blade section. The approach approximates the leading-edge region of the airfoil as a parabola and uses the pressure distribution from the exact potential-flow solution for the parabola to fit the pressures measured from the ports. Pressures sensed at five discrete locations near the leading edge of an airfoil are given as input to the algorithm to solve the model using a simple nonlinear regression. The algorithm directly computes the inflow velocity, the stagnation-point location, section angle of attack and lift coefficient. The performance of the algorithm is assessed using computational and experimental data in the literature for airfoils under different flow conditions. The results show good correlation between the actual and predicted aerodynamic quantities within the pre-stall regime, even for a rotating blade section. Sensing the deviation of the aerodynamic behavior from the linear regime requires additional information on the location of flow separation on the airfoil surface. Bio-inspired artificial hair sensors were explored as a part of the current research for stall detection. The response of such artificial micro-structures can identify critical flow characteristics, which relate directly to the stall behavior. The response of the microfences was recorded via an optical microscope for

  5. ORBSIM- ESTIMATING GEOPHYSICAL MODEL PARAMETERS FROM PLANETARY GRAVITY DATA

    Science.gov (United States)

    Sjogren, W. L.

    1994-01-01

    The ORBSIM program was developed for the accurate extraction of geophysical model parameters from Doppler radio tracking data acquired from orbiting planetary spacecraft. The model of the proposed planetary structure is used in a numerical integration of the spacecraft along simulated trajectories around the primary body. Using line of sight (LOS) Doppler residuals, ORBSIM applies fast and efficient modelling and optimization procedures which avoid the traditional complex dynamic reduction of data. ORBSIM produces quantitative geophysical results such as size, depth, and mass. ORBSIM has been used extensively to investigate topographic features on the Moon, Mars, and Venus. The program has proven particularly suitable for modelling gravitational anomalies and mascons. The basic observable for spacecraft-based gravity data is the Doppler frequency shift of a transponded radio signal. The time derivative of this signal carries information regarding the gravity field acting on the spacecraft in the LOS direction (the LOS direction being the path between the spacecraft and the receiving station, either Earth or another satellite). There are many dynamic factors taken into account: earth rotation, solar radiation, acceleration from planetary bodies, tracking station time and location adjustments, etc. The actual trajectories of the spacecraft are simulated using least-squares fits to conic motion. The theoretical Doppler readings from the simulated orbits are compared to actual Doppler observations and another least squares adjustment is made. ORBSIM has three modes of operation: trajectory simulation, optimization, and gravity modelling. In all cases, an initial gravity model of curved and/or flat disks, harmonics, and/or a force table are required input. ORBSIM is written in FORTRAN 77 for batch execution and has been implemented on a DEC VAX 11/780 computer operating under VMS. This program was released in 1985.

  6. Optical movie encryption based on a discrete multiple-parameter fractional Fourier transform

    International Nuclear Information System (INIS)

    Zhong, Zhi; Zhang, Yujie; Shan, Mingguang; Wang, Ying; Zhang, Yabin; Xie, Hong

    2014-01-01

    A movie encryption scheme is proposed using a discrete multiple-parameter fractional Fourier transform and theta modulation. After being modulated by sinusoidal amplitude grating, each frame of the movie is transformed by a filtering procedure and then multiplexed into a complex signal. The complex signal is multiplied by a pixel scrambling operation and random phase mask, and then encrypted by a discrete multiple-parameter fractional Fourier transform. The movie can be retrieved by using the correct keys, such as a random phase mask, a pixel scrambling operation, the parameters in a discrete multiple-parameter fractional Fourier transform and a time sequence. Numerical simulations have been performed to demonstrate the validity and the security of the proposed method. (paper)

  7. Bayesian estimation of regularization parameters for deformable surface models

    International Nuclear Information System (INIS)

    Cunningham, G.S.; Lehovich, A.; Hanson, K.M.

    1999-01-01

    In this article the authors build on their past attempts to reconstruct a 3D, time-varying bolus of radiotracer from first-pass data obtained by the dynamic SPECT imager, FASTSPECT, built by the University of Arizona. The object imaged is a CardioWest total artificial heart. The bolus is entirely contained in one ventricle and its associated inlet and outlet tubes. The model for the radiotracer distribution at a given time is a closed surface parameterized by 482 vertices that are connected to make 960 triangles, with nonuniform intensity variations of radiotracer allowed inside the surface on a voxel-to-voxel basis. The total curvature of the surface is minimized through the use of a weighted prior in the Bayesian framework, as is the weighted norm of the gradient of the voxellated grid. MAP estimates for the vertices, interior intensity voxels and background count level are produced. The strengths of the priors, or hyperparameters, are determined by maximizing the probability of the data given the hyperparameters, called the evidence. The evidence is calculated by first assuming that the posterior is approximately normal in the values of the vertices and voxels, and then by evaluating the integral of the multi-dimensional normal distribution. This integral (which requires evaluating the determinant of a covariance matrix) is computed by applying a recent algorithm from Bai et al. that calculates the needed determinant efficiently. They demonstrate that the radiotracer is highly inhomogeneous in early time frames, as suspected in earlier reconstruction attempts that assumed a uniform intensity of radiotracer within the closed surface, and that the optimal choice of hyperparameters is substantially different for different time frames

  8. Estimation of power feedback parameters of pulse reactor IBR-2M on transients

    International Nuclear Information System (INIS)

    Pepyolyshev, Yu.N.; Popov, A.K.

    2013-01-01

    Parameters of the IBR-2M reactor power feedback (PFB) are estimated from a model of the reactor dynamics by mathematical treatment of two recorded transients. The frequency characteristics and pulse transient characteristics corresponding to these PFB parameters are calculated. The PFB parameters obtained in this way can be regarded as a rapid preliminary estimate, since the underlying measurements take no more than 30 minutes. The total PFB is negative at 1 and 2 MW. With the obtained PFB parameter estimates, the stability margins of the IBR-2M reactor in the self-regulation mode can be considered satisfactory

  9. Nonlinear adaptive control system design with asymptotically stable parameter estimation error

    Science.gov (United States)

    Mishkov, Rumen; Darmonski, Stanislav

    2018-01-01

    The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and the capability to directly control the transient response time of the estimates. The method proposed modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation. It relies on the parametric identifiability system property introduced in the paper. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied in a nonlinear adaptive speed tracking vector control of a three-phase induction motor.

  10. A Parameter Estimation Method for Nonlinear Systems Based on Improved Boundary Chicken Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Shaolong Chen

    2016-01-01

    Full Text Available Parameter estimation is an important problem in nonlinear system modeling and control. By constructing an appropriate fitness function, parameter estimation of a system can be converted into a multidimensional parameter optimization problem. As a novel swarm intelligence algorithm, chicken swarm optimization (CSO) has attracted much attention owing to its good global convergence and robustness. In this paper, a method based on improved boundary chicken swarm optimization (IBCSO) is proposed for parameter estimation of nonlinear systems, demonstrated and tested on the Lorenz system and a coupled motor system. Furthermore, we have analyzed the influence of the time series on the estimation accuracy. Computer simulation results show that the method is feasible and performs well for parameter estimation of nonlinear systems.

  11. Bayesian Estimation of the Scale Parameter of Inverse Weibull Distribution under the Asymmetric Loss Functions

    Directory of Open Access Journals (Sweden)

    Farhad Yahgmaei

    2013-01-01

    Full Text Available This paper proposes different methods of estimating the scale parameter in the inverse Weibull distribution (IWD). Specifically, the maximum likelihood estimator of the scale parameter in the IWD is introduced. We then derive the Bayes estimators for the scale parameter in the IWD by considering quasi, gamma, and uniform prior distributions under the squared error, entropy, and precautionary loss functions. Finally, the different proposed estimators are compared through extensive simulation studies in terms of their mean square errors and risk functions.
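    As a point of reference for the maximum-likelihood part, under one common parameterization of the IWD, F(x) = exp(-(beta/x)^alpha), the scale MLE with known shape has the closed form beta_hat = (n / sum x_i^(-alpha))^(1/alpha). The parameterization and the known-shape assumption are mine, not necessarily the paper's; the Bayes estimators under the various priors and loss functions are not reproduced.

```python
import numpy as np

def iwd_scale_mle(x, alpha):
    """Maximum-likelihood estimate of the scale parameter beta of the
    inverse Weibull distribution F(x) = exp(-(beta/x)**alpha), x > 0,
    assuming the shape parameter alpha is known:
        beta_hat = (n / sum(x_i**(-alpha)))**(1/alpha)
    """
    x = np.asarray(x, dtype=float)
    return (x.size / np.sum(x ** (-alpha))) ** (1.0 / alpha)

# Quick check against simulated data: if W is standard Weibull(alpha),
# then X = beta / W follows the IWD above.
rng = np.random.default_rng(0)
alpha, beta = 2.0, 3.0
sample = beta / rng.weibull(alpha, size=5000)
print(iwd_scale_mle(sample, alpha))   # should be close to 3.0
```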

  12. Joint state and parameter estimation for a class of cascade systems: Application to a hemodynamic model

    KAUST Repository

    Zayane, Chadia

    2014-06-01

    In this paper, we address a special case of state and parameter estimation, where the system can be put into a cascade form that allows the state components and the set of unknown parameters to be estimated separately. Inspired by the nonlinear Balloon hemodynamic model used in the functional Magnetic Resonance Imaging problem, we propose a hierarchical approach. The system is divided into two subsystems in cascade. The state and input are first estimated from a noisy measured signal using an adaptive observer. The obtained input is then used to estimate the parameters of a linear system using the modulating functions method. Some numerical results are presented to illustrate the efficiency of the proposed method.

  13. Parameter and state estimation of experimental chaotic systems using synchronization

    Science.gov (United States)

    Quinn, John C.; Bryant, Paul H.; Creveling, Daniel R.; Klein, Sallee R.; Abarbanel, Henry D. I.

    2009-07-01

    We examine the use of synchronization as a mechanism for extracting parameter and state information from experimental systems. We focus on important aspects of this problem that have received little attention previously and we explore them using experiments and simulations with the chaotic Colpitts oscillator as an example system. We explore the impact of model imperfection on the ability to extract valid information from an experimental system. We compare two optimization methods: an initial value method and a constrained method. Each of these involves coupling the model equations to the experimental data in order to regularize the chaotic motions on the synchronization manifold. We explore both time-dependent and time-independent coupling and discuss the use of periodic impulse coupling. We also examine both optimized and fixed (or manually adjusted) coupling. For the case of an optimized time-dependent coupling function u(t) we find a robust structure which includes sharp peaks and intervals where it is zero. This structure shows a strong correlation with the location in phase space and appears to depend on noise, imperfections of the model, and the Lyapunov direction vectors. For time-independent coupling we find the counterintuitive result that often the optimal rms error in fitting the model to the data initially increases with coupling strength. Comparison of this result with that obtained using simulated data may provide one measure of model imperfection. The constrained method with time-dependent coupling appears to have benefits in synchronizing long data sets with minimal impact, while the initial value method with time-independent coupling tends to be substantially faster, more flexible, and easier to use. We also describe a method of coupling which is useful for sparse experimental data sets. Our use of the Colpitts oscillator allows us to explore in detail the case of a system with one positive Lyapunov exponent. The methods we explored are easily

  14. Unifying parameter estimation and the Deutsch-Jozsa algorithm for continuous variables

    International Nuclear Information System (INIS)

    Zwierz, Marcin; Perez-Delgado, Carlos A.; Kok, Pieter

    2010-01-01

    We reveal a close relationship between quantum metrology and the Deutsch-Jozsa algorithm on continuous-variable quantum systems. We develop a general procedure, characterized by two parameters, that unifies parameter estimation and the Deutsch-Jozsa algorithm. Depending on which parameter we keep constant, the procedure implements either the parameter-estimation protocol or the Deutsch-Jozsa algorithm. The parameter-estimation part of the procedure attains the Heisenberg limit and is therefore optimal. Due to the use of approximate normalizable continuous-variable eigenstates, the Deutsch-Jozsa algorithm is probabilistic. The procedure estimates a value of an unknown parameter and solves the Deutsch-Jozsa problem without the use of any entanglement.

  15. The logic of comparative life history studies for estimating key parameters, with a focus on natural mortality rate

    Science.gov (United States)

    Hoenig, John M; Then, Amy Y.-H.; Babcock, Elizabeth A.; Hall, Norman G.; Hewitt, David A.; Hesp, Sybrand A.

    2016-01-01

    There are a number of key parameters in population dynamics that are difficult to estimate, such as natural mortality rate, intrinsic rate of population growth, and stock-recruitment relationships. Often, these parameters of a stock are, or can be, estimated indirectly on the basis of comparative life history studies. That is, the relationship between a difficult to estimate parameter and life history correlates is examined over a wide variety of species in order to develop predictive equations. The form of these equations may be derived from life history theory or simply be suggested by exploratory data analysis. Similarly, population characteristics such as potential yield can be estimated by making use of a relationship between the population parameter and bio-chemico–physical characteristics of the ecosystem. Surprisingly, little work has been done to evaluate how well these indirect estimators work and, in fact, there is little guidance on how to conduct comparative life history studies and how to evaluate them. We consider five issues arising in such studies: (i) the parameters of interest may be ill-defined idealizations of the real world, (ii) true values of the parameters are not known for any species, (iii) selecting data based on the quality of the estimates can introduce a host of problems, (iv) the estimates that are available for comparison constitute a non-random sample of species from an ill-defined population of species of interest, and (v) the hierarchical nature of the data (e.g. stocks within species within genera within families, etc., with multiple observations at each level) warrants consideration. We discuss how these issues can be handled and how they shape the kinds of questions that can be asked of a database of life history studies.

  16. Estimation of caries experience by multiple imputation and direct standardization

    NARCIS (Netherlands)

    Schuller, A. A.; Van Buuren, S.

    2014-01-01

    Valid estimates of caries experience are needed to monitor oral population health. Obtaining such estimates in practice is often complicated by nonresponse and missing data. The goal of this study was to estimate caries experiences in a population of children aged 5 and 11 years, in the presence of

  18. Joint Angle and Frequency Estimation Using Multiple-Delay Output Based on ESPRIT

    Science.gov (United States)

    Xudong, Wang

    2010-12-01

    This paper presents a novel ESPRIT-based joint angle and frequency estimation using multiple-delay output (MDJAFE). The algorithm estimates the angles and frequencies jointly, and the use of multiple-delay outputs greatly improves the estimation accuracy compared with a conventional algorithm. The behavior of the proposed algorithm is verified by simulations.

  19. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    Science.gov (United States)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using Kalman filter and recursive least square based input force estimation technique. Kalman filter based input force estimation technique requires state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.

  20. Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Yasue, Yoshihiro; Endo, Tomohiro; Kodama, Yasuhiro; Ohoka, Yasunori; Tatsumi, Masahiro

    2013-01-01

    An uncertainty reduction method for core safety parameters, for which measurement values are not obtained, is proposed. We empirically recognize that there exist some correlations among the prediction errors of core safety parameters, e.g., a correlation between the control rod worth and the relative power of the assembly at the corresponding position. Correlations of errors among core safety parameters are theoretically estimated using the covariance of cross sections and the sensitivity coefficients of core parameters. The estimated correlations of errors among core safety parameters are verified through a direct Monte Carlo sampling method. Once the correlation of errors among core safety parameters is known, we can estimate the uncertainty of a safety parameter for which a measurement value is not obtained. (author)
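    The theoretical step described above, propagating cross-section covariance through sensitivity coefficients to obtain the covariance (and hence correlation) of core-parameter errors, is the standard first-order sandwich formula V = S C S^T. The sketch below applies it to a hypothetical two-parameter, three-cross-section case; the matrices are illustrative only.

```python
import numpy as np

def parameter_error_covariance(S, C):
    """First-order ("sandwich") propagation of cross-section covariance C to
    core-parameter errors via sensitivity coefficients S: V = S @ C @ S.T.
    S : (n_params, n_xs) sensitivity matrix, C : (n_xs, n_xs) covariance."""
    return S @ C @ S.T

def error_correlation(V):
    """Correlation matrix of core-parameter errors from their covariance."""
    sd = np.sqrt(np.diag(V))
    return V / np.outer(sd, sd)

# Hypothetical example: two core parameters, three cross sections.
S = np.array([[0.8, -0.2, 0.1],      # e.g. control rod worth sensitivities
              [0.5, -0.1, 0.4]])     # e.g. local assembly power sensitivities
C = np.diag([0.02, 0.01, 0.03]) ** 2  # assumed relative cross-section covariance
V = parameter_error_covariance(S, C)
print(error_correlation(V))
```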

  1. Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation

    Science.gov (United States)

    Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei

    2018-04-01

    Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.
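    Ensemble-based parameter estimation of this kind commonly treats the parameter as part of an augmented state and updates its ensemble with a gain built from ensemble covariances. Below is a minimal scalar-parameter, scalar-observation sketch of one stochastic EnKF-style update; it is a generic illustration, not the coupled data assimilation system used with CM2.1 in the study.

```python
import numpy as np

def enkf_parameter_update(params_ens, obs_ens_model, obs, obs_err_var, seed=0):
    """One stochastic EnKF-style update of a scalar-parameter ensemble.

    params_ens    : (N,) ensemble of a convection-parameter value
    obs_ens_model : (N,) model-predicted observation for each member
    obs           : scalar observed value
    obs_err_var   : observation error variance
    """
    rng = np.random.default_rng(seed)
    cov_py = np.cov(params_ens, obs_ens_model)[0, 1]      # parameter-obs covariance
    var_y = np.var(obs_ens_model, ddof=1) + obs_err_var   # innovation variance
    gain = cov_py / var_y
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_err_var), params_ens.size)
    return params_ens + gain * (perturbed_obs - obs_ens_model)
```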

  2. Parameter estimation in a stochastic model of the tubuloglomerular feedback mechanism in a rat nephron

    DEFF Research Database (Denmark)

    Ditlevsen, Susanne; Yip, Kay-Pong; Holstein-Rathlou, N.-H.

    2005-01-01

    by a variety of influences, which change over time (blood pressure, hormone levels, etc.). To estimate the key parameters of the model we have developed a new estimation method based on the oscillatory behavior of the data. The dynamics is characterized by the spectral density, which has been estimated...

  3. Admissible Estimators in the General Multivariate Linear Model with Respect to Inequality Restricted Parameter Set

    Directory of Open Access Journals (Sweden)

    Shangli Zhang

    2009-01-01

    Full Text Available By using the methods of linear algebra and matrix inequality theory, we obtain a characterization of admissible estimators in the general multivariate linear model with respect to an inequality-restricted parameter set. In the classes of homogeneous and general linear estimators, necessary and sufficient conditions for the estimators of the regression coefficient function to be admissible are established.

  4. Estimation of the location parameter of distributions with known coefficient of variation by record values

    Directory of Open Access Journals (Sweden)

    N. K. Sajeevkumar

    2014-09-01

    Full Text Available In this article, we derive the Best Linear Unbiased Estimator (BLUE) of the location parameter of certain distributions with known coefficient of variation using record values. Efficiency comparisons are also made between the proposed estimator and some of the usual estimators. Finally, we present real-life data to illustrate the utility of the results developed in this article.

  5. Improved ESPRIT Method for Joint Direction-of-Arrival and Frequency Estimation Using Multiple-Delay Output

    Directory of Open Access Journals (Sweden)

    Wang Xudong

    2012-01-01

    Full Text Available An automatically paired joint direction-of-arrival (DOA) and frequency estimation method is presented to overcome the unsatisfactory performance of the estimation of signal parameters via rotational invariance techniques (ESPRIT)-like algorithm of Wang (2010), which requires an additional pairing step. By using the multiple-delay output of a uniform linear antenna array (ULA), the proposed algorithm can jointly estimate angles and frequencies with an improved ESPRIT. Compared with Wang's ESPRIT algorithm, the angle estimation performance of the proposed algorithm is greatly improved, while the frequency estimation performance is the same as that of Wang's ESPRIT algorithm. Furthermore, the proposed algorithm obtains automatically paired DOA and frequency parameters and has a computational complexity comparable to that of Wang's ESPRIT algorithm. The proposed algorithm also works well for nonuniform linear arrays. The useful behavior of the proposed algorithm is verified by simulations.
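    The rotational-invariance idea both algorithms rely on can be illustrated with the basic least-squares ESPRIT DOA step for a ULA: build the sample covariance, take the signal subspace, and read the angles from the eigenvalues of the operator linking the two shifted subarrays. The sketch below covers only the DOA part and is not the multiple-delay joint angle and frequency algorithm of the article.

```python
import numpy as np

def esprit_doa(X, n_sources, d=0.5):
    """Least-squares ESPRIT DOA estimates for a uniform linear array.

    X : (M, N) array of snapshots, d : element spacing in wavelengths.
    Returns the estimated angles in degrees (pairing with frequencies,
    as in the article, is not handled here)."""
    M, N = X.shape
    R = X @ X.conj().T / N                      # sample covariance
    w, V = np.linalg.eigh(R)
    Es = V[:, -n_sources:]                      # signal subspace
    psi = np.linalg.pinv(Es[:-1]) @ Es[1:]      # rotational operator
    phases = np.angle(np.linalg.eigvals(psi))   # 2*pi*d*sin(theta) per source
    return np.degrees(np.arcsin(phases / (2 * np.pi * d)))
```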

  6. Probabilistic parameter estimation of activated sludge processes using Markov Chain Monte Carlo.

    Science.gov (United States)

    Sharifi, Soroosh; Murthy, Sudhir; Takács, Imre; Massoudieh, Arash

    2014-03-01

    One of the most important challenges in making activated sludge models (ASMs) applicable to design problems is identifying the values of its many stoichiometric and kinetic parameters. When wastewater characteristics data from full-scale biological treatment systems are used for parameter estimation, several sources of uncertainty, including uncertainty in measured data, external forcing (e.g. influent characteristics), and model structural errors influence the value of the estimated parameters. This paper presents a Bayesian hierarchical modeling framework for the probabilistic estimation of activated sludge process parameters. The method provides the joint probability density functions (JPDFs) of stoichiometric and kinetic parameters by updating prior information regarding the parameters obtained from expert knowledge and literature. The method also provides the posterior correlations between the parameters, as well as a measure of sensitivity of the different constituents with respect to the parameters. This information can be used to design experiments to provide higher information content regarding certain parameters. The method is illustrated using the ASM1 model to describe synthetically generated data from a hypothetical biological treatment system. The results indicate that data from full-scale systems can narrow down the ranges of some parameters substantially whereas the amount of information they provide regarding other parameters is small, due to either large correlations between some of the parameters or a lack of sensitivity with respect to the parameters. Copyright © 2013 Elsevier Ltd. All rights reserved.
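    The Bayesian machinery behind such probabilistic parameter estimation can be illustrated with a generic random-walk Metropolis sampler: given a function returning the log posterior (log prior plus log likelihood of the plant data under the ASM), it draws samples from the joint posterior of the stoichiometric and kinetic parameters. This is a minimal sketch, not the hierarchical framework of the paper; the step size, iteration count and log_post interface are assumptions.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=20000, step=0.1, seed=0):
    """Random-walk Metropolis sampler for a parameter posterior.

    log_post : callable returning the log posterior density of a parameter
               vector (log prior + log likelihood of the observed data).
    theta0   : starting parameter vector.
    Returns an (n_iter, dim) array of posterior samples."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_new = log_post(proposal)
        if np.log(rng.random()) < lp_new - lp:   # accept with prob. min(1, ratio)
            theta, lp = proposal, lp_new
        chain[i] = theta                          # keep current state either way
    return chain
```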

  7. Estimating ambiguity preferences and perceptions in multiple prior models: Evidence from the field

    Science.gov (United States)

    Dimmock, Stephen G.; Kouwenberg, Roy; Mitchell, Olivia S.; Peijnenburg, Kim

    2016-01-01

    We develop a tractable method to estimate multiple prior models of decision-making under ambiguity. In a representative sample of the U.S. population, we measure ambiguity attitudes in the gain and loss domains. We find that ambiguity aversion is common for uncertain events of moderate to high likelihood involving gains, but ambiguity seeking prevails for low likelihoods and for losses. We show that choices made under ambiguity in the gain domain are best explained by the α-MaxMin model, with one parameter measuring ambiguity aversion (ambiguity preferences) and a second parameter quantifying the perceived degree of ambiguity (perceptions about ambiguity). The ambiguity aversion parameter α is constant and prior probability sets are asymmetric for low and high likelihood events. The data reject several other models, such as MaxMin and MaxMax, as well as symmetric probability intervals. Ambiguity aversion and the perceived degree of ambiguity are both higher for men and for the college-educated. Ambiguity aversion (but not perceived ambiguity) is also positively related to risk aversion. In the loss domain, we find evidence of reflection, implying that ambiguity aversion for gains tends to reverse into ambiguity seeking for losses. Our model’s estimates for preferences and perceptions about ambiguity can be used to analyze the economic and financial implications of such preferences. PMID:26924890

  9. Joint Multi-Fiber NODDI Parameter Estimation and Tractography using the Unscented Information Filter

    Directory of Open Access Journals (Sweden)

    Yogesh eRathi

    2016-04-01

    Full Text Available Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion and density imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts, along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts.

  10. Short communication: Estimates of genetic parameters for dairy fertility in New Zealand.

    Science.gov (United States)

    Amer, P R; Stachowicz, K; Jenkins, G M; Meier, S

    2016-10-01

    Reproductive performance of dairy cows in a seasonal calving system is especially important as cows are required to achieve a 365-d calving interval. Prior research with a small data set has identified that the genetic evaluation model for fertility could be enhanced by replacing the binary calving rate trait (CR42), which gives the probability of a cow calving within the first 42 d since the planned start of calving at second, third, and fourth calving, with a continuous version, calving season day (CSD), including a heifer calving season day trait expressed at first calving, removing milk yield, retaining a probability of mating trait (PM21), which gives the probability of a cow being mated within the first 21 d from the planned start of mating, and first lactation body condition score (BCS), and including gestation length (GL). The aim of this study was to estimate genetic parameters for the proposed new model using a larger data set and compare these with parameters used in the current system. Heritability estimates for CSD and PM21 ranged from 0.013 to 0.019 and from 0.031 to 0.058, respectively. For the 2 traits that correspond with the ones used in the current genetic evaluation system (mating trait, PM21, and BCS), genetic correlations were lower in this study compared with previous estimates. Genetic correlations between CSD and PM21 across different parities were also lower than the correlations between CR42 and PM21 reported previously. The genetic correlation between heifer CSD and CSD in first parity was 0.66. Estimates of genetic correlations of BCS with CSD were higher than those with PM21. For GL, direct heritability was estimated to be 0.67, maternal heritability was 0.11, and maternal repeatability was 0.22. Direct GL had moderate to high and favorable genetic correlations with evaluated fertility traits, whereas corresponding residual correlations remain low, which makes GL a useful candidate predictor trait for fertility in a multiple trait

  11. Histogram analysis parameters identify multiple associations between DWI and DCE MRI in head and neck squamous cell carcinoma.

    Science.gov (United States)

    Meyer, Hans Jonas; Leifels, Leonard; Schob, Stefan; Garnov, Nikita; Surov, Alexey

    2018-01-01

    Nowadays, multiparametric investigations of head and neck squamous cell carcinoma (HNSCC) are established. These approaches can better characterize tumor biology and behavior. Diffusion weighted imaging (DWI) can, by means of the apparent diffusion coefficient (ADC), quantitatively characterize different tissue compartments. Dynamic contrast-enhanced magnetic resonance imaging (DCE MRI) reflects perfusion and vascularization of tissues. Recently, histogram analysis of different images has emerged as a novel diagnostic approach that can provide more information about tissue heterogeneity. The purpose of this study was to analyze possible associations between DWI and DCE parameters derived from histogram analysis in patients with HNSCC. Overall, 34 patients, 9 women and 25 men, mean age 56.7 ± 10.2 years, with different HNSCC were involved in the study. DWI was obtained using an axial echo planar imaging sequence with b-values of 0 and 800 s/mm². A dynamic T1w DCE sequence after intravenous application of contrast medium was performed for estimation of the following perfusion parameters: volume transfer constant (Ktrans), volume of the extravascular extracellular leakage space (Ve), and diffusion of contrast medium from the extravascular extracellular leakage space back to the plasma (Kep). Both ADC and perfusion parameter maps were processed offline in DICOM format with a custom-made Matlab-based application. Thereafter, polygonal ROIs were manually drawn on the transferred maps on each slice. For every parameter, the mean, maximal, minimal, and median values, as well as the 10th, 25th, 75th, and 90th percentiles, kurtosis, skewness, and entropy were estimated. Correlation analysis identified multiple statistically significant correlations between the investigated parameters. Ve-related parameters correlated well with different ADC values. Especially, percentiles 10 and 75, mode, and median values showed stronger correlations in comparison to other
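    A minimal sketch of how such per-ROI histogram features might be computed from a parameter map is given below; the bin count and the base-2 entropy are assumptions, and this is not the custom Matlab application used in the study.

```python
import numpy as np
from scipy import stats

def histogram_features(roi_values, n_bins=64):
    """Histogram-based features of a parameter map (ADC or a DCE parameter)
    inside an ROI, following the feature list in the abstract."""
    v = np.asarray(roi_values, dtype=float).ravel()
    p10, p25, p75, p90 = np.percentile(v, [10, 25, 75, 90])
    hist, _ = np.histogram(v, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                                 # drop empty bins for the entropy
    return {
        "mean": v.mean(), "min": v.min(), "max": v.max(),
        "median": np.median(v),
        "p10": p10, "p25": p25, "p75": p75, "p90": p90,
        "skewness": stats.skew(v),
        "kurtosis": stats.kurtosis(v),
        "entropy": float(-(p * np.log2(p)).sum()),
    }
```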

  12. Characterization of microcirculation in multiple sclerosis lesions by dynamic texture parameter analysis (DTPA.

    Directory of Open Access Journals (Sweden)

    Rajeev Kumar Verma

    Full Text Available OBJECTIVE: Texture analysis is an alternative method to quantitatively assess MR images. In this study, we introduce dynamic texture parameter analysis (DTPA), a novel technique to investigate the temporal evolution of texture parameters using dynamic susceptibility contrast enhanced (DSCE) imaging. Here, we aim to introduce the method and its application to enhancing lesions (EL), non-enhancing lesions (NEL) and normal appearing white matter (NAWM) in multiple sclerosis (MS). METHODS: We investigated 18 patients with MS and clinically isolated syndrome (CIS), according to the 2010 McDonald criteria, using DSCE imaging at different field strengths (1.5 and 3 Tesla). Tissues of interest (TOIs) were defined within 27 EL, 29 NEL and 37 NAWM areas after normalization, and eight histogram-based texture parameter maps (TPMs) were computed. TPMs quantify the heterogeneity of the TOI. For every TOI, the average, variance, skewness, kurtosis and variance-of-the-variance statistical parameters were calculated. These TOI parameters were further analyzed using one-way ANOVA followed by multiple Wilcoxon rank sum testing corrected for multiple comparisons. RESULTS: Tissue- and time-dependent differences were observed in the dynamics of the computed texture parameters. Sixteen parameters discriminated between EL, NEL and NAWM (pAVG = 0.0005). Significant differences in the DTPA texture maps were found during inflow (52 parameters), outflow (40 parameters) and reperfusion (62 parameters). The strongest discriminators among the TPMs were the variance-related parameters, while the skewness and kurtosis TPMs were in general less sensitive in detecting differences between the tissues. CONCLUSION: DTPA of DSCE image time series revealed characteristic time responses for ELs, NELs and NAWM. This may be further used for a refined quantitative grading of MS lesions during their evolution from acute to chronic state. DTPA discriminates lesions beyond features of enhancement or T2

  13. Power capability evaluation for lithium iron phosphate batteries based on multi-parameter constraints estimation

    Science.gov (United States)

    Wang, Yujie; Pan, Rui; Liu, Chang; Chen, Zonghai; Ling, Qiang

    2018-01-01

    The battery power capability is intimately correlated with the climbing, braking and accelerating performance of the electric vehicles. Accurate power capability prediction can not only guarantee the safety but also regulate driving behavior and optimize battery energy usage. However, the nonlinearity of the battery model is very complex especially for the lithium iron phosphate batteries. Besides, the hysteresis loop in the open-circuit voltage curve is easy to cause large error in model prediction. In this work, a multi-parameter constraints dynamic estimation method is proposed to predict the battery continuous period power capability. A high-fidelity battery model which considers the battery polarization and hysteresis phenomenon is presented to approximate the high nonlinearity of the lithium iron phosphate battery. Explicit analyses of power capability with multiple constraints are elaborated, specifically the state-of-energy is considered in power capability assessment. Furthermore, to solve the problem of nonlinear system state estimation, and suppress noise interference, the UKF based state observer is employed for power capability prediction. The performance of the proposed methodology is demonstrated by experiments under different dynamic characterization schedules. The charge and discharge power capabilities of the lithium iron phosphate batteries are quantitatively assessed under different time scales and temperatures.
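    As a point of reference, the simplest power-capability calculation, before adding the polarization, hysteresis and state-of-energy constraints and the UKF observer described above, limits the discharge current by both a design current limit and a minimum terminal-voltage limit under a plain Rint model. The sketch below is that simplified calculation with hypothetical cell values, not the multi-parameter method of the article.

```python
def discharge_power_capability(ocv, r_int, u_min, i_max_design):
    """Simplified discharge power capability with an Rint model:
    the allowed current is the smaller of the design current limit and the
    current at which the terminal voltage U = OCV - I*R would hit U_min,
    and the power is evaluated at that current."""
    i_volt = (ocv - u_min) / r_int          # current at which U reaches U_min
    i_allow = min(i_max_design, i_volt)
    u_term = ocv - i_allow * r_int
    return u_term * i_allow                 # watts per cell

# Hypothetical LiFePO4 cell values (not from the article).
print(discharge_power_capability(ocv=3.3, r_int=0.002, u_min=2.5, i_max_design=300.0))
```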

  14. Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.

    Science.gov (United States)

    Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B

    2005-06-01

    This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment model dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and visual basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained to a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of NONMEM estimation.
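    The flavor of such a back analysis for the one-compartment IV-bolus case can be shown with the standard non-compartmental relationships CL = Dose/AUC and V = CL/lambda_z. The function below is a minimal sketch under those assumptions and is not the Excel/Solver implementation described in the article.

```python
import math

def one_compartment_from_nca(dose, auc_inf, lambda_z):
    """Convert non-compartmental variables to one-compartment (IV bolus)
    parameters in the spirit of a 'back analysis':
        CL = Dose / AUC_inf   (clearance)
        V  = CL / lambda_z    (volume of distribution)
    Units must be consistent (e.g. mg, h, L)."""
    cl = dose / auc_inf
    v = cl / lambda_z
    return {"CL": cl, "V": v, "k": lambda_z, "t_half": math.log(2) / lambda_z}

# Hypothetical example: 100 mg IV bolus, AUC_inf = 50 mg*h/L, lambda_z = 0.2 1/h.
print(one_compartment_from_nca(100.0, 50.0, 0.2))
```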

  15. A Novel Methodology for Estimating State-Of-Charge of Li-Ion Batteries Using Advanced Parameters Estimation

    Directory of Open Access Journals (Sweden)

    Ibrahim M. Safwat

    2017-11-01

    Full Text Available State-of-charge (SOC) estimation of Li-ion batteries has been the focus of many research studies in previous years. Many articles have discussed estimation of the Li-ion battery's dynamic model parameters using the fixed-forgetting-factor recursive least squares methodology. However, the rate at which each parameter converges to its true value is not taken into consideration, which may lead to poor estimation. This article discusses this issue and proposes two solutions. The first is the use of a variable forgetting factor instead of a fixed one, while the second is defining a vector of forgetting factors, i.e., one factor for each parameter. After parameter estimation, a new idea is proposed to estimate the SOC of the Li-ion battery based on Newton's method. The error percentage and computational cost are also discussed and compared with those of nonlinear Kalman filters. The methodology is applied to a 36 V 30 A Li-ion pack to validate the idea.
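    The starting point of the article, recursive least squares with a forgetting factor, can be sketched as below. The variable-forgetting and vector-forgetting variants it proposes amount to choosing lam per time step or per parameter; the class name, initial covariance and default lam here are assumptions, not the authors' implementation.

```python
import numpy as np

class RLS:
    """Recursive least squares with a (possibly time-varying) forgetting factor.
    theta holds the battery-model parameters; phi is the regressor built from
    measured current/voltage samples."""

    def __init__(self, n_params, p0=1e3):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * p0   # large initial covariance

    def update(self, phi, y, lam=0.98):
        phi = np.asarray(phi, dtype=float)
        err = y - phi @ self.theta                        # prediction error
        K = self.P @ phi / (lam + phi @ self.P @ phi)     # gain
        self.theta = self.theta + K * err
        self.P = (self.P - np.outer(K, phi) @ self.P) / lam
        return err
```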

  16. Availability of thermodynamic system with multiple performance parameters based on vector-universal generating function

    International Nuclear Information System (INIS)

    Cai Qi; Shang Yanlong; Chen Lisheng; Zhao Yuguang

    2013-01-01

    A vector-universal generating function is presented to analyze the availability of a thermodynamic system with multiple performance parameters. The vector-universal generating function of a component's performance is defined, an arithmetic model based on it is derived for the thermodynamic system, and a calculation method is given for the state probability of a multi-state component. Using stochastic simulation of the degradation trends of the multiple factors, the system availability with multiple performance parameters is obtained under composite factors. An example shows that availability results obtained by the binary availability analysis method are somewhat conservative, and that results based on the vector-universal generating function, which account for parameter failure, better reflect the operating characteristics of the thermodynamic system. (authors)

  17. Ultrasound scatter in heterogeneous 3D microstructures: Parameters affecting multiple scattering

    Science.gov (United States)

    Engle, B. J.; Roberts, R. A.; Grandin, R. J.

    2018-04-01

    This paper reports on a computational study of ultrasound propagation in heterogeneous metal microstructures. Random spatial fluctuations in elastic properties over a range of length scales relative to ultrasound wavelength can give rise to scatter-induced attenuation, backscatter noise, and phase front aberration. It is of interest to quantify the dependence of these phenomena on the microstructure parameters, for the purpose of quantifying deleterious consequences on flaw detectability, and for the purpose of material characterization. Valuable tools for estimation of microstructure parameters (e.g. grain size) through analysis of ultrasound backscatter have been developed based on approximate weak-scattering models. While useful, it is understood that these tools display inherent inaccuracy when multiple scattering phenomena significantly contribute to the measurement. It is the goal of this work to supplement weak scattering model predictions with corrections derived through application of an exact computational scattering model to explicitly prescribed microstructures. The scattering problem is formulated as a volume integral equation (VIE) displaying a convolutional Green-function-derived kernel. The VIE is solved iteratively employing FFT-based convolution. Realizations of random microstructures are specified on the micron scale using statistical property descriptions (e.g. grain size and orientation distributions), which are then spatially filtered to provide rigorously equivalent scattering media on a length scale relevant to ultrasound propagation. Scattering responses from ensembles of media representations are averaged to obtain mean and variance of quantities such as attenuation and backscatter noise levels, as a function of microstructure descriptors. The computational approach will be summarized, and examples of application will be presented.

  18. Estimation of G-renewal process parameters as an ill-posed inverse problem

    International Nuclear Information System (INIS)

    Krivtsov, V.; Yevkin, O.

    2013-01-01

    Statistical estimation of G-renewal process parameters is an important estimation problem, which has been considered by many authors. We view this problem from the standpoint of a mathematically ill-posed, inverse problem (the solution is not unique and/or is sensitive to statistical error) and propose a regularization approach specifically suited to the G-renewal process. Regardless of the estimation method, the respective objective function usually involves parameters of the underlying life-time distribution and simultaneously the restoration parameter. In this paper, we propose to regularize the problem by decoupling the estimation of the aforementioned parameters. Using a simulation study, we show that the resulting estimation/extrapolation accuracy of the proposed method is considerably higher than that of the existing methods

  19. Parameter estimation of Lorenz chaotic system using a hybrid swarm intelligence algorithm

    International Nuclear Information System (INIS)

    Lazzús, Juan A.; Rivera, Marco; López-Caraballo, Carlos H.

    2016-01-01

    A novel hybrid swarm intelligence algorithm for chaotic system parameter estimation is presented. For this purpose, the parameter estimation of the Lorenz system is formulated as a multidimensional problem, and a hybrid approach based on particle swarm optimization with ant colony optimization (PSO–ACO) is implemented to solve this problem. Firstly, the performance of the proposed PSO–ACO algorithm is tested on a set of three representative benchmark functions, and the impact of the parameter settings on PSO–ACO efficiency is studied. Secondly, the parameter estimation is converted into an optimization problem on a three-dimensional Lorenz system. Numerical simulations on the Lorenz model and comparisons with results obtained by other algorithms showed that PSO–ACO is a very powerful tool for parameter estimation with high accuracy and low deviations. - Highlights: • PSO–ACO combines particle swarm optimization with ant colony optimization. • This study is the first to use PSO–ACO to estimate parameters of chaotic systems. • The PSO–ACO algorithm can identify the parameters of the three-dimensional Lorenz system with low deviations. • PSO–ACO is a very powerful tool for parameter estimation in other chaotic systems.
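    For concreteness, the optimization formulation mentioned above (though not the PSO-ACO search itself) can be sketched as a fitness function: simulate the Lorenz equations with candidate (sigma, rho, beta) and score the mean squared error against the observed trajectory. The time span, tolerances and synthetic data below are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma, rho, beta):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def fitness(params, t_obs, x_obs, x0):
    """Objective that turns parameter estimation into optimization: mean squared
    error between the observed series and the trajectory simulated with the
    candidate parameters (sigma, rho, beta)."""
    sol = solve_ivp(lorenz, (t_obs[0], t_obs[-1]), x0,
                    t_eval=t_obs, args=tuple(params), rtol=1e-8)
    return np.mean((sol.y - x_obs) ** 2)

# Synthetic "observations" generated from the true parameters (10, 28, 8/3).
t_obs = np.linspace(0, 2, 201)
x0 = [1.0, 1.0, 1.0]
truth = solve_ivp(lorenz, (0, 2), x0, t_eval=t_obs,
                  args=(10.0, 28.0, 8.0 / 3.0), rtol=1e-8).y
print(fitness([10.0, 28.0, 8.0 / 3.0], t_obs, truth, x0))  # ~0 at the truth
```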
