WorldWideScience

Sample records for methodology parameter estimation

  1. A robust methodology for kinetic model parameter estimation for biocatalytic reactions

    DEFF Research Database (Denmark)

    Al-Haque, Naweed; Andrade Santacoloma, Paloma de Gracia; Lima Afonso Neto, Watson

    2012-01-01

    …parameters, which are strongly correlated with each other. State-of-the-art methodologies such as nonlinear regression (using progress curves) or graphical analysis (using initial-rate data, for example, the Lineweaver-Burk plot, Hanes plot or Dixon plot) often incorporate errors in the estimates and rarely lead to globally optimized parameter values. In this article, a robust methodology to estimate parameters for biocatalytic reaction kinetic expressions is proposed. The methodology determines the parameters in a systematic manner by exploiting the best features of several of the current approaches…
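
    A minimal sketch of the contrast drawn above, not the article's methodology: fitting Michaelis-Menten kinetics to noisy initial-rate data by direct nonlinear regression versus a Lineweaver-Burk (double-reciprocal) fit. The kinetic constants and noise level are illustrative assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    Vmax_true, Km_true = 1.2, 0.5                 # assumed "true" kinetic constants
    S = np.linspace(0.05, 5.0, 12)                # substrate concentrations
    v_obs = Vmax_true * S / (Km_true + S) + rng.normal(0.0, 0.02, S.size)

    # Nonlinear regression on the untransformed rate equation
    mm = lambda S, Vmax, Km: Vmax * S / (Km + S)
    popt, _ = curve_fit(mm, S, v_obs, p0=[1.0, 1.0])

    # Lineweaver-Burk: linear regression on reciprocals, which distorts the
    # error structure and typically biases the estimates
    slope, intercept = np.polyfit(1.0 / S, 1.0 / v_obs, 1)
    Vmax_lb, Km_lb = 1.0 / intercept, slope / intercept

    print("nonlinear:", popt, " Lineweaver-Burk:", [Vmax_lb, Km_lb])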

  2. Methodology to estimate parameters of an excitation system based on experimental conditions

    Energy Technology Data Exchange (ETDEWEB)

    Saavedra-Montes, A.J. [Carrera 80 No 65-223, Bloque M8 oficina 113, Escuela de Mecatronica, Universidad Nacional de Colombia, Medellin (Colombia); Calle 13 No 100-00, Escuela de Ingenieria Electrica y Electronica, Universidad del Valle, Cali, Valle (Colombia); Ramirez-Scarpetta, J.M. [Calle 13 No 100-00, Escuela de Ingenieria Electrica y Electronica, Universidad del Valle, Cali, Valle (Colombia); Malik, O.P. [2500 University Drive N.W., Electrical and Computer Engineering Department, University of Calgary, Calgary, Alberta (Canada)

    2011-01-15

    A methodology to estimate the parameters of a potential-source controlled rectifier excitation system model is presented in this paper. The proposed parameter estimation methodology is based on the characteristics of the excitation system. A comparison of two pseudo random binary signals, two sampling periods for each one, and three estimation algorithms is also presented. Simulation results from an excitation control system model and experimental results from an excitation system of a power laboratory setup are obtained. To apply the proposed methodology, the excitation system parameters are identified at two different levels of the generator saturation curve. The results show that it is possible to estimate the parameters of the standard model of an excitation system by recording two signals while the system operates in closed loop with the generator. The normalized sum of squared errors obtained with experimental data is below 10%, and with simulation data below 5%. (author)
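
    A hedged illustration of the general idea (not the authors' procedure or model): exciting a first-order discrete-time system with a crude random binary input standing in for a PRBS, then recovering the coefficients by ordinary least squares. The plant coefficients and noise level are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    a, b = 0.9, 0.5                      # assumed plant: y[k] = a*y[k-1] + b*u[k-1]
    u = np.where(rng.random(500) > 0.5, 1.0, -1.0)   # random binary (PRBS-like) input
    y = np.zeros(500)
    for k in range(1, 500):
        y[k] = a * y[k - 1] + b * u[k - 1] + rng.normal(0, 0.01)

    # Regressor matrix [y[k-1], u[k-1]] -> parameters via least squares
    Phi = np.column_stack([y[:-1], u[:-1]])
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    print("estimated a, b:", theta)      # close to (0.9, 0.5)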

  3. A Novel Methodology for Estimating State-Of-Charge of Li-Ion Batteries Using Advanced Parameters Estimation

    Directory of Open Access Journals (Sweden)

    Ibrahim M. Safwat

    2017-11-01

    Full Text Available State-of-charge (SOC) estimation of Li-ion batteries has been the focus of many research studies in recent years. Many articles have discussed estimation of the parameters of the Li-ion battery's dynamic model, where the fixed-forgetting-factor recursive least squares estimation methodology is employed. However, the rate at which each parameter converges to its true value is not taken into consideration, which may lead to poor estimation. This article discusses this issue and proposes two solutions. The first is the use of a variable forgetting factor instead of a fixed one, while the second defines a vector of forgetting factors, i.e. one factor for each parameter. After parameter estimation, a new idea is proposed to estimate the state-of-charge (SOC) of the Li-ion battery based on Newton's method. The error percentage and computational cost are also discussed and compared with those of nonlinear Kalman filters. This methodology is applied to a 36 V 30 A Li-ion pack to validate the idea.
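
    A minimal sketch of recursive least squares with a variable forgetting factor, the first remedy discussed above; the error-driven forgetting-factor schedule and the toy linear-in-parameters model are illustrative assumptions, not the paper's battery model.

    import numpy as np

    def rls_variable_ff(Phi, y, lam_min=0.95, lam_max=0.999):
        """Estimate theta in y[k] = Phi[k] @ theta with RLS.
        The forgetting factor adapts to the prediction error:
        larger errors -> smaller lambda -> faster tracking."""
        n = Phi.shape[1]
        theta = np.zeros(n)
        P = 1e3 * np.eye(n)
        for phi, yk in zip(Phi, y):
            e = yk - phi @ theta                      # a priori prediction error
            lam = lam_max - (lam_max - lam_min) * min(1.0, abs(e))
            K = P @ phi / (lam + phi @ P @ phi)       # gain vector
            theta = theta + K * e
            P = (P - np.outer(K, phi @ P)) / lam
        return theta

    # Toy usage: identify two parameters of a linear-in-parameters model
    rng = np.random.default_rng(2)
    Phi = rng.normal(size=(400, 2))
    y = Phi @ np.array([0.8, -0.3]) + rng.normal(0, 0.05, 400)
    print(rls_variable_ff(Phi, y))      # close to (0.8, -0.3)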

  4. A Consistent Methodology Based Parameter Estimation for a Lactic Acid Bacteria Fermentation Model

    DEFF Research Database (Denmark)

    Spann, Robert; Roca, Christophe; Kold, David

    2017-01-01

    Lactic acid bacteria are used in many industrial applications, e.g. as starter cultures in the dairy industry or as probiotics, and research on their cell production is much needed. A first-principles kinetic model was developed to describe and understand the biological, physical, and chemical mechanisms in a lactic acid bacteria fermentation. We present here a consistent methodology-based approach to parameter estimation for a lactic acid fermentation. Initially, only a knowledge-based guess of the parameters was available, and an initial estimation of the complete set of parameters was performed in order to get a good model fit to the data. However, not all parameters are identifiable with the given data set and model structure. Sensitivity, identifiability, and uncertainty analyses were completed and a relevant identifiable subset of parameters was determined for a new…
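
    A small sketch of the kind of identifiability check mentioned above, not the authors' analysis: build a finite-difference sensitivity matrix of a model output with respect to the parameters and inspect column correlations and conditioning. The Monod-type rate expression and parameter values are assumptions for illustration.

    import numpy as np

    def model(theta, t):
        mu_max, Ks = theta
        s = 10.0 * np.exp(-0.3 * t)         # assumed substrate trajectory
        return mu_max * s / (Ks + s)        # specific growth rate output

    t = np.linspace(0.0, 10.0, 50)
    theta0 = np.array([0.4, 1.5])

    # Finite-difference sensitivities d(output)/d(theta_i)
    h = 1e-6
    S = np.zeros((t.size, theta0.size))
    for i in range(theta0.size):
        step = np.zeros_like(theta0)
        step[i] = h
        S[:, i] = (model(theta0 + step, t) - model(theta0, t)) / h

    # Near-unit column correlation signals pairwise non-identifiability;
    # a large condition number signals a poorly identifiable subset.
    print("column correlation:\n", np.corrcoef(S.T))
    print("condition number of S:", np.linalg.cond(S))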

  5. A robust methodology for modal parameters estimation applied to SHM

    Science.gov (United States)

    Cardoso, Rharã; Cury, Alexandre; Barbosa, Flávio

    2017-10-01

    The subject of structural health monitoring has drawn increasing attention in recent years. Many vibration-based techniques aimed at detecting small structural changes or even damage have been developed or enhanced in successive studies. Lately, several studies have focused on the use of raw dynamic data to assess information about structural condition. Despite this trend and much skepticism, many methods still rely on the use of modal parameters as fundamental data for damage detection. Therefore, it is of utmost importance that modal identification procedures are performed with a sufficient level of precision and automation. To fulfill these requirements, this paper presents a novel automated time-domain methodology to identify modal parameters based on a two-step clustering analysis. The first step consists in clustering mode estimates from parametric models of different orders, as usually presented in stabilization diagrams. In an automated manner, this first clustering analysis indicates which estimates correspond to physical modes. To circumvent the detection of spurious modes or the loss of physical ones, a second clustering step is then performed, consisting in the data mining of information gathered from the first step. To attest the robustness and efficiency of the proposed methodology, numerically generated signals as well as experimental data obtained from a simply supported beam tested in the laboratory and from a railway bridge are utilized. The results proved more robust and accurate than those obtained from methods based on one-step clustering analysis.
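
    A toy illustration of the first clustering step described above (the paper's two-step algorithm is richer): pool synthetic (frequency, damping) mode estimates from models of increasing order and cluster them, so that densely populated clusters indicate physical modes. All data here are synthetic assumptions.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    rng = np.random.default_rng(3)
    # Two "physical" modes recovered at many model orders, plus scattered spurious ones
    phys = np.vstack([
        rng.normal([2.5, 0.01], [0.02, 0.002], size=(30, 2)),
        rng.normal([7.1, 0.02], [0.05, 0.003], size=(30, 2)),
    ])
    spurious = np.column_stack([rng.uniform(0, 10, 15), rng.uniform(0, 0.1, 15)])
    estimates = np.vstack([phys, spurious])

    Z = linkage(estimates, method="single")
    labels = fcluster(Z, t=0.2, criterion="distance")
    counts = np.bincount(labels)
    # Clusters populated across many model orders are taken as physical modes
    print("physical-mode clusters:", np.flatnonzero(counts >= 20))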

  6. Methodology for Evaluating Safety System Operability using Virtual Parameter Network

    International Nuclear Information System (INIS)

    Park, Sukyoung; Heo, Gyunyoung; Kim, Jung Taek; Kim, Tae Wan

    2014-01-01

    KAERI (Korea Atomic Energy Research Institute) and UTK (University of Tennessee Knoxville) are working on the I-NERI project to address this problem. This research, conducted with KAERI, proposes a methodology that provides an alternative signal when the reliability of some instrumentation cannot be guaranteed. The proposed methodology assumes that several instruments are working normally under the power supply condition, because instrumentation survivability itself is not considered. The concept of a Virtual Parameter Network (VPN) is used to identify the associations between plant parameters. This paper is an extended version of a paper submitted at the last KNS meeting, with a changed methodology and an added case study. In the previous research, an Artificial Neural Network (ANN) inferential technique was used as the estimation model, but that model produced a different estimate each time owing to random bias. Therefore, an Auto-Associative Kernel Regression (AAKR) model, which has the same number of inputs and outputs, is used for estimation here. In addition, whereas the importance measure in the previous method depended on the estimation model, the importance measure of the improved method is independent of the estimation model. In this study, we proposed a methodology to identify the internal state of a power plant when a severe accident happens, and it has been validated through a case study. An SBLOCA, which contributes strongly to severe accidents, is considered as the initiating event, and the relationships among parameters are identified. The VPN is able to identify which parameters have to be observed and which parameters can substitute for a missing parameter when some instruments fail in a severe accident. Through the results of this study we have identified that parameters 2, 3 and 4 commonly have high connectivity, while…

  8. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    Science.gov (United States)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating the parameters of a differential equation is to minimize the error between the observations and their estimates, where the estimates are produced from a forward solution (numerical or analytical) of the differential equation for an assumed set of parameters. Parameter estimation using the conventional approach incurs a high computational cost and requires setting up initial and boundary conditions, as well as forming difference equations when the forward solution is obtained numerically. Gaussian-process-based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting up initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. The results show that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
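
    A hedged sketch of the gradient-matching idea, not the paper's GP-based AGM, with a logistic ODE standing in for the Richards equation: smooth the noisy state with a spline, then choose parameters so that the model right-hand side matches the spline derivative, with no ODE solver and no initial or boundary conditions. All values are assumptions.

    import numpy as np
    from scipy.interpolate import UnivariateSpline
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    r_true, K_true = 0.8, 10.0
    t = np.linspace(0.0, 10.0, 60)
    x = K_true / (1 + (K_true - 1) * np.exp(-r_true * t))   # logistic solution
    x_obs = x + rng.normal(0, 0.1, t.size)

    spl = UnivariateSpline(t, x_obs, s=1.0)                 # data smoother
    dx_dt = spl.derivative()(t)                             # derivative of the smooth

    def loss(theta):
        # Mismatch between smoothed derivative and model right-hand side
        r, K = theta
        return np.sum((dx_dt - r * spl(t) * (1 - spl(t) / K)) ** 2)

    res = minimize(loss, x0=[0.5, 5.0], method="Nelder-Mead")
    print("estimated (r, K):", res.x)     # close to (0.8, 10.0)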

  9. A new Bayesian recursive technique for parameter estimation

    Science.gov (United States)

    Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

    2006-08-01

    The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.

  10. A methodology for modeling photocatalytic reactors for indoor pollution control using previously estimated kinetic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Passalia, Claudio; Alfano, Orlando M. [INTEC - Instituto de Desarrollo Tecnologico para la Industria Quimica, CONICET - UNL, Gueemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingenieria y Ciencias Hidricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina); Brandi, Rodolfo J., E-mail: rbrandi@santafe-conicet.gov.ar [INTEC - Instituto de Desarrollo Tecnologico para la Industria Quimica, CONICET - UNL, Gueemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingenieria y Ciencias Hidricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina)

    2012-04-15

    Highlights: • Indoor pollution control via photocatalytic reactors. • Scaling-up methodology based on previously determined mechanistic kinetics. • Radiation interchange model between catalytic walls using configuration factors. • Modeling and experimental validation of a complex geometry photocatalytic reactor. - Abstract: A methodology is developed for modeling photocatalytic reactors for application in indoor air pollution control. The methodology implies, firstly, the determination of intrinsic reaction kinetics for the removal of formaldehyde. This is achieved by means of a simple-geometry, continuous reactor operating under a kinetic control regime at steady state. The kinetic parameters were estimated from experimental data by means of a nonlinear optimization algorithm. The second step was the application of the obtained kinetic parameters to a very different photoreactor configuration. In this case, the reactor is a corrugated-wall type using nanosize TiO2 as catalyst, irradiated by UV lamps that provide a spatially uniform radiation field. The radiative transfer within the reactor was modeled through a superficial emission model for the lamps, the ray tracing method and the computation of view factors. The velocity and concentration fields were evaluated by means of a commercial CFD tool (Fluent 12) into which the radiation model was introduced externally. The model predictions were compared with experiments in a corrugated-wall, bench-scale reactor constructed in the laboratory. The overall pollutant conversion showed good agreement between model predictions and experiments, with a root mean square error of less than 4%.

  11. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    Science.gov (United States)

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between the average of resampled conventional noncentrality parameter estimates and their sample counterpart. The…

  12. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.

  13. Perception-oriented methodology for robust motion estimation design

    NARCIS (Netherlands)

    Heinrich, A.; Vleuten, van der R.J.; Haan, de G.

    2014-01-01

    Optimizing a motion estimator (ME) for picture rate conversion is challenging. This is because there are many types of MEs and, within each type, many parameters, which makes subjective assessment of all the alternatives impractical. To solve this problem, we propose an automatic design methodology

  14. A Comprehensive Methodology for Development, Parameter Estimation, and Uncertainty Analysis of Group Contribution Based Property Models -An Application to the Heat of Combustion

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Marcarie, Camille; Abildskov, Jens

    2016-01-01

    A rigorous methodology is developed that addresses numerical and statistical issues when developing group contribution (GC) based property models, such as regression methods, optimization algorithms, performance statistics, outlier treatment, parameter identifiability, and uncertainty of the prediction. The methodology is evaluated through development of a GC method for the prediction of the heat of combustion (ΔHco) for pure components. The results showed that robust regression leads to the best performance statistics for parameter estimation. The bootstrap method is found to be a valid alternative… Owing to identifiability issues, reporting of the 95% confidence intervals of the predicted property values should be mandatory, as opposed to reporting only single-value predictions, currently the norm in the literature. Moreover, inclusion of higher-order groups (additional parameters) does not always lead to improved…

  15. METHODOLOGY FOR THE ESTIMATION OF PARAMETERS OF THE MODIFIED BOUC-WEN MODEL

    Directory of Open Access Journals (Sweden)

    Tomasz HANISZEWSKI

    2015-03-01

    Full Text Available The Bouc-Wen model is a theoretical formulation that can reproduce the real hysteresis loop of the modeled object. One such object is a wire rope, which is part of the equipment of a crane lifting mechanism. The modified version of the model adopted here has nine parameters, and determining such a number of parameters is a complex and problematic issue. This article presents the identification methodology and sample results of numerical simulations. The results were compared with data obtained from laboratory tests of ropes [3]; on this basis it was found that the results agree and that the model can be applied to dynamic systems containing wire ropes in their structures [4].
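
    A minimal simulation sketch of the standard Bouc-Wen hysteresis element driven by a sinusoidal displacement, for orientation only; the coefficients below are assumptions, not the identified nine parameters of the modified model in the article.

    import numpy as np
    from scipy.integrate import solve_ivp

    A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0       # assumed Bouc-Wen coefficients

    def bouc_wen(t, z):
        # z' = A*x' - beta*|x'|*|z|^(n-1)*z - gamma*x'*|z|^n, with x(t) = sin(t)
        xdot = np.cos(t)
        return A * xdot - beta * abs(xdot) * abs(z) ** (n - 1) * z - gamma * xdot * abs(z) ** n

    sol = solve_ivp(bouc_wen, (0.0, 20.0), y0=[0.0], max_step=0.01)
    # The hysteretic variable z(t) traces a loop when plotted against x(t)
    print("z range:", sol.y[0].min(), sol.y[0].max())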

  16. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  17. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    Science.gov (United States)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: the infiltration parameters are obtained in the first stage and the unit hydrograph ordinates are estimated in the second. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model first obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during the subsequent generations of the genetic algorithm required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated by using two example problems. The evaluation shows that the model is superior, simple in concept and also has potential for field application.

  18. How to Select the most Relevant Roughness Parameters of a Surface: Methodology Research Strategy

    Science.gov (United States)

    Bobrovskij, I. N.

    2018-01-01

    In this paper, the foundations are laid for a new methodology that addresses the conflict between the huge number of surface-texture parameters introduced by new standards and the much smaller set of parameters that can practically be measured, with the aim of reducing measurement complexity. At the moment, there is no single assessment of the importance of a parameter. Applying the presented methodology to the surfaces of aerospace-cluster components creates the necessary foundation to develop a scientific estimation of surface-texture parameters and to obtain material for investigators of the chosen technological procedure. The methods necessary for further work are selected, creating a fundamental basis and developing the assessment of the significance of microgeometry parameters as a scientific direction.

  19. Nonparametric estimation of location and scale parameters

    KAUST Repository

    Potgieter, C.J.

    2012-12-01

    Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal assumptions regarding the form of the distribution functions of X and Y. We discuss an approach to the estimation problem that is based on asymptotic likelihood considerations. Our results enable us to provide a methodology that can be implemented easily and which yields estimators that are often near optimal when compared to fully parametric methods. We evaluate the performance of the estimators in a series of Monte Carlo simulations. © 2012 Elsevier B.V. All rights reserved.
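
    A simple distribution-free illustration of location-scale estimation (not the authors' asymptotic-likelihood estimator): if Y has the same distribution as μ + σX, regressing empirical quantiles of Y on those of X recovers μ and σ without assuming a parametric family. The base distribution and true parameters are assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    X = rng.standard_t(df=5, size=2000)           # unknown base distribution
    mu_true, sigma_true = 1.5, 2.0
    Y = mu_true + sigma_true * rng.standard_t(df=5, size=2000)

    p = np.linspace(0.05, 0.95, 19)
    qx, qy = np.quantile(X, p), np.quantile(Y, p)
    # Fit Q_Y(p) = mu + sigma * Q_X(p) by least squares
    sigma_hat, mu_hat = np.polyfit(qx, qy, 1)
    print("mu, sigma estimates:", mu_hat, sigma_hat)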

  20. Simultaneous identification of unknown groundwater pollution sources and estimation of aquifer parameters

    Science.gov (United States)

    Datta, Bithin; Chakrabarty, Dibakar; Dhar, Anirban

    2009-09-01

    Pollution source identification is a frequently encountered problem. In the absence of prior information about flow and transport parameters, the performance of source identification models depends on the accuracy with which these parameters are estimated. A methodology is developed for simultaneous pollution source identification and parameter estimation in groundwater systems. The groundwater flow and transport simulator is linked to the nonlinear optimization model as an external module. The simulator defines the flow and transport processes, and serves as a binding equality constraint. The Jacobian matrix, which determines the search direction in the nonlinear optimization model, links the groundwater flow-transport simulator and the optimization method. Performance of the proposed methodology using spatiotemporal hydraulic head values and pollutant concentration measurements is evaluated by solving illustrative problems. Two different decision model formulations are developed, and their computational efficiency is compared using two nonlinear optimization algorithms. The proposed methodology addresses some of the computational limitations of the embedded optimization technique, which embeds the discretized flow and transport equations as equality constraints for optimization. Solution results obtained are also found to be better than those obtained using the embedded optimization technique. The performance evaluations reported here demonstrate the potential applicability of the developed methodology for a fairly large aquifer study area with multiple unknown pollution sources.

  1. Parameter estimation and determinability analysis applied to Drosophila gap gene circuits

    Directory of Open Access Journals (Sweden)

    Jaeger Johannes

    2008-09-01

    Full Text Available Background: Mathematical modeling of real-life processes often requires the estimation of unknown parameters. Once the parameters are found by means of optimization, it is important to assess the quality of the parameter estimates, especially if parameter values are used to draw biological conclusions from the model. Results: In this paper we describe how the quality of parameter estimates can be analyzed. We apply our methodology to assess parameter determinability for gene circuit models of the gap gene network in early Drosophila embryos. Conclusion: Our analysis shows that none of the parameters of the considered model can be determined individually with reasonable accuracy due to correlations between parameters. Therefore, the model cannot be used as a tool to infer quantitative regulatory weights. On the other hand, our results show that it is still possible to draw reliable qualitative conclusions on the regulatory topology of the gene network. Moreover, it improves previous analyses of the same model by allowing us to identify those interactions for which qualitative conclusions are reliable, and those for which they are ambiguous.

  2. Selection of meteorological parameters affecting rainfall estimation using neuro-fuzzy computing methodology

    Science.gov (United States)

    Hashim, Roslan; Roy, Chandrabhushan; Motamedi, Shervin; Shamshirband, Shahaboddin; Petković, Dalibor; Gocic, Milan; Lee, Siew Cheng

    2016-05-01

    Rainfall is a complex atmospheric process that varies over time and space. Researchers have used various empirical and numerical methods to enhance estimation of rainfall intensity. In this study we developed a novel prediction model, with emphasis on accuracy, to identify the meteorological parameters that most significantly affect rainfall. For this, we used five input parameters: wet day frequency (dwet), vapor pressure (e̅a), maximum and minimum air temperatures (Tmax and Tmin), and cloud cover (cc). The data were obtained from the Indian Meteorological Department for Patna city, Bihar, India. Further, a type of soft-computing method known as the adaptive neuro-fuzzy inference system (ANFIS) was applied to the available data. In this respect, the observation data from 1901 to 2000 were employed for testing, validating, and estimating monthly rainfall via the simulated model. In addition, the ANFIS process for variable selection was implemented to detect the predominant variables affecting the rainfall prediction. Finally, the performance of the model was compared to other soft-computing approaches, including the artificial neural network (ANN), support vector machine (SVM), extreme learning machine (ELM), and genetic programming (GP). The results revealed that ANN, ELM, ANFIS, SVM, and GP had R2 of 0.9531, 0.9572, 0.9764, 0.9525, and 0.9526, respectively. Therefore, we conclude that ANFIS is the best of these methods for predicting monthly rainfall. Moreover, dwet was found to be the most influential parameter for rainfall prediction and the best single predictor. This study also identified the sets of two and three meteorological parameters that give the best predictions.

  3. Errors and parameter estimation in precipitation-runoff modeling: 1. Theory

    Science.gov (United States)

    Troutman, Brent M.

    1985-01-01

    Errors in complex conceptual precipitation-runoff models may be analyzed by placing them into a statistical framework. This amounts to treating the errors as random variables and defining the probabilistic structure of the errors. By using such a framework, a large array of techniques, many of which have been presented in the statistical literature, becomes available to the modeler for quantifying and analyzing the various sources of error. A number of these techniques are reviewed in this paper, with special attention to the peculiarities of hydrologic models. Known methodologies for parameter estimation (calibration) are particularly applicable for obtaining physically meaningful estimates and for explaining how bias in runoff prediction caused by model error and input error may contribute to bias in parameter estimation.

  4. Estimation of common cause failure parameters with periodic tests

    Energy Technology Data Exchange (ETDEWEB)

    Barros, Anne [Institut Charles Delaunay - Universite de technologie de Troyes - FRE CNRS 2848, 12, rue Marie Curie - BP 2060 -10010 Troyes cedex (France)], E-mail: anne.barros@utt.fr; Grall, Antoine [Institut Charles Delaunay - Universite de technologie de Troyes - FRE CNRS 2848, 12, rue Marie Curie - BP 2060 -10010 Troyes cedex (France); Vasseur, Dominique [Electricite de France, EDF R and D - Industrial Risk Management Department 1, av. du General de Gaulle- 92141 Clamart (France)

    2009-04-15

    In the specific case of safety systems, CCF parameter estimators for standby components depend on the periodic test schemes. Classically, the testing schemes are either staggered (alternation of tests on redundant components) or non-staggered (all components are tested at the same time). In reality, periodic test schemes performed on safety components are more complex and combine staggered tests, when the plant is in operation, with non-staggered tests during maintenance and refueling outage periods of the installation. Moreover, the CCF parameter estimators described in the US literature are derived in a way consistent with US Technical Specifications constraints that do not apply to French Nuclear Power Plants for staggered tests on standby components. Given these issues, the evaluation of CCF parameters from the operating feedback data available within EDF requires the development of methodologies that integrate the specificities of the testing schemes. This paper formally proposes a solution for the estimation of CCF parameters given two distinct difficulties, related respectively to a mixed testing scheme and to consistency with EDF's specific practices, which induce systematic non-simultaneity of the observed failures in a staggered testing scheme.

  5. Optomechanical parameter estimation

    International Nuclear Information System (INIS)

    Ang, Shan Zheng; Tsang, Mankei; Harris, Glen I; Bowen, Warwick P

    2013-01-01

    We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér–Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation–maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér–Rao bound most closely. Our analytic results are envisioned to be valuable to optomechanical experiment design, while the EM algorithm, with its ability to estimate most of the system parameters, is envisioned to be useful for optomechanical sensing, atomic magnetometry and fundamental tests of quantum mechanics. (paper)

  6. CTER—Rapid estimation of CTF parameters with error assessment

    Energy Technology Data Exchange (ETDEWEB)

    Penczek, Pawel A., E-mail: Pawel.A.Penczek@uth.tmc.edu [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Fang, Jia [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Li, Xueming; Cheng, Yifan [The Keck Advanced Microscopy Laboratory, Department of Biochemistry and Biophysics, University of California, San Francisco, CA 94158 (United States); Loerke, Justus; Spahn, Christian M.T. [Institut für Medizinische Physik und Biophysik, Charité – Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin (Germany)

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user it is necessary to provide an assessment of the errors of fitted parameters values. In this work we introduce CTER, a CTF parameters estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. - Highlights: • We describe methodology for estimation of CTF parameters with error assessment. • Error estimates provide means for automated elimination of inferior micrographs. • High computational efficiency allows real-time monitoring of EM data quality. • Accurate CTF estimation yields structure of the 80S human ribosome at 3.85 Å.

  7. Methodology for generating waste volume estimates

    International Nuclear Information System (INIS)

    Miller, J.Q.; Hale, T.; Miller, D.

    1991-09-01

    This document describes the methodology that will be used to calculate waste volume estimates for site characterization and remedial design/remedial action activities at each of the DOE Field Office, Oak Ridge (DOE-OR) facilities. This standardized methodology is designed to ensure consistency in waste estimating across the various sites and organizations that are involved in environmental restoration activities. The criteria and assumptions that are provided for generating these waste estimates will be implemented across all DOE-OR facilities and are subject to change based on comments received and actual waste volumes measured during future sampling and remediation activities. 7 figs., 8 tabs

  8. An Estimator of Heavy Tail Index through the Generalized Jackknife Methodology

    Directory of Open Access Journals (Sweden)

    Weiqi Liu

    2014-01-01

    Full Text Available In practice, sometimes the data can be divided into several blocks but only a few of the largest observations within each block are available to estimate the heavy tail index. To address this problem, we propose a new class of estimators through the Generalized Jackknife methodology based on Qi's estimator (2010). These estimators are proved to be asymptotically normal under suitable conditions. Compared to Hill's estimator and Qi's estimator, our new estimator has better asymptotic efficiency in terms of the minimum mean squared error, for a wide range of the second-order shape parameters. For finite samples, our new estimator still compares favorably to Hill's estimator and Qi's estimator, providing stable sample paths as a function of the number of blocks into which the sample is divided, smaller estimation bias, and smaller MSE.
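
    A sketch of the classical Hill estimator referenced above, as a baseline of the type the article's Generalized Jackknife estimator builds on; the Pareto sample with tail index α = 2 and the choice of k are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(6)
    alpha = 2.0
    x = rng.pareto(alpha, size=5000) + 1.0        # Pareto(alpha) sample, x_m = 1

    def hill(x, k):
        """Hill estimate of the extreme value index 1/alpha from the k
        largest order statistics."""
        xs = np.sort(x)[::-1]
        return np.mean(np.log(xs[:k])) - np.log(xs[k])

    k = 200
    print("tail index alpha ~", 1.0 / hill(x, k))   # close to 2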

  9. Accuracy and sensitivity analysis on seismic anisotropy parameter estimation

    Science.gov (United States)

    Yan, Fuyong; Han, De-Hua

    2018-04-01

    There is significant uncertainty in measuring the Thomsen parameter δ in the laboratory, even when the dimensions and orientations of the rock samples are known. It is expected that more challenges will be encountered in estimating the seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and its sensitivity to the source-receiver offset, vertical interval velocity error and time picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread length. However, the method is extremely sensitive to the time picking error caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for the deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible for an isotropic formation to be misinterpreted as a strongly anisotropic formation. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.

  10. A model-based initial guess for estimating parameters in systems of ordinary differential equations.

    Science.gov (United States)

    Dattner, Itai

    2015-12-01

    The inverse problem of parameter estimation from noisy observations is a major challenge in statistical inference for dynamical systems. Parameter estimation is usually carried out by optimizing some criterion function over the parameter space. Unless the optimization process starts with a good initial guess, the estimation may take an unreasonable amount of time, and may converge to local solutions, if at all. In this article, we introduce a novel technique for generating good initial guesses that can be used by any estimation method. We focus on the fairly general and often applied class of systems linear in the parameters. The new methodology bypasses numerical integration and can handle partially observed systems. We illustrate the performance of the method using simulations and apply it to real data. © 2015, The International Biometric Society.
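
    A hedged sketch of the integration-free initial guess for systems linear in the parameters, in the spirit described above (not the article's exact estimator): approximate the state derivative numerically, then solve a single linear least-squares problem. The logistic-type system and its coefficients are assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp

    a_true, b_true = 1.0, 0.4
    rhs = lambda t, x: [a_true * x[0] - b_true * x[0] ** 2]   # linear in (a, b)
    t = np.linspace(0.0, 8.0, 80)
    x = solve_ivp(rhs, (t[0], t[-1]), [0.1], t_eval=t).y[0]
    x_obs = x + np.random.default_rng(7).normal(0, 0.005, t.size)

    dxdt = np.gradient(x_obs, t)                  # crude derivative estimate
    Phi = np.column_stack([x_obs, -x_obs ** 2])   # dx/dt = a*x - b*x^2 = Phi @ (a, b)
    theta0, *_ = np.linalg.lstsq(Phi, dxdt, rcond=None)
    print("initial guess (a, b):", theta0)        # feed this to a full estimator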

  11. Applied parameter estimation for chemical engineers

    CERN Document Server

    Englezos, Peter

    2000-01-01

    Formulation of the parameter estimation problem; computation of parameters in linear models-linear regression; Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynam

  12. Improved Estimates of Thermodynamic Parameters

    Science.gov (United States)

    Lawson, D. D.

    1982-01-01

    Techniques are refined for estimating the heat of vaporization and other parameters from molecular structure. Using a parabolic equation with three adjustable parameters, the heat of vaporization can be used to estimate the boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by the improved method and compared with previously reported values. The technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.

  13. The Application of Best Estimate and Uncertainty Analysis Methodology to Large LOCA Power Pulse in a CANDU 6 Reactor

    International Nuclear Information System (INIS)

    Abdul-Razzak, A.; Zhang, J.; Sills, H.E.; Flatt, L.; Jenkins, D.; Wallace, D.J.; Popov, N.

    2002-01-01

    The paper briefly describes a best estimate plus uncertainty analysis (BE+UA) methodology and presents its prototype application to the power pulse phase of a limiting large Loss-of-Coolant Accident (LOCA) for a CANDU 6 reactor fuelled with CANFLEX® fuel. The methodology is consistent with and builds on world practice. The analysis is divided into two phases to focus on the dominant parameters for each phase and to allow for the consideration of all identified highly ranked parameters in the statistical analysis and response surface fits for margin parameters. The objective of this analysis is to quantify improvements in predicted safety margins under best estimate conditions. (authors)

  14. Safety analysis methodology with assessment of the impact of the prediction errors of relevant parameters

    International Nuclear Information System (INIS)

    Galia, A.V.

    2011-01-01

    The best estimate plus uncertainty approach (BEAU) requires the use of extensive resources and is therefore usually applied to cases in which the available safety margin obtained with a conservative methodology can be questioned. Outside the BEAU methodology, there is no clear approach for dealing with the uncertainties resulting from prediction errors in the safety analyses performed for licensing submissions. However, the regulatory document RD-310 mentions that the analysis method shall account for uncertainties in the analysis data and models. A possible approach, simple and reasonable and representing only the author's views, is presented to take into account the impact of prediction errors and other uncertainties when performing safety analysis in line with regulatory requirements. The approach proposes taking into account the prediction error of relevant parameters. Relevant parameters would be those plant parameters that are surveyed and are used to initiate the action of a mitigating system, or those that are representative of the most challenging phenomena for the integrity of a fission barrier. Examples of the application of the methodology are presented, involving a comparison between the results of the new approach and a best estimate calculation during the blowdown phase for two small breaks in a generic CANDU 6 station. The calculations are performed with the CATHENA computer code. (author)

  15. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing computational power, several complex optimization algorithms have emerged, but none of them yields a unique, very best parameter vector. The parameters of fitted hydrological models depend upon the input data, and the quality of the input data cannot be assured, as there may be measurement errors in both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on the parameters, stochastically generated synthetic measurement errors were applied to the observed discharge and temperature data. The model was calibrated with this modified data and the effect of the measurement errors on the parameters was analysed. It was found that the measurement errors have a significant effect on the best-performing parameter vector: the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of each of the N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used for this study). Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
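
    A rough Monte Carlo approximation of Tukey's half-space depth, the notion used above to rank parameter vectors; the 2-D parameter cloud and the number of random directions are assumptions for illustration (the exact depth minimizes over all directions).

    import numpy as np

    rng = np.random.default_rng(8)
    cloud = rng.normal(size=(500, 2))             # stand-in parameter vectors

    def halfspace_depth(p, cloud, n_dir=500, rng=rng):
        """Smallest fraction of the cloud on one side of a hyperplane
        through p, minimized over random directions."""
        u = rng.normal(size=(n_dir, cloud.shape[1]))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        proj = (cloud - p) @ u.T                  # signed projections
        frac = np.minimum((proj >= 0).mean(0), (proj <= 0).mean(0))
        return frac.min()

    print("depth at center:", halfspace_depth(np.zeros(2), cloud))        # near 0.5
    print("depth at edge:  ", halfspace_depth(np.array([3.0, 3.0]), cloud))  # near 0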

  16. Estimation of effect of hydrogen on the parameters of magnetoacoustic emission signals

    Science.gov (United States)

    Skalskyi, Valentyn; Stankevych, Olena; Dubytskyi, Olexandr

    2018-05-01

    The features of the magnetoacoustic emission (MAE) signals generated during magnetization of structural steels with different degrees of hydrogenation were investigated by means of the wavelet transform. The dominant frequency ranges of the MAE signals at different magnetic field strengths were determined using the Discrete Wavelet Transform (DWT), and the energy and spectral parameters of the MAE signals were determined using the Continuous Wavelet Transform (CWT). Characteristic differences in the local maxima of the signals with respect to energy, bandwidth, duration and frequency were found. A methodology is proposed for assessing the state of local degradation of materials from the parameters of the wavelet transform of MAE signals. The methodology was validated on structural steels of oil and gas pipelines after long-term service.

  17. Aerodynamic Parameters of High Performance Aircraft Estimated from Wind Tunnel and Flight Test Data

    Science.gov (United States)

    Klein, Vladislav; Murphy, Patrick C.

    1998-01-01

    A concept of system identification applied to high performance aircraft is introduced, followed by a discussion of the identification methodology. Special emphasis is given to model postulation using time-invariant and time-dependent aerodynamic parameters, model structure determination, and parameter estimation using ordinary least squares and mixed estimation methods. At the same time, problems of data collinearity detection and assessment are discussed. These parts of the methodology are demonstrated in examples using flight data of the X-29A and X-31A aircraft. In the third example, wind tunnel oscillatory data of the F-16XL model are used. A strong dependence of these data on frequency led to the development of models with unsteady aerodynamic terms in the form of indicial functions. The paper is completed by concluding remarks.

  18. Comparison of sampling methodologies and estimation of population parameters for a temporary fish ectoparasite

    Directory of Open Access Journals (Sweden)

    J.M. Artim

    2016-08-01

    Full Text Available Characterizing spatio-temporal variation in the density of organisms in a community is a crucial part of ecological study. However, doing so for small, motile, cryptic species presents multiple challenges, especially where multiple life history stages are involved. Gnathiid isopods are ecologically important marine ectoparasites, micropredators that live in substrate for most of their lives, emerging only once during each juvenile stage to feed on fish blood. Many gnathiid species are nocturnal and most have distinct substrate preferences. Studies of gnathiid use of habitat, exploitation of hosts, and population dynamics have used various trap designs to estimate rates of gnathiid emergence, study sensory ecology, and identify host susceptibility. In the studies reported here, we compare and contrast the performance of emergence, fish-baited and light trap designs, outline the key features of these traps, and determine some life cycle parameters derived from trap counts for the Eastern Caribbean coral-reef gnathiid, Gnathia marleyi. We also used counts from large emergence traps and light traps to estimate additional life cycle parameters, emergence rates, and total gnathiid density on substrate, and to calibrate the light trap design to provide estimates of rate of emergence and total gnathiid density in habitat not amenable to emergence trap deployment.

  19. Efficient Parameter Estimation and Control Based on a Modified LOS Guidance System of an Underwater Vehicle

    Directory of Open Access Journals (Sweden)

    Elías Revestido Herrero

    2017-12-01

    Full Text Available In this work, a methodology is proposed for improving the parameter estimation efficiency of a non-linear manoeuvring model of a torpedo-shaped unmanned underwater vehicle. For this purpose, data were acquired in different tests carried out with the aforementioned vehicle at the facilities of the Canal de Experiencias Hidrodinámicas del Pardo, Madrid. In the proposed methodology, the following aspects are taken into account in order to improve the parameter estimation efficiency: selection of the sampling period; smoothing of the data acquired in the tests, considering a compromise between the variance and bias of the smoothing filter to be applied; and statistical analysis of the classical linear regression model proposed in each trial for the estimation of the parameters. Improvements in efficiency are verified by graphical and statistical methods. In addition, a modification of the conventional LOS method is proposed which provides satisfactory results in the presence of ocean currents by performing a simple procedure.

  20. Experimental design optimisation: theory and application to estimation of receptor model parameters using dynamic positron emission tomography

    International Nuclear Information System (INIS)

    Delforge, J.; Syrota, A.; Mazoyer, B.M.

    1989-01-01

    A general framework and various criteria for experimental design optimisation are presented. The methodology is applied to the estimation of receptor-ligand reaction model parameters with dynamic positron emission tomography data. The possibility of improving parameter estimation using a new experimental design combining an injection of the β+-labelled ligand and an injection of the cold ligand is investigated. Numerical simulations predict remarkable improvement in the accuracy of parameter estimates with this new experimental design, and in particular the possibility of separate estimation of the association constant (k+1) and the receptor density (B'max) in a single experiment. The simulation predictions are validated using experimental PET data, for which parameter uncertainties are reduced by factors ranging from 17 to 1000. (author)

  1. Vibration Suppression for Improving the Estimation of Kinematic Parameters on Industrial Robots

    Directory of Open Access Journals (Sweden)

    David Alejandro Elvira-Ortiz

    2016-01-01

    Full Text Available Vibration is a phenomenon that is present in every industrial system, such as CNC machines and industrial robots. Moreover, the sensors used to estimate the angular position of a joint in an industrial robot are severely affected by vibrations, leading to wrong estimates. This paper proposes a methodology for improving the estimation of kinematic parameters of industrial robots through proper suppression of the vibration components present in signals acquired from two primary sensors: an accelerometer and a gyroscope. A Kalman filter is responsible for filtering out the spurious vibration. Additionally, a sensor fusion technique is used to merge information from both sensors and improve the results obtained using each sensor separately. The methodology is implemented in a proprietary hardware signal processor and tested on an ABB IRB 140 industrial robot, first by analyzing the motion profile of a single joint and then by estimating the path tracking of two welding tasks: one rectangular and one circular. The results of this work prove that the sensor fusion technique, accompanied by proper suppression of vibrations, delivers better estimation than other proposed techniques.
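
    A textbook-style sketch of the accelerometer/gyroscope fusion idea described above, not the paper's implementation: a two-state Kalman filter tracks joint angle and gyro bias while the vibration enters through the accelerometer-derived angle. The motion profile, noise levels and tuning are all assumptions.

    import numpy as np

    rng = np.random.default_rng(9)
    dt, n = 0.01, 2000
    t = np.arange(n) * dt
    angle_true = 0.5 * np.sin(0.5 * t)                      # smooth joint motion
    gyro = np.gradient(angle_true, dt) + 0.02 + rng.normal(0, 0.05, n)         # rate + bias + noise
    accel_angle = angle_true + 0.05 * np.sin(40 * t) + rng.normal(0, 0.03, n)  # vibration-corrupted

    x = np.zeros(2)                  # state: [angle, gyro bias]
    P = np.eye(2)
    Q = np.diag([1e-5, 1e-7])        # process noise
    R = 0.05 ** 2                    # accelerometer-angle noise
    F = np.array([[1.0, -dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    est = np.zeros(n)
    for k in range(n):
        # Predict with the gyro as control input: angle += dt*(gyro - bias)
        x = F @ x + np.array([dt * gyro[k], 0.0])
        P = F @ P @ F.T + Q
        # Update with the vibration-corrupted accelerometer angle
        S = H @ P @ H.T + R
        K = (P @ H.T) / S
        x = x + (K * (accel_angle[k] - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        est[k] = x[0]

    print("RMS angle error:", np.sqrt(np.mean((est - angle_true) ** 2)))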

  2. Air quality estimation by computational intelligence methodologies

    Directory of Open Access Journals (Sweden)

    Ćirić Ivan T.

    2012-01-01

    Full Text Available The subject of this study is to compare different computational intelligence methodologies based on artificial neural networks used for forecasting an air quality parameter - the emission of CO2 - in the city of Niš. Firstly, the inputs of the CO2 emission estimator are analyzed and their measurement is explained. It is known that traffic is the single largest emitter of CO2 in Europe; therefore, a proper treatment of this component of pollution is very important for precise estimation of emission levels. With this in mind, measurements of traffic frequency and CO2 concentration were carried out at critical intersections in the city, together with monitoring of vehicle directions at the crossroads. Finally, based on the experimental data, different soft-computing estimators were developed, such as a feed-forward neural network, a recurrent neural network, and a hybrid neuro-fuzzy estimator of CO2 emission levels. Test data for some characteristic cases, presented at the end of the paper, show good agreement of the developed estimators' outputs with the experimental data, a true indicator of the usability of the implemented methods. [Projects of the Ministry of Science of the Republic of Serbia, no. III42008-2/2011: Evaluation of Energy Performances and Indoor Environment Quality of Educational Buildings in Serbia with Impact to Health, and no. TR35016/2011: Research of MHD Flows around the Bodies, in the Tip Clearances and Channels and Application in the MHD Pumps Development]

  3. Parameter estimation in plasmonic QED

    Science.gov (United States)

    Jahromi, H. Rangani

    2018-03-01

    We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease of the spontaneous emission rate of the NVCs slows the reduction of the quantum Fisher information (QFI), which measures the precision of the estimation, and therefore delays its vanishing. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. Besides, one-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe, at any arbitrary time, considerably enhances the precision of estimation in comparison with one-qubit estimation.
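
    The link between the QFI and the attainable precision invoked here is the quantum Cramér-Rao bound, which in its standard form (not spelled out in the record) reads

$$
\mathrm{Var}(\hat{\beta}) \;\ge\; \frac{1}{M\,F_Q(\beta)},
$$

    where $F_Q(\beta)$ is the quantum Fisher information for the β factor and $M$ is the number of independent repetitions; any decay of the QFI therefore directly inflates the attainable estimation variance.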

  4. Parameter Estimation in Continuous Time Domain

    Directory of Open Access Journals (Sweden)

    Gabriela M. ATANASIU

    2016-12-01

    Full Text Available This paper presents the application of a continuous-time parameter estimation method for estimating the structural parameters of a real bridge structure. For the purpose of illustrating this method, two case studies of a bridge pile located in a highly seismic risk area are considered, for which the structural parameters for the mass, damping and stiffness are estimated. The estimation process is followed by validation of the analytical results and comparison with the measurement data. Further benefits and applications of the continuous-time parameter estimation method in civil engineering are presented in the final part of this paper.
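
    The record does not detail the estimation method itself; as a hedged illustration of how mass, damping and stiffness can be recovered from continuous-time records, the sketch below fits a hypothetical single-degree-of-freedom model m·x'' + c·x' + k·x = f(t) by linear least squares (all numerical values are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
m_true, c_true, k_true = 2.0e5, 1.5e4, 8.0e6    # kg, N*s/m, N/m (assumed)

t = np.linspace(0.0, 10.0, 2000)
x = 0.01 * np.sin(2.0 * np.pi * 1.2 * t)              # displacement record
v = np.gradient(x, t)                                  # velocity
a = np.gradient(v, t)                                  # acceleration
f = m_true * a + c_true * v + k_true * x               # consistent force
f_meas = f + rng.normal(0.0, 0.01 * np.std(f), t.size) # measurement noise

# The equation of motion is linear in (m, c, k), so ordinary least
# squares on the regression matrix [a, v, x] recovers the parameters.
A = np.column_stack([a, v, x])
m_hat, c_hat, k_hat = np.linalg.lstsq(A, f_meas, rcond=None)[0]
print(m_hat, c_hat, k_hat)
```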

  5. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    Muhammad Zahid Rashid

    2011-04-01

    Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values of the parameters and different sample sizes.
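
    A minimal sketch of the comparison described above, restricted to the maximum likelihood and moment estimators of the two-parameter exponential distribution and using MSE as the yardstick (sample size, parameter values and trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, trials = 5.0, 2.0, 30, 5000   # location, scale, sample size

mle = np.empty((trials, 2))
mom = np.empty((trials, 2))
for i in range(trials):
    x = mu + rng.exponential(sigma, n)
    # Maximum likelihood: location = sample minimum, scale = mean excess
    mle[i] = x.min(), x.mean() - x.min()
    # Method of moments: mean = mu + sigma, standard deviation = sigma
    s = x.std(ddof=1)
    mom[i] = x.mean() - s, s

for name, est in (("MLE", mle), ("ME ", mom)):
    mse = np.mean((est - [mu, sigma]) ** 2, axis=0)
    print(name, "MSE(location)=%.4f  MSE(scale)=%.4f" % tuple(mse))
```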

  6. A Central Composite Face-Centered Design for Parameters Estimation of PEM Fuel Cell Electrochemical Model

    Directory of Open Access Journals (Sweden)

    Khaled MAMMAR

    2013-11-01

    Full Text Available In this paper, a new approach based on design of experiments (DoE) methodology is used to estimate the optimal values of the unknown parameters of a proton exchange membrane fuel cell (PEMFC) electrochemical model. The proposed approach combines a central composite face-centered (CCF) design with a numerical PEMFC electrochemical model. Simulation results obtained using the electrochemical model help to predict the cell voltage in terms of inlet partial pressures of hydrogen and oxygen, stack temperature, and operating current. The model, together with the CCF design methodology, is used for a parametric analysis of the electrochemical model, making it possible to evaluate the relative importance of each parameter to the simulation accuracy. Moreover, this methodology is able to determine the exact values of the parameters from the manufacturer data. It was tested on the BCS 500-W stack PEM generator, a stack rated at 500 W, manufactured by the American company BCS Technologies FC.

  7. Bayesian Parameter Estimation for Heavy-Duty Vehicles

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Eric; Konan, Arnaud; Duran, Adam

    2017-03-28

    Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of the estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
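
    The report's exact road-load formulation, priors and data are not given; the sketch below implements the general Metropolis scheme described above on a simplified road load equation F = m·a + m·g·Crr + 0.5·ρ·CdA·v², with synthetic drive data (all constants, noise levels and proposal scales are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
g, rho = 9.81, 1.2

def road_load(theta, v, a):
    m, cda, crr = theta
    return m * a + m * g * crr + 0.5 * rho * cda * v**2

# Synthetic "logged" drive data, for illustration only
v = np.abs(rng.normal(15.0, 5.0, 500))          # speed, m/s
a = rng.normal(0.0, 0.3, 500)                   # acceleration, m/s^2
theta_true = np.array([12000.0, 5.0, 0.007])    # mass, Cd*A, Crr
F_meas = road_load(theta_true, v, a) + rng.normal(0.0, 300.0, v.size)

def log_prob(theta):
    if np.any(theta <= 0.0):
        return -np.inf                           # flat positivity prior
    resid = F_meas - road_load(theta, v, a)
    return -0.5 * np.sum((resid / 300.0) ** 2)   # Gaussian likelihood

# Metropolis random walk: accept with the probability ratio to the
# current state, so the chain history samples the posterior
theta = np.array([10000.0, 4.0, 0.01])
lp = log_prob(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [100.0, 0.05, 1e-4])
    lp_prop = log_prob(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
print(np.mean(chain[5000:], axis=0))             # posterior means
```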

  8. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation.

    Science.gov (United States)

    Ribba, B; Grimm, H P; Agoram, B; Davies, M R; Gadkar, K; Niederer, S; van Riel, N; Timmis, J; van der Graaf, P H

    2017-08-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development to focus discussions on two critical methodological aspects of QSP model development: optimal structural granularity and parameter estimation. Here we report, in a perspective article, a summary of the presentations and discussions. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  9. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    Science.gov (United States)

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.

  10. Cost estimating for CERCLA remedial alternatives a unit cost methodology

    International Nuclear Information System (INIS)

    Brettin, R.W.; Carr, D.J.; Janke, R.J.

    1995-06-01

    The United States Environmental Protection Agency (EPA) Guidance for Conducting Remedial Investigations and Feasibility Studies Under CERCLA, Interim Final, dated October 1988 (EPA 1988) requires a detailed analysis be conducted of the most promising remedial alternatives against several evaluation criteria, including cost. To complete the detailed analysis, order-of-magnitude cost estimates (having an accuracy of +50 percent to -30 percent) must be developed for each remedial alternative. This paper presents a methodology for developing cost estimates of remedial alternatives comprised of various technology and process options with a wide range of estimated contaminated media quantities. In addition, the cost estimating methodology provides flexibility for incorporating revisions to remedial alternatives and achieves the desired range of accuracy. It is important to note that the cost estimating methodology presented here was developed as a concurrent path to the development of contaminated media quantity estimates. This methodology can be initiated before contaminated media quantities are estimated. As a result, this methodology is useful in developing cost estimates for use in screening and evaluating remedial technologies and process options. However, remedial alternative cost estimates cannot be prepared without the contaminated media quantity estimates. In the conduct of the feasibility study for Operable Unit 5 at the Fernald Environmental Management Project (FEMP), fourteen remedial alternatives were retained for detailed analysis. Each remedial alternative was composed of combinations of remedial technologies and processes which were earlier determined to be best suited for addressing the media-specific contaminants found at the FEMP site, and achieving desired remedial action objectives

  11. Parameter estimation for lithium ion batteries

    Science.gov (United States)

    Santhanagopalan, Shriram

    road conditions is important. An algorithm to predict the SOC in time intervals as small as 5 ms is in critical demand. In such cases, the conventional non-linear estimation procedure is not time-effective. Methodologies exist in the literature, such as those based on fuzzy logic; however, these techniques require a lot of computational storage space. Consequently, it is not possible to implement such techniques on a micro-chip for integration as part of a real-time device. The Extended Kalman Filter (EKF) based approach presented in this work is a first step towards developing an efficient method to predict online the State of Charge of a lithium ion cell based on an electrochemical model. The final part of the dissertation focuses on incorporating uncertainty in parameter values into electrochemical models using the polynomial chaos theory (PCT).
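
    The dissertation's electrochemical model is not reproduced here; the following deliberately simplified sketch shows the EKF structure for SOC estimation with coulomb-counting dynamics and a hypothetical linear OCV curve standing in for the full model (capacity, resistance and noise levels are assumptions):

```python
import numpy as np

Q_cap = 2.0 * 3600.0          # capacity in coulombs (assumed 2 Ah cell)
R0 = 0.05                     # ohmic resistance, ohm (assumed)
ocv = np.poly1d([0.8, 3.2])   # hypothetical OCV(SOC) = 3.2 + 0.8*SOC volts

def ekf_soc(current, v_meas, dt, soc0=0.5, p0=0.1, q=1e-7, r=1e-3):
    soc, p = soc0, p0
    trace = []
    for i_k, v_k in zip(current, v_meas):
        # Predict: coulomb counting (state Jacobian is 1)
        soc -= i_k * dt / Q_cap
        p += q
        # Update: linearize the voltage model around the prediction
        h = ocv.deriv()(soc)               # dV/dSOC
        k = p * h / (h * p * h + r)        # Kalman gain
        soc += k * (v_k - (ocv(soc) - i_k * R0))
        p *= 1.0 - k * h
        trace.append(soc)
    return np.array(trace)

rng = np.random.default_rng(0)
i_load = np.ones(100)                              # 1 A discharge at 1 Hz
soc_true = 0.8 - np.cumsum(i_load) / Q_cap
v_load = ocv(soc_true) - i_load * R0 + rng.normal(0.0, 0.005, 100)
print(ekf_soc(i_load, v_load, dt=1.0)[-1], soc_true[-1])
```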

  12. Model methodology for estimating pesticide concentration extremes based on sparse monitoring data

    Science.gov (United States)

    Vecchia, Aldo V.

    2018-03-22

    This report describes a new methodology for using sparse (weekly or less frequent observations) and potentially highly censored pesticide monitoring data to simulate daily pesticide concentrations and associated quantities used for acute and chronic exposure assessments, such as the annual maximum daily concentration. The new methodology is based on a statistical model that expresses log-transformed daily pesticide concentration in terms of a seasonal wave, flow-related variability, long-term trend, and serially correlated errors. Methods are described for estimating the model parameters, generating conditional simulations of daily pesticide concentration given sparse (weekly or less frequent) and potentially highly censored observations, and estimating concentration extremes based on the conditional simulations. The model can be applied to datasets with as few as 3 years of record, as few as 30 total observations, and as few as 10 uncensored observations. The model was applied to atrazine, carbaryl, chlorpyrifos, and fipronil data for U.S. Geological Survey pesticide sampling sites with sufficient data for applying the model. A total of 112 sites were analyzed for atrazine, 38 for carbaryl, 34 for chlorpyrifos, and 33 for fipronil. The results are summarized in this report, and R functions, described in this report and provided in an accompanying model archive, can be used to fit the model parameters and generate conditional simulations of daily concentrations for use in investigations involving pesticide exposure risk and uncertainty.
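
    A toy version of the model structure described above (seasonal wave, long-term trend and serially correlated errors; the flow-related term and the censoring machinery of the actual R functions are omitted) can be sketched as follows, with all coefficients invented:

```python
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(3 * 365)                         # three years of record

# Simulate log-concentration: trend + annual seasonal wave + AR(1) errors
trend = -0.0004 * days
wave = 0.8 * np.sin(2 * np.pi * days / 365.25 - 1.0)
err = np.zeros(days.size)
for k in range(1, days.size):                     # serially correlated errors
    err[k] = 0.9 * err[k - 1] + rng.normal(0.0, 0.15)
logc = 1.0 + trend + wave + err

# Sparse monitoring: one observation per week
obs = days[::7]

# Fit trend plus the first annual harmonic by least squares
X = np.column_stack([np.ones(obs.size), obs,
                     np.sin(2 * np.pi * obs / 365.25),
                     np.cos(2 * np.pi * obs / 365.25)])
beta = np.linalg.lstsq(X, logc[obs], rcond=None)[0]
print("intercept, trend, sin, cos:", np.round(beta, 4))
```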

  13. Multi-objective optimization in quantum parameter estimation

    Science.gov (United States)

    Gong, BeiLi; Cui, Wei

    2018-04-01

    We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, this usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information to improve the parameter estimation precision, and (2) minimizing the deformation of the system state to maintain its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of Hamiltonian control in improving the precision of quantum parameter estimation.

  14. On-Line Flutter Prediction Tool for Wind Tunnel Flutter Testing using Parameter Varying Estimation Methodology, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — ZONA Technology, Inc. (ZONA) proposes to develop an on-line flutter prediction tool for wind tunnel model using the parameter varying estimation (PVE) technique to...

  15. Health Parameter Estimation with Second-Order Sliding Mode Observer for a Turbofan Engine

    Directory of Open Access Journals (Sweden)

    Xiaodong Chang

    2017-07-01

    Full Text Available In this paper the problem of health parameter estimation in an aero-engine is investigated by using an unknown input observer-based methodology, implemented by a second-order sliding mode observer (SOSMO). Unlike conventional state estimator-based schemes, such as Kalman filters (KF) and sliding mode observers (SMO), the proposed scheme uses a “reconstruction signal” to estimate health parameters modeled as artificial inputs, and is not only applicable to long-term health degradation, but also reacts much more quickly to abrupt fault cases. In view of the inevitable uncertainties in engine dynamics and modeling, a weighting matrix is created to minimize their effect on estimation by using linear matrix inequalities (LMI). A big step toward uncertainty modeling is taken compared with our previous SMO-based work, in that uncertainties are considered in a more practical form. Moreover, to avoid chattering in sliding modes, the super-twisting algorithm (STA) is employed in the observer design. Various simulations are carried out, based on comparisons between the KF-based scheme, the SMO-based scheme in our earlier research, and the proposed method. The results consistently demonstrate the capabilities and advantages of the proposed approach in health parameter estimation.
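
    As a hedged illustration of the super-twisting algorithm and the “reconstruction signal” idea, the sketch below reconstructs an unknown input d(t) on a scalar toy system rather than the engine model; the gains and signals are assumptions:

```python
import numpy as np

# Super-twisting sliding mode observer reconstructing an unknown input
# d(t) on a scalar system x' = u + d (toy stand-in for the engine model).
dt, T = 1e-4, 2.0
n = int(T / dt)
k1, k2 = 6.0, 50.0                           # STA gains (assumed)

x = x_hat = w = 0.0
d_hat = np.zeros(n)
for k in range(n):
    t = k * dt
    d = 0.5 + 0.3 * np.sin(2 * np.pi * t)    # unknown "health" input
    u = -x                                   # any known control
    e = x_hat - x                            # output estimation error
    z = -k1 * np.sqrt(abs(e)) * np.sign(e) + w   # continuous STA output
    w += -k2 * np.sign(e) * dt               # integral (second-order) term
    x += (u + d) * dt                        # propagate plant
    x_hat += (u + z) * dt                    # propagate observer
    d_hat[k] = z                             # reconstruction signal -> d(t)

print(d_hat[-5:])                            # should track 0.5 + 0.3*sin(2*pi*t)
```

    Because the discontinuous sign term acts only on the derivative of the injected signal, the output z stays continuous, which is exactly how the super-twisting construction avoids chattering.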

  16. A Simulation-Based Soft Error Estimation Methodology for Computer Systems

    OpenAIRE

    Sugihara, Makoto; Ishihara, Tohru; Hashimoto, Koji; Muroyama, Masanori

    2006-01-01

    This paper proposes a simulation-based soft error estimation methodology for computer systems. Accumulating soft error rates (SERs) of all memories in a computer system results in pessimistic soft error estimation. This is because memory cells are used spatially and temporally and not all soft errors in them make the computer system faulty. Our soft-error estimation methodology considers the locations and the timings of soft errors occurring at every level of memory hierarchy and estimates th...

  17. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis through the last decade, but despite their significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...

  18. Application of spreadsheet to estimate infiltration parameters

    Directory of Open Access Journals (Sweden)

    Mohammad Zakwan

    2016-09-01

    Full Text Available Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for estimation of effective rainfall, groundwater recharge, and the design of irrigation systems. Numerous infiltration models are in use for estimation of infiltration rates. The conventional graphical approach for estimation of infiltration parameters often fails to estimate the infiltration parameters precisely. The generalised reduced gradient (GRG) solver is reported to be a powerful tool for estimating parameters of nonlinear equations and it has, therefore, been implemented to estimate the infiltration parameters in the present paper. Field data of infiltration rates available in the literature for sandy loam soils of Umuahia, Nigeria were used to evaluate the performance of the GRG solver. A comparative study of the graphical method and the GRG solver shows that the performance of the GRG solver is better than that of the conventional graphical method for estimation of infiltration rates. Further, the performance of the Kostiakov model has been found to be better than that of the Horton and Philip models in most of the cases, based on both approaches of parameter estimation.
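
    As an illustration, the Kostiakov rate equation f(t) = a·t^(-b) can be fitted by nonlinear least squares, which plays the same role here as the GRG solver used in the paper; the observations below are invented stand-ins for the Umuahia field data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Kostiakov infiltration model in rate form: f(t) = a * t**(-b)
def kostiakov(t, a, b):
    return a * t**(-b)

# Hypothetical observations: elapsed time (min), infiltration rate (mm/h)
t_obs = np.array([5, 10, 20, 30, 45, 60, 90, 120], dtype=float)
f_obs = np.array([110, 78, 55, 45, 36, 31, 25, 22], dtype=float)

# Nonlinear least squares stands in for the spreadsheet GRG solver
(a, b), _ = curve_fit(kostiakov, t_obs, f_obs, p0=(100.0, 0.5))
rmse = np.sqrt(np.mean((kostiakov(t_obs, a, b) - f_obs) ** 2))
print(f"a = {a:.2f}, b = {b:.3f}, RMSE = {rmse:.2f} mm/h")
```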

  19. Parameter Estimation of Partial Differential Equation Models.

    Science.gov (United States)

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown, and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.

  20. Sensitivity of probability-of-failure estimates with respect to probability of detection curve parameters

    Energy Technology Data Exchange (ETDEWEB)

    Garza, J. [University of Texas at San Antonio, Mechanical Engineering, 1 UTSA circle, EB 3.04.50, San Antonio, TX 78249 (United States); Millwater, H., E-mail: harry.millwater@utsa.edu [University of Texas at San Antonio, Mechanical Engineering, 1 UTSA circle, EB 3.04.50, San Antonio, TX 78249 (United States)

    2012-04-15

    A methodology has been developed and demonstrated that can be used to compute the sensitivity of the probability-of-failure (POF) with respect to the parameters of inspection processes that are simulated using probability of detection (POD) curves. The formulation is such that the probabilistic sensitivities can be obtained at negligible cost using sampling methods by reusing the samples used to compute the POF. As a result, the methodology can be implemented for negligible cost in a post-processing non-intrusive manner thereby facilitating implementation with existing or commercial codes. The formulation is generic and not limited to any specific random variables, fracture mechanics formulation, or any specific POD curve as long as the POD is modeled parametrically. Sensitivity estimates for the cases of different POD curves at multiple inspections, and the same POD curves at multiple inspections have been derived. Several numerical examples are presented and show excellent agreement with finite difference estimates with significant computational savings. - Highlights: ► Sensitivity of the probability-of-failure with respect to the probability-of-detection curve. ► The sensitivities are computed with negligible cost using Monte Carlo sampling. ► The change in the POF due to a change in the POD curve parameters can be easily estimated.

  1. Sensitivity of probability-of-failure estimates with respect to probability of detection curve parameters

    International Nuclear Information System (INIS)

    Garza, J.; Millwater, H.

    2012-01-01

    A methodology has been developed and demonstrated that can be used to compute the sensitivity of the probability-of-failure (POF) with respect to the parameters of inspection processes that are simulated using probability of detection (POD) curves. The formulation is such that the probabilistic sensitivities can be obtained at negligible cost using sampling methods by reusing the samples used to compute the POF. As a result, the methodology can be implemented for negligible cost in a post-processing non-intrusive manner thereby facilitating implementation with existing or commercial codes. The formulation is generic and not limited to any specific random variables, fracture mechanics formulation, or any specific POD curve as long as the POD is modeled parametrically. Sensitivity estimates for the cases of different POD curves at multiple inspections, and the same POD curves at multiple inspections have been derived. Several numerical examples are presented and show excellent agreement with finite difference estimates with significant computational savings. - Highlights: ► Sensitivity of the probability-of-failure with respect to the probability-of-detection curve. ► The sensitivities are computed with negligible cost using Monte Carlo sampling. ► The change in the POF due to a change in the POD curve parameters can be easily estimated.

  2. Experimental design for estimating parameters of rate-limited mass transfer: Analysis of stream tracer studies

    Science.gov (United States)

    Wagner, Brian J.; Harvey, Judson W.

    1997-01-01

    Tracer experiments are valuable tools for analyzing the transport characteristics of streams and their interactions with shallow groundwater. The focus of this work is the design of tracer studies in high-gradient stream systems subject to advection, dispersion, groundwater inflow, and exchange between the active channel and zones in surface or subsurface water where flow is stagnant or slow moving. We present a methodology for (1) evaluating and comparing alternative stream tracer experiment designs and (2) identifying those combinations of stream transport properties that pose limitations to parameter estimation and therefore a challenge to tracer test design. The methodology uses the concept of global parameter uncertainty analysis, which couples solute transport simulation with parameter uncertainty analysis in a Monte Carlo framework. Two general conclusions resulted from this work. First, the solute injection and sampling strategy has an important effect on the reliability of transport parameter estimates. We found that constant injection with sampling through concentration rise, plateau, and fall provided considerably more reliable parameter estimates than a pulse injection across the spectrum of transport scenarios likely encountered in high-gradient streams. Second, for a given tracer test design, the uncertainties in mass transfer and storage-zone parameter estimates are strongly dependent on the experimental Damkohler number, DaI, which is a dimensionless combination of the rates of exchange between the stream and storage zones, the stream-water velocity, and the stream reach length of the experiment. Parameter uncertainties are lowest at DaI values on the order of 1.0. When DaI values are much less than 1.0 (owing to high velocity, long exchange timescale, and/or short reach length), parameter uncertainties are high because only a small amount of tracer interacts with storage zones in the reach. For the opposite conditions (DaI ≫ 1.0), solute
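
    For reference, the experimental Damkohler number used above is commonly defined in the transient-storage literature as

$$
DaI = \frac{\alpha\,(1 + A/A_s)\,L}{u},
$$

    where $\alpha$ is the stream-storage exchange coefficient, $A$ and $A_s$ are the cross-sectional areas of the channel and the storage zone, $L$ is the reach length, and $u$ is the stream-water velocity.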

  3. Methodology development for estimating support behavior of spacer grid spring in core

    International Nuclear Information System (INIS)

    Yoon, Kyung Ho; Kang, Heung Seok; Kim, Hyung Kyu; Song, Kee Nam

    1998-04-01

    The fuel rod (FR) support behavior changes during operation as a result of effects such as clad creep-down, spring force relaxation due to irradiation, and irradiation growth of spacer straps with time or increasing burnup. The FR support behavior is closely associated with FR damage due to fretting; therefore, analysis of the FR support behavior is normally required to minimize the damage. The characteristics of the parameters which affect the FR support behavior, and the methodology developed for estimating the FR support behavior in the reactor core, are described in this work. The FR support condition for the KOFA (KOrean Fuel Assembly) fuel has been analyzed by this method, and the results of the analysis show that fuel failure due to fuel rod fretting wear is closely related to the support behavior of the FR in the core. Therefore, the present methodology for estimating the FR support condition seems to be useful for estimating the actual FR support condition. In addition, optimization seems to be a reliable tool for establishing the optimal support condition on the basis of these results. (author). 15 refs., 3 tabs., 26 figs

  4. Precision Parameter Estimation and Machine Learning

    Science.gov (United States)

    Wandelt, Benjamin D.

    2008-12-01

    I discuss the strategy of ``Acceleration by Parallel Precomputation and Learning'' (APPLe) that can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov-Chain Monte Carlo techniques to efficiently explore a likelihood function, posterior distribution or χ2-surface. This strategy is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo; and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to be able to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.

  5. Reionization history and CMB parameter estimation

    International Nuclear Information System (INIS)

    Dizgah, Azadeh Moradinezhad; Kinney, William H.; Gnedin, Nickolay Y.

    2013-01-01

    We study how uncertainty in the reionization history of the universe affects estimates of other cosmological parameters from the Cosmic Microwave Background. We analyze WMAP7 data and synthetic Planck-quality data generated using a realistic scenario for the reionization history of the universe obtained from high-resolution numerical simulation. We perform parameter estimation using a simple sudden reionization approximation, and using the Principal Component Analysis (PCA) technique proposed by Mortonson and Hu. We reach two main conclusions: (1) Adopting a simple sudden reionization model does not introduce measurable bias into values for other parameters, indicating that detailed modeling of reionization is not necessary for the purpose of parameter estimation from future CMB data sets such as Planck. (2) PCA analysis does not allow accurate reconstruction of the actual reionization history of the universe in a realistic case

  6. Reionization history and CMB parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Dizgah, Azadeh Moradinezhad; Gnedin, Nickolay Y.; Kinney, William H.

    2013-05-01

    We study how uncertainty in the reionization history of the universe affects estimates of other cosmological parameters from the Cosmic Microwave Background. We analyze WMAP7 data and synthetic Planck-quality data generated using a realistic scenario for the reionization history of the universe obtained from high-resolution numerical simulation. We perform parameter estimation using a simple sudden reionization approximation, and using the Principal Component Analysis (PCA) technique proposed by Mortonson and Hu. We reach two main conclusions: (1) Adopting a simple sudden reionization model does not introduce measurable bias into values for other parameters, indicating that detailed modeling of reionization is not necessary for the purpose of parameter estimation from future CMB data sets such as Planck. (2) PCA analysis does not allow accurate reconstruction of the actual reionization history of the universe in a realistic case.

  7. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  8. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of that cardinality occurs the same number of times, for a broad class of Thurstone choice models the mean squared error decreases with the cardinality of comparison sets, but only marginally, according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report an empirical evaluation of some claims and key parameters revealed by theory, using both synthetic and real-world input data from popular sport competitions and online labor platforms.
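
    For the pair-comparison (Bradley-Terry) special case mentioned above, the maximum likelihood strengths can be computed with the classical Zermelo/minorization-maximization iteration; the win counts below are toy data:

```python
import numpy as np

# wins[i, j] = number of times item i beat item j (toy data, assumed)
wins = np.array([[0, 7, 9],
                 [3, 0, 6],
                 [1, 4, 0]], dtype=float)
n = wins.shape[0]
w = np.ones(n)                          # initial strength parameters

for _ in range(200):
    total_wins = wins.sum(axis=1)
    denom = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                n_ij = wins[i, j] + wins[j, i]   # games between i and j
                denom[i] += n_ij / (w[i] + w[j])
    # MM update: w_i = (wins of i) / sum_j n_ij / (w_i + w_j)
    w = total_wins / denom
    w /= w.sum()                        # fix the scale invariance
print(np.round(w, 3))
```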

  9. CONTAMINATED SOIL VOLUME ESTIMATE TRACKING METHODOLOGY

    International Nuclear Information System (INIS)

    Durham, L.A.; Johnson, R.L.; Rieman, C.; Kenna, T.; Pilon, R.

    2003-01-01

    The U.S. Army Corps of Engineers (USACE) is conducting a cleanup of radiologically contaminated properties under the Formerly Utilized Sites Remedial Action Program (FUSRAP). The largest cost element for most of the FUSRAP sites is the transportation and disposal of contaminated soil. Project managers and engineers need an estimate of the volume of contaminated soil to determine project costs and schedule. Once excavation activities begin and additional remedial action data are collected, the actual quantity of contaminated soil often deviates from the original estimate, resulting in cost and schedule impacts to the project. The project costs and schedule need to be frequently updated by tracking the actual quantities of excavated soil and contaminated soil remaining during the life of a remedial action project. A soil volume estimate tracking methodology was developed to provide a mechanism for project managers and engineers to create better project controls of costs and schedule. For the FUSRAP Linde site, an estimate of the initial volume of in situ soil above the specified cleanup guidelines was calculated on the basis of discrete soil sample data and other relevant data using indicator geostatistical techniques combined with Bayesian analysis. During the remedial action, updated volume estimates of remaining in situ soils requiring excavation were calculated on a periodic basis. In addition to taking into account the volume of soil that had been excavated, the updated volume estimates incorporated both new gamma walkover surveys and discrete sample data collected as part of the remedial action. A civil survey company provided periodic estimates of actual in situ excavated soil volumes. By using the results from the civil survey of actual in situ volumes excavated and the updated estimate of the remaining volume of contaminated soil requiring excavation, the USACE Buffalo District was able to forecast and update project costs and schedule. The soil volume

  10. Ranking as parameter estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav; Guy, Tatiana Valentine

    2009-01-01

    Roč. 4, č. 2 (2009), s. 142-158 ISSN 1745-7645 R&D Projects: GA MŠk 2C06001; GA AV ČR 1ET100750401; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords: ranking * Bayesian estimation * negotiation * modelling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/AS/karny- ranking as parameter estimation.pdf

  11. Transient analysis of intercalation electrodes for parameter estimation

    Science.gov (United States)

    Devan, Sheba

    An essential part of integrating batteries as power sources in any application, be it a large-scale automotive application or a small-scale portable application, is an efficient Battery Management System (BMS). The combination of a battery with a microprocessor-based BMS (called a "smart battery") helps prolong the life of the battery by operating in the optimal regime, and provides accurate information regarding the battery to the end user. The main purposes of a BMS are cell protection, monitoring and control, and communication between different components. These purposes are fulfilled by tracking the change in the parameters of the intercalation electrodes in the batteries. Consequently, the functions of the BMS should be prompt, which requires the parameter-extraction methodology to be time-efficient. The traditional transient techniques applied so far may not be suitable, for reasons such as the inability to apply these techniques while the battery is under operation, long experimental times, etc. The primary aim of this research work is to design a fast, accurate and reliable technique that can be used to extract parameter values of the intercalation electrodes. A methodology based on analysis of the short-time response to a sinusoidal input perturbation in the time domain is demonstrated using a porous electrode model for an intercalation electrode. It is shown that the parameters associated with the interfacial processes occurring in the electrode can be determined rapidly, within a few milliseconds, by measuring the response in the transient region. The short-time analysis in the time domain is then extended to a single particle model that involves bulk diffusion in the solid phase in addition to interfacial processes. A systematic procedure for sequential parameter estimation using sensitivity analysis is described. Further, the short-time response and the input perturbation are transformed into the frequency domain using the Fast Fourier Transform

  12. Estimation of kinematic parameters in CALIFA galaxies: no-assumption on internal dynamics

    Science.gov (United States)

    García-Lorenzo, B.; Barrera-Ballesteros, J.; CALIFA Team

    2016-06-01

    We propose a simple approach to homogeneously estimate kinematic parameters for a broad variety of galaxies (ellipticals, spirals, irregulars or interacting systems). This methodology avoids the use of any kinematic model or any assumption on internal dynamics. This simple but novel approach allows us to determine the frequency of kinematic distortions, the systemic velocity, the kinematic center, and the kinematic position angles, which are directly measured from the two-dimensional distributions of radial velocities. We test our analysis tools using the CALIFA Survey.

  13. Cosmological parameter estimation using Particle Swarm Optimization

    Science.gov (United States)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and these make the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
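
    A bare-bones PSO loop of the kind described above might look as follows, with a toy quadratic surface standing in for the CMB likelihood (all swarm settings are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

def chi2(theta):
    """Toy chi-square surface standing in for the CMB likelihood."""
    return np.sum((theta - np.array([0.3, 0.05, 0.96])) ** 2, axis=-1)

# Each particle tracks its personal best, the swarm shares a global
# best, and velocities blend inertia with the two attracting pulls.
n_part, n_dim, iters = 30, 3, 200
w_in, c1, c2 = 0.7, 1.5, 1.5                  # inertia and pull strengths
pos = rng.uniform(0.0, 1.0, (n_part, n_dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), chi2(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_part, n_dim))
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    val = chi2(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]
print(np.round(gbest, 4))                     # converges near [0.3, 0.05, 0.96]
```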

  14. Cosmological parameter estimation using Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Prasad, J; Souradeep, T

    2014-01-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and these make the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite

  15. Statistics of Parameter Estimates: A Concrete Example

    KAUST Repository

    Aguilar, Oscar

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise levels, models, or prior knowledge. But what can we say about the validity of such estimates, and the influence of these assumptions? This paper is concerned with methods to address these questions, and for didactic purposes it is written in the context of a concrete nonlinear parameter estimation problem. We will use the results of a physical experiment conducted by Allmaras et al. at Texas A&M University [M. Allmaras et al., SIAM Rev., 55 (2013), pp. 149-167] to illustrate the importance of validation procedures for statistical parameter estimation. We describe statistical methods and data analysis tools to check the choices of likelihood and prior distributions, and provide examples of how to compare Bayesian results with those obtained by non-Bayesian methods based on different types of assumptions. We explain how different statistical methods can be used in complementary ways to improve the understanding of parameter estimates and their uncertainties.

  16. Methodology for completing Hanford 200 Area tank waste physical/chemical profile estimations

    International Nuclear Information System (INIS)

    Kruger, A.A.

    1996-01-01

    The purpose of the Methodology for Completing Hanford 200 Area Tank Waste Physical/Chemical Profile Estimations is to capture the logic inherent to completing 200 Area waste tank physical and chemical profile estimates. Since there has been good correlation between the estimate profiles and actual conditions during sampling and sub-segment analysis, it is worthwhile to document the current estimate methodology

  17. On the Methodology to Calculate the Covariance of Estimated Resonance Parameters

    International Nuclear Information System (INIS)

    Becker, B.; Kopecky, S.; Schillebeeckx, P.

    2015-01-01

    Principles to determine resonance parameters and their covariance from experimental data are discussed. Different methods to propagate the covariance of experimental parameters are compared. A full Bayesian statistical analysis reveals that the level to which the initial uncertainty of the experimental parameters propagates strongly depends on the experimental conditions. For high precision data, the initial uncertainties of experimental parameters, like a normalization factor, have almost no impact on the covariance of the parameters in the case of thick sample measurements and conventional uncertainty propagation or full Bayesian analysis. The covariances derived from a full Bayesian analysis and a least-squares fit are obtained under the condition that the model describing the experimental observables is perfect. When the quality of the model cannot be verified, a more conservative method based on a renormalization of the covariance matrix is recommended to fully propagate the uncertainty of experimental systematic effects. Finally, neutron resonance transmission analysis is proposed as an accurate method to validate evaluated data libraries in the resolved resonance region

  18. An optimal estimation algorithm to derive Ice and Ocean parameters from AMSR Microwave radiometer observations

    DEFF Research Database (Denmark)

    Pedersen, Leif Toudal; Tonboe, Rasmus T.; Høyer, Jacob

    Global multispectral microwave radiometer measurements have been available for several decades. However, most current sea ice concentration algorithms still take advantage of only a very limited subset of the available channels. Here we present a method that allows utilization of all available channels, as well as the combination of data from multiple sources such as microwave radiometry, scatterometry and numerical weather prediction. Optimal estimation is data assimilation without a numerical model, for retrieving physical parameters from remote sensing using a multitude of available information. The methodology is observation driven, and model innovation is limited to the translation between observation space and physical parameter space. Over open water we use a semi-empirical radiative transfer model developed by Meissner & Wentz that estimates the multispectral AMSR brightness temperatures, i...

  19. Methodology for estimating biomass energy potential and its application to Colombia

    International Nuclear Information System (INIS)

    Gonzalez-Salazar, Miguel Angel; Morini, Mirko; Pinelli, Michele; Spina, Pier Ruggero; Venturini, Mauro; Finkenrath, Matthias; Poganietz, Witold-Roger

    2014-01-01

    Highlights: • Methodology to estimate the biomass energy potential and its uncertainty at a country level. • Harmonization of approaches and assumptions in existing assessment studies. • The theoretical and technical biomass energy potential in Colombia are estimated in 2010. - Abstract: This paper presents a methodology to estimate the biomass energy potential and its associated uncertainty at a country level when quality and availability of data are limited. The current biomass energy potential in Colombia is assessed following the proposed methodology and results are compared to existing assessment studies. The proposed methodology is a bottom-up resource-focused approach with statistical analysis that uses a Monte Carlo algorithm to stochastically estimate the theoretical and the technical biomass energy potential. The paper also includes a proposed approach to quantify uncertainty combining a probabilistic propagation of uncertainty, a sensitivity analysis and a set of disaggregated sub-models to estimate reliability of predictions and reduce the associated uncertainty. Results predict a theoretical energy potential of 0.744 EJ and a technical potential of 0.059 EJ in 2010, which might account for 1.2% of the annual primary energy production (4.93 EJ)

  20. State Estimation-based Transmission line parameter identification

    Directory of Open Access Journals (Sweden)

    Fredy Andrés Olarte Dussán

    2010-01-01

    Full Text Available This article presents two state-estimation-based algorithms for identifying transmission line parameters. The identification technique used simultaneous state-parameter estimation on an artificial power system composed of several copies of the same transmission line, using measurements at different points in time. The first algorithm used active and reactive power measurements at both ends of the line. The second method used synchronised phasor voltage and current measurements at both ends. The algorithms were tested in simulated conditions on the 30-node IEEE test system. All line parameters for this system were estimated with errors below 1%.

  1. Kinetic parameter estimation from attenuated SPECT projection measurements

    International Nuclear Information System (INIS)

    Reutter, B.W.; Gullberg, G.T.

    1998-01-01

    Conventional analysis of dynamically acquired nuclear medicine data involves fitting kinetic models to time-activity curves generated from regions of interest defined on a temporal sequence of reconstructed images. However, images reconstructed from the inconsistent projections of a time-varying distribution of radiopharmaceutical acquired by a rotating SPECT system can contain artifacts that lead to biases in the estimated kinetic parameters. To overcome this problem, the authors investigated the estimation of kinetic parameters directly from projection data by modeling the data acquisition process. To accomplish this, it was necessary to parametrize the spatial and temporal distribution of the radiopharmaceutical within the SPECT field of view. In a simulated transverse slice, kinetic parameters were estimated for simple one-compartment models for three myocardial regions of interest, as well as for the liver. Myocardial uptake and washout parameters estimated by conventional analysis of noiseless simulated data had biases ranging between 1% and 63%. Parameters estimated directly from the noiseless projection data were unbiased as expected, since the model used for fitting was faithful to the simulation. Predicted uncertainties (standard deviations) of the parameters obtained for 500,000 detected events ranged between 2% and 31% for the myocardial uptake parameters and between 2% and 23% for the myocardial washout parameters

  2. Robust Parameter and Signal Estimation in Induction Motors

    DEFF Research Database (Denmark)

    Børsting, H.

    This thesis deals with theories and methods for robust parameter and signal estimation in induction motors. The project originates in industrial interests concerning sensor-less control of electrical drives. During the work, some general problems concerning estimation of signals and parameters in nonlinear systems have been exposed. The main objectives of this project are: - analysis and application of theories and methods for robust estimation of parameters in a model structure, obtained from knowledge of the physics of the induction motor. - analysis and application of theories and methods for robust estimation of the rotor speed and driving torque of the induction motor based only on measurements of stator voltages and currents. Only continuous-time models have been used, which means that physically related signals and parameters are estimated directly and not indirectly by some discrete...

  3. Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

    Science.gov (United States)

    Hoshino, Takahiro; Shigemasu, Kazuo

    2008-01-01

    The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…

  4. Modeling MEA with the CPA equation of state: A parameter estimation study adding local search to PSO algorithm

    DEFF Research Database (Denmark)

    Santos, Letícia Cotia dos; Tavares, Frederico Wanderley; Ahón, Victor Rolando Ruiz

    2015-01-01

    parameters for MEA. This work proposes adding LLE information systematically into the CPA parameter estimation procedure. First, the parameter search space is defined by the results of the PSO sensitivity analysis for VLE, considering the experimental error for vapor pressures and liquid densities (objective function cut-off). Then, two possible methodologies are discussed: the first uses all the possible parameter sets and checks them against the LLE and VLE experimental data; the second explicitly incorporates LLE information into the objective function and uses both PSO and PSO

  5. Methodology for Estimating Ingestion Dose for Emergency Response at SRS

    CERN Document Server

    Simpkins, A A

    2002-01-01

    At the Savannah River Site (SRS), emergency response models estimate dose for the inhalation and ground shine pathways. A methodology has been developed to incorporate ingestion doses into the emergency response models. The methodology follows a two-phase approach. The first phase estimates site-specific derived response levels (DRLs), which can be compared with predicted ground-level concentrations to determine if intervention is needed to protect the public. This phase uses accepted methods with little deviation from recommended guidance. The second phase uses site-specific data to estimate a 'best estimate' dose to offsite individuals from ingestion of foodstuffs. While this method deviates from recommended guidance, it is technically defensible and more realistic. As guidance is updated, these methods will also need to be updated.

  6. Estimation of fracture parameters using elastic full-waveform inversion

    KAUST Repository

    Zhang, Zhendong

    2017-08-17

    Current methodologies to characterize fractures at the reservoir scale have serious limitations in spatial resolution and suffer from uncertainties in the inverted parameters. Here, we propose to estimate the spatial distribution and physical properties of fractures using full-waveform inversion (FWI) of multicomponent surface seismic data. An effective orthorhombic medium with five clusters of vertical fractures distributed in a checkerboard fashion is used to test the algorithm. A shape regularization term is added to the objective function to improve the estimation of the fracture azimuth, which is otherwise poorly constrained. The cracks are assumed to be penny-shaped to reduce the nonuniqueness in the inverted fracture weaknesses and achieve faster convergence. To better understand the inversion results, we analyze the radiation patterns induced by the perturbations in the fracture weaknesses and orientation. Due to the high-resolution potential of elastic FWI, the developed algorithm can recover the spatial fracture distribution and identify localized “sweet spots” of intense fracturing. However, the fracture azimuth can be resolved only using long-offset data.

  7. Detecting changes in ultrasound backscattered statistics by using Nakagami parameters: Comparisons of moment-based and maximum likelihood estimators.

    Science.gov (United States)

    Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang

    2017-05-01

    The Nakagami distribution is a useful approximation to the statistics of ultrasound backscattered signals for tissue characterization. The choice of estimator may affect the Nakagami parameter in the detection of changes in backscattered statistics. In particular, the moment-based estimator (MBE) and maximum likelihood estimator (MLE) are the two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimation. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, the first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparison. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimation with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
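
    The moment-based estimator referred to above has a simple closed form, Ω = E[X²] and m = Ω²/Var(X²); a quick sketch on a synthetic Rayleigh envelope, for which the shape estimate should be close to 1:

```python
import numpy as np

def nakagami_mbe(envelope):
    """Moment-based estimator (MBE) of the Nakagami shape m and scale Omega:
    Omega = E[X^2],  m = Omega^2 / Var(X^2)."""
    x2 = envelope ** 2
    omega = x2.mean()
    m = omega ** 2 / x2.var()
    return m, omega

# Rayleigh statistics correspond to the Nakagami distribution with m = 1
rng = np.random.default_rng(5)
env = rng.rayleigh(scale=1.0, size=4000)
print(nakagami_mbe(env))   # shape estimate should be close to 1.0
```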

  8. Nonlinear parameter estimation in inviscid compressible flows in presence of uncertainties

    International Nuclear Information System (INIS)

    Jemcov, A.; Mathur, S.

    2004-01-01

    The focus of this paper is on the formulation and solution of inverse problems of parameter estimation using algorithmic differentiation. The inverse problem formulated here seeks to determine the input parameters that minimize a least squares functional with respect to certain target data. The formulation allows for uncertainty in the target data by considering the least squares functional in a stochastic basis described by the covariance of the target data. Furthermore, to allow for robust design, the formulation also accounts for uncertainties in the input parameters. This is achieved using the method of propagation of uncertainties with the directional derivatives of the output parameters with respect to the unknown parameters. The required derivatives are calculated simultaneously with the solution using generic programming, exploiting the template and operator overloading features of the C++ language. The methodology described here is general and applicable to any numerical solution procedure for any set of governing equations, but for the purpose of this paper we consider a finite volume solution of the compressible Euler equations. In particular, we illustrate the method for the case of supersonic flow in a duct with a wedge. The parameter to be determined is the inlet Mach number and the target data is the axial component of velocity at the exit of the duct. (author)
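
    The paper computes the required derivatives through operator overloading in C++; the same forward-mode idea can be sketched in Python with a minimal dual-number class. The Mach-number expression at the end is an illustrative stand-in, not the paper's Euler solver.

```python
import math

class Dual:
    """Forward-mode AD value: carries f and df/dp through the computation,
    mirroring the operator-overloading approach described in the paper."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._wrap(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = self._wrap(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sqrt(x):
    return Dual(math.sqrt(x.val), 0.5 * x.dot / math.sqrt(x.val))

M = Dual(2.0, 1.0)       # seed derivative: d(M)/d(M) = 1
out = sqrt(M * M - 1.0)  # a toy quantity depending on the inlet Mach number
print(out.val, out.dot)  # value and d(value)/dM obtained in a single pass
```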

  9. Parameter estimation and inverse problems

    CERN Document Server

    Aster, Richard C; Thurber, Clifford H

    2005-01-01

    Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web for facilitating use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...

  10. Pollen parameters estimates of genetic variability among newly ...

    African Journals Online (AJOL)

    Pollen parameters estimates of genetic variability among newly selected Nigerian roselle (Hibiscus sabdariffa L.) genotypes. ... Estimates of some pollen parameters were used to assess the genetic diversity among ...

  11. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems; therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  12. Multi-Parameter Estimation for Orthorhombic Media

    KAUST Repository

    Masmoudi, Nabil

    2015-08-19

    Building reliable anisotropy models is crucial in seismic modeling, imaging and full waveform inversion. However, estimating anisotropy parameters is often hampered by the trade-off between inhomogeneity and anisotropy. For instance, one way to estimate the anisotropy parameters is to relate them analytically to traveltimes, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2 and a parameter Δγ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. This approach has two main advantages: on the one hand, it provides a computationally efficient tool to solve the orthorhombic eikonal equation; on the other hand, it provides a mechanism to scan for the best-fitting anisotropy parameters without the need for repetitive modeling of traveltimes, because the coefficients of the traveltime expansion are independent of the perturbed parameters. Furthermore, the coefficients of the traveltime expansion provide insights on the sensitivity of the traveltime with respect to the perturbed parameters. We show the accuracy of the traveltime approximations as well as an approach for multi-parameter scanning in orthorhombic media.

  14. Bayesian inference in genetic parameter estimation of visual scores in Nellore beef-cattle

    Science.gov (United States)

    2009-01-01

    The aim of this study was to estimate the components of variance and genetic parameters for the visual scores which constitute the Morphological Evaluation System (MES), such as body structure (S), precocity (P) and musculature (M) in Nellore beef-cattle at the weaning and yearling stages, by using threshold Bayesian models. The information used for this was gleaned from visual scores of 5,407 animals evaluated at the weaning and 2,649 at the yearling stages. The genetic parameters for visual score traits were estimated through two-trait analysis, using the threshold animal model, with Bayesian statistics methodology and MTGSAM (Multiple Trait Gibbs Sampler for Animal Models) threshold software. Heritability estimates for S, P and M were 0.68, 0.65 and 0.62 (at weaning) and 0.44, 0.38 and 0.32 (at the yearling stage), respectively. Heritability estimates for S, P and M were found to be high, and so it is expected that these traits should respond favorably to direct selection. The visual scores evaluated at the weaning and yearling stages might be used in the composition of new selection indexes, as they presented sufficient genetic variability to promote genetic progress in such morphological traits. PMID:21637450

  15. Parameter and State Estimator for State Space Models

    Directory of Open Access Journals (Sweden)

    Ruifeng Ding

    2014-01-01

    This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve the system state from the state equation and substitute it into the output equation, eliminating the state variables so that the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
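
    To illustrate the elimination idea, the sketch below regresses the output of an assumed second-order single-input single-output system on its past inputs and outputs and recovers the coefficients by ordinary least squares; the recursive variant and the state-reconstruction step are not shown.

```python
import numpy as np

# y(k) = -a1*y(k-1) - a2*y(k-2) + b1*u(k-1) + b2*u(k-2): the state-free
# input-output form whose coefficients ordinary least squares can identify.
rng = np.random.default_rng(1)
a1, a2, b1, b2 = -1.5, 0.7, 1.0, 0.5  # "true" parameters (assumed)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = (-a1*y[k-1] - a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
            + 0.01 * rng.standard_normal())

Phi = np.column_stack([-y[1:-1], -y[:-2], u[1:-1], u[:-2]])  # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(theta)  # approximately [a1, a2, b1, b2]
```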

  16. Parameter Estimation in Stochastic Grey-Box Models

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2004-01-01

    An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool and proves to have better performance both in terms of quality of estimates for nonlinear systems with significant diffusion and in terms of reproducibility. In particular, the new tool provides more accurate and more consistent estimates of the parameters of the diffusion term.

  17. Kinetic parameter estimation from SPECT cone-beam projection measurements

    International Nuclear Information System (INIS)

    Huesman, Ronald H.; Reutter, Bryan W.; Zeng, G. Larry; Gullberg, Grant T.

    1998-01-01

    Kinetic parameters are commonly estimated from dynamically acquired nuclear medicine data by first reconstructing a dynamic sequence of images and subsequently fitting the parameters to time-activity curves generated from regions of interest overlaid upon the image sequence. Biased estimates can result from images reconstructed using inconsistent projections of a time-varying distribution of radiopharmaceutical acquired by a rotating SPECT system. If the SPECT data are acquired using cone-beam collimators wherein the gantry rotates so that the focal point of the collimators always remains in a plane, additional biases can arise from images reconstructed using insufficient, as well as truncated, projection samples. To overcome these problems we have investigated the estimation of kinetic parameters directly from SPECT cone-beam projection data by modelling the data acquisition process. To accomplish this it was necessary to parametrize the spatial and temporal distribution of the radiopharmaceutical within the SPECT field of view. In a simulated chest image volume, kinetic parameters were estimated for simple one-compartment models for four myocardial regions of interest. Myocardial uptake and washout parameters estimated by conventional analysis of noiseless simulated cone-beam data had biases ranging between 3-26% and 0-28%, respectively. Parameters estimated directly from the noiseless projection data were unbiased as expected, since the model used for fitting was faithful to the simulation. Statistical uncertainties of parameter estimates for 10 000 000 events ranged between 0.2-9% for the uptake parameters and between 0.3-6% for the washout parameters. (author)

  18. Traveltime approximations and parameter estimation for orthorhombic media

    KAUST Repository

    Masmoudi, Nabil

    2016-05-30

    Building anisotropy models is necessary for seismic modeling and imaging. However, anisotropy estimation is challenging due to the trade-off between inhomogeneity and anisotropy. Luckily, we can estimate the anisotropy parameters if we relate them analytically to traveltimes. Using perturbation theory, we have developed traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2, and Δχ in inhomogeneous background media. The parameter Δχ is related to Tsvankin-Thomsen notation and ensures easier computation of traveltimes in the background model. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. We have used the Shanks transform to enhance the accuracy of the formulas. A homogeneous medium simplification of the traveltime expansion provided a nonhyperbolic moveout description of the traveltime that was more accurate than other derived approximations. Moreover, the formulation provides a computationally efficient tool to solve the eikonal equation of an orthorhombic medium, without any constraints on the background model complexity. Although the expansion is based on the factorized representation of the perturbation parameters, smooth variations of these parameters (represented as effective values) provide reasonable results. Thus, this formulation provides a mechanism to estimate the three effective parameters η1, η2, and Δχ. We have derived Dix-type formulas for orthorhombic media to convert the effective parameters to their interval values.

  19. Parameter Estimation of Nonlinear Models in Forestry.

    OpenAIRE

    Fekedulegn, Desta; Mac Siúrtáin, Máirtín Pádraig; Colbert, Jim J.

    1999-01-01

    Partial derivatives of the negative exponential, monomolecular, Mitscherlich, Gompertz, logistic, Chapman-Richards, von Bertalanffy, Weibull and Richards nonlinear growth models are presented. The application of these partial derivatives in estimating the model parameters is illustrated. The parameters are estimated using the Marquardt iterative method of nonlinear regression relating top height to age of Norway spruce (Picea abies L.) from the Bowmont Norway Spruce Thinnin...

  20. Nonlinear Parameter Estimation in Microbiological Degradation Systems and Statistic Test for Common Estimation

    DEFF Research Database (Denmark)

    Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik

    1995-01-01

    Three identical microbiological experiments were carried out and analysed in order to examine the variability of the parameter estimates. The microbiological system consisted of a substrate (toluene) and a biomass (pure culture) mixed together in an aquifer medium. The degradation of the substrate and the growth of the biomass are described by the Monod model, consisting of two nonlinear coupled first-order differential equations. The objective of this study was to estimate the kinetic parameters in the Monod model and to test whether the parameters from the three identical experiments have the same values. Estimation of the parameters was obtained using an iterative maximum likelihood method and the test used was an approximative likelihood ratio test. The test showed that the three sets of parameters were identical only at the 4% significance level.
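
    A sketch of fitting the two coupled Monod equations by nonlinear least squares, which coincides with maximum likelihood under independent Gaussian errors; the parameter values, initial conditions and noise level below are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def monod_rhs(t, z, mu_max, Ks, Y):
    S, X = z
    mu = mu_max * S / (Ks + S)    # Monod specific growth rate
    return [-mu * X / Y, mu * X]  # substrate consumption, biomass growth

def residuals(theta, t_obs, S_obs, X_obs, z0):
    sol = solve_ivp(monod_rhs, (t_obs[0], t_obs[-1]), z0,
                    t_eval=t_obs, args=tuple(theta))
    return np.concatenate([sol.y[0] - S_obs, sol.y[1] - X_obs])

t_obs = np.linspace(0, 10, 25)
true = (0.5, 2.0, 0.4)  # mu_max, Ks, Y (assumed "true" values)
z0 = [10.0, 0.3]        # initial substrate and biomass
truth = solve_ivp(monod_rhs, (0, 10), z0, t_eval=t_obs, args=true)
rng = np.random.default_rng(2)
S_obs = truth.y[0] + 0.05 * rng.standard_normal(t_obs.size)
X_obs = truth.y[1] + 0.05 * rng.standard_normal(t_obs.size)

fit = least_squares(residuals, x0=[0.3, 1.0, 0.5],
                    args=(t_obs, S_obs, X_obs, z0), bounds=(1e-6, 10))
print(fit.x)  # approximately (0.5, 2.0, 0.4)
```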

  1. Parameter estimation in X-ray astronomy

    International Nuclear Information System (INIS)

    Lampton, M.; Margon, B.; Bowyer, S.

    1976-01-01

    The problems of model classification and parameter estimation are examined, with the objective of establishing the statistical reliability of inferences drawn from X-ray observations. For testing the validities of classes of models, the procedure based on minimizing the χ² statistic is recommended; it provides a rejection criterion at any desired significance level. Once a class of models has been accepted, a related procedure based on the increase of χ² gives a confidence region for the values of the model's adjustable parameters. The procedure allows the confidence level to be chosen exactly, even for highly nonlinear models. Numerical experiments confirm the validity of the prescribed technique. The χ²_min+1 error estimation method is evaluated and found unsuitable when several parameter ranges are to be derived, because it substantially underestimates their joint errors. The ratio of variances method, while formally correct, gives parameter confidence regions which are more variable than necessary
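
    The underestimation of joint errors can be made concrete: the boundary of a joint confidence region for p parameters sits at the chi-square quantile for p degrees of freedom above χ²_min, not at +1. A small check, assuming SciPy is available:

```python
from scipy.stats import chi2

# Delta-chi^2 giving a 68.3% joint confidence region on p parameters.
# For p = 1 the familiar +1 applies; for p > 1 the threshold is larger,
# which is why chi^2_min + 1 understates joint errors.
for p in (1, 2, 3):
    print(f"p={p}: delta chi^2 = {chi2.ppf(0.683, df=p):.2f}")
# p=1: 1.00, p=2: 2.30, p=3: 3.53
```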

  2. A Novel Nonlinear Parameter Estimation Method of Soft Tissues

    Directory of Open Access Journals (Sweden)

    Qianqian Tong

    2017-12-01

    The elastic parameters of soft tissues are important for medical diagnosis and virtual surgery simulation. In this study, we propose a novel nonlinear parameter estimation method for soft tissues. Firstly, an in-house data acquisition platform was used to obtain external forces and their corresponding deformation values. To provide highly precise data for estimating nonlinear parameters, the measured forces were corrected using the constructed weighted combination forecasting model based on a support vector machine (WCFM_SVM). Secondly, a tetrahedral finite element parameter estimation model was established to describe the physical characteristics of soft tissues, using the substitution parameters of Young’s modulus and Poisson’s ratio to avoid solving complicated nonlinear problems. To improve the robustness of our model and avoid poor local minima, the initial parameters solved by a linear finite element model were introduced into the parameter estimation model. Finally, a self-adapting Levenberg–Marquardt (LM) algorithm was presented, which is capable of adaptively adjusting iterative parameters to solve the established parameter estimation model. The maximum absolute error of our WCFM_SVM model was less than 0.03 Newton, resulting in more accurate forces in comparison with other correction models tested. The maximum absolute error between the calculated and measured nodal displacements was less than 1.5 mm, demonstrating that our nonlinear parameters are precise.
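
    A minimal sketch of the Levenberg–Marquardt damping adaptation the method relies on, applied to an assumed exponential toy model rather than the authors' finite element formulation.

```python
import numpy as np

def levenberg_marquardt(resid, jac, theta, lam=1e-2, n_iter=50):
    """Bare-bones LM loop with self-adapting damping: lambda shrinks after
    a successful step (trusting the Gauss-Newton model more) and grows
    after a rejected one."""
    for _ in range(n_iter):
        r, J = resid(theta), jac(theta)
        step = np.linalg.solve(J.T @ J + lam * np.eye(theta.size), -J.T @ r)
        if np.sum(resid(theta + step) ** 2) < np.sum(r ** 2):
            theta, lam = theta + step, lam * 0.5
        else:
            lam *= 2.0
    return theta

# Toy problem (assumed, not the paper's soft-tissue model): fit y = a*exp(b*x).
x = np.linspace(0, 1, 30)
y = 2.0 * np.exp(-1.3 * x)
resid = lambda th: th[0] * np.exp(th[1] * x) - y
jac = lambda th: np.column_stack([np.exp(th[1] * x),
                                  th[0] * x * np.exp(th[1] * x)])
print(levenberg_marquardt(resid, jac, np.array([1.0, 0.0])))  # ~ [2.0, -1.3]
```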

  3. Estimates for the parameters of the heavy quark expansion

    Energy Technology Data Exchange (ETDEWEB)

    Heinonen, Johannes; Mannel, Thomas [Universitaet Siegen (Germany)]

    2015-07-01

    We give improved estimates for the non-perturbative parameters appearing in the heavy quark expansion for inclusive decays. While the parameters appearing in low orders of this expansion can be extracted from data, the number of parameters in higher orders proliferates strongly, making a determination of these parameters from data impossible. Thus, one has to rely on theoretical estimates which may be obtained from an insertion of intermediate states. We refine this method and attempt to estimate the uncertainties of this approach.

  4. On robust parameter estimation in brain-computer interfacing

    Science.gov (United States)

    Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert

    2017-12-01

    Objective. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.

  5. Methodology for estimating sodium aerosol concentrations during breeder reactor fires

    International Nuclear Information System (INIS)

    Fields, D.E.; Miller, C.W.

    1985-01-01

    We have devised and applied a methodology for estimating the concentration of aerosols released at building surfaces and monitored at other building surface points. We have used this methodology to make calculations that suggest, for one air-cooled breeder reactor design, cooling will not be compromised by severe liquid-metal fires.

  6. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    Science.gov (United States)

    Rivera, Diego; Rivas, Yessica; Godoy, Alex

    2015-02-01

    Hydrological models are simplified representations of natural processes and are subject to errors. Uncertainty bounds are a commonly used way to assess the impact of input or model-architecture uncertainty on model outputs. Different sets of parameters can have equally robust goodness-of-fit indicators, which is known as equifinality. We assessed the outputs of a lumped conceptual hydrological model applied to an agricultural watershed in central Chile under strong interannual variability (coefficient of variation of 25%) by using the equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from the GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. We then analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for the Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range for the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m³ s⁻¹ after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms against the GLUE methodology, such as its lack of statistical formality, it is identified as a useful tool assisting the modeller with the identification of critical parameters.
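
    A compact sketch of the GLUE loop described above, using a toy linear "model" and a Nash–Sutcliffe threshold as the informal likelihood; the model, threshold and parameter bounds are assumptions, not values from the Chillan River study.

```python
import numpy as np

rng = np.random.default_rng(3)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common GLUE goodness-of-fit measure."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

x = np.linspace(0, 10, 100)
obs = 2.5 * x + rng.standard_normal(100)       # synthetic observations
samples = rng.uniform(0.0, 5.0, size=2000)     # Monte Carlo parameter draws
scores = np.array([nse(k * x, obs) for k in samples])
behavioural = samples[scores > 0.9]            # keep sets above a threshold
lo, hi = np.percentile(behavioural, [5, 95])   # GLUE uncertainty bounds
print(f"{behavioural.size} behavioural sets, 90% bounds: [{lo:.2f}, {hi:.2f}]")
```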

  7. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.

  8. Analyzing parameters optimisation in minimising warpage on side arm using response surface methodology (RSM)

    Science.gov (United States)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.

    2017-09-01

    This paper presents a systematic methodology to analyse the warpage of the side arm part using Autodesk Moldflow Insight software. Response Surface Methodology (RSM) was proposed to optimise the processing parameters so as to efficiently minimise the warpage of the side arm part. The variable parameters considered in this study were based on the parameters identified by previous researchers as most significantly affecting warpage, namely melt temperature, mould temperature and packing pressure, with packing time and cooling time added as commonly used parameters. The results show that warpage was improved by 10.15% and that the most significant parameter affecting warpage is packing pressure.
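
    To make the RSM step concrete, the sketch below fits a second-order response surface to hypothetical coded-factor warpage data and solves for the stationary point; the factors, coefficients and data are invented, not the paper's Moldflow results.

```python
import numpy as np

rng = np.random.default_rng(4)
p = rng.uniform(-1, 1, 40)   # coded packing pressure
T = rng.uniform(-1, 1, 40)   # coded melt temperature
w = 1.0 + 0.8*(p - 0.3)**2 + 0.5*(T + 0.2)**2 + 0.02*rng.standard_normal(40)

# Second-order RSM model: w ~ b0 + b1*p + b2*T + b3*p*T + b4*p^2 + b5*T^2
X = np.column_stack([np.ones_like(p), p, T, p*T, p**2, T**2])
beta, *_ = np.linalg.lstsq(X, w, rcond=None)

# Stationary point of the fitted surface: solve grad w = 0.
b = beta[1:3]
B = np.array([[2*beta[4], beta[3]], [beta[3], 2*beta[5]]])
print(np.linalg.solve(B, -b))  # ~ [0.3, -0.2], the warpage-minimising settings
```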

  9. Estimation of Poisson-Dirichlet Parameters with Monotone Missing Data

    Directory of Open Access Journals (Sweden)

    Xueqin Zhou

    2017-01-01

    This article considers the estimation of the unknown numerical parameters and the density of the base measure in a Poisson-Dirichlet process prior with grouped monotone missing data. The numerical parameters are estimated by the method of maximum likelihood and the density function is estimated by the kernel method. A set of simulations was conducted, which shows that the estimates perform well.

  10. Integrated cost estimation methodology to support high-performance building design

    Energy Technology Data Exchange (ETDEWEB)

    Vaidya, Prasad; Greden, Lara; Eijadi, David; McDougall, Tom [The Weidt Group, Minnetonka (United States); Cole, Ray [Axiom Engineers, Monterey (United States)

    2007-07-01

    Design teams evaluating the performance of energy conservation measures (ECMs) calculate energy savings rigorously with established modelling protocols, accounting for the interaction between various measures. However, incremental cost calculations do not have a similar rigor. Often there is no recognition of cost reductions with integrated design, nor is there assessment of cost interactions amongst measures. This lack of rigor feeds the notion that high-performance buildings cost more, creating a barrier for design teams pursuing aggressive high-performance outcomes. This study proposes an alternative integrated methodology to arrive at a lower perceived incremental cost for improved energy performance. The methodology is based on the use of energy simulations as a means towards integrated design and cost estimation. Various points along the spectrum of integration are identified and characterized by the amount of design effort invested, the scheduling of effort, and relative energy performance of the resultant design. It includes a study of the interactions between building system parameters as they relate to capital costs. Several cost interactions amongst energy measures are found to be significant. The value of this approach is demonstrated with alternatives in a case study that shows the differences between perceived costs for energy measures along various points on the integration spectrum. These alternatives show design tradeoffs and identify how decisions would have been different with a standard costing approach. Areas of further research to make the methodology more robust are identified. Policy measures to encourage the integrated approach and reduce the barriers towards improved energy performance are discussed.

  11. Application of precursor methodology in initiating frequency estimates

    International Nuclear Information System (INIS)

    Kohut, P.; Fitzpatrick, R.G.

    1991-01-01

    The precursor methodology developed in recent years provides a consistent technique to identify important accident sequence precursors. It relies on operational events (extracting information from actual experience) and infers core damage scenarios based on expected safety system responses. The ranking or categorization of each precursor is determined by considering the full spectrum of potential core damage sequences. The methodology estimates the frequency of severe core damage based on the approach suggested by Apostolakis and Mosleh, which may lead to a potential overestimation of the severe-accident sequence frequency due to the inherent dependencies between the safety systems and the initiating events. The methodology is an encompassing attempt to incorporate most of the operating information available from nuclear power plants and is an attractive tool from the point of view of risk management. In this paper, a further extension of this methodology is discussed with regard to the treatment of initiating frequency of the accident sequences

  12. Deterministic sensitivity and uncertainty methodology for best estimate system codes applied in nuclear technology

    International Nuclear Information System (INIS)

    Petruzzi, A.; D'Auria, F.; Cacuci, D.G.

    2009-01-01

    Nuclear Power Plant (NPP) technology has been developed based on the traditional defense in depth philosophy supported by deterministic and overly conservative methods for safety analysis. In the 1970s [1], conservative hypotheses were introduced for safety analyses to address existing uncertainties. Since then, intensive thermal-hydraulic experimental research has resulted in a considerable increase in knowledge and consequently in the development of best-estimate codes able to provide more realistic information about the physical behaviour and to identify the most relevant safety issues, allowing the evaluation of the existing actual margins between the results of the calculations and the acceptance criteria. However, the best-estimate calculation results from complex thermal-hydraulic system codes (like Relap5, Cathare, Athlet, Trace, etc.) are affected by unavoidable approximations that are unpredictable without the use of computational tools that account for the various sources of uncertainty. Therefore, the use of best-estimate codes (BE) within the reactor technology, either for design or safety purposes, implies understanding and accepting the limitations and the deficiencies of those codes. Taking into consideration the above framework, a comprehensive approach for utilizing quantified uncertainties arising from Integral Test Facilities (ITFs, [2]) and Separate Effect Test Facilities (SETFs, [3]) in the process of calibrating complex computer models for the application to NPP transient scenarios has been developed. The methodology proposed is capable of accommodating multiple SETFs and ITFs to learn as much as possible about uncertain parameters, allowing for the improvement of the computer model predictions based on the available experimental evidence. The proposed methodology constitutes a major step forward with respect to the generally used expert judgment and statistical methods as it permits a) to establish the uncertainties of any parameter

  13. Parameter estimation in stochastic differential equations

    CERN Document Server

    Bishwal, Jaya P N

    2008-01-01

    Parameter estimation in stochastic differential equations and stochastic partial differential equations is the science, art and technology of modelling complex phenomena and making beautiful decisions. The subject has attracted researchers from several areas of mathematics and other related fields like economics and finance. This volume presents the estimation of the unknown parameters in the corresponding continuous models based on continuous and discrete observations and examines extensively maximum likelihood, minimum contrast and Bayesian methods. The study of refined asymptotic properties of several estimators, when the observation time length is large and the observation time interval is small, is useful given the current availability of high-frequency data. Space-time white-noise-driven models, useful for spatial data, and more sophisticated non-Markovian and non-semimartingale models, such as fractional diffusions that capture long-memory phenomena, are also examined in this volume.

  14. How to fool cosmic microwave background parameter estimation

    International Nuclear Information System (INIS)

    Kinney, William H.

    2001-01-01

    With the release of the data from the Boomerang and MAXIMA-1 balloon flights, estimates of cosmological parameters based on the cosmic microwave background (CMB) have reached unprecedented precision. In this paper I show that it is possible for these estimates to be substantially biased by features in the primordial density power spectrum. I construct primordial power spectra which mimic to within cosmic variance errors the effect of changing parameters such as the baryon density and neutrino mass, meaning that even an ideal measurement would be unable to resolve the degeneracy. Complementary measurements are necessary to resolve this ambiguity in parameter estimation efforts based on CMB temperature fluctuations alone

  15. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    Directory of Open Access Journals (Sweden)

    Jonathan R Karr

    2015-05-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.

  16. A Life-Cycle Cost Estimating Methodology for NASA-Developed Air Traffic Control Decision Support Tools

    Science.gov (United States)

    Wang, Jianzhong Jay; Datta, Koushik; Landis, Michael R. (Technical Monitor)

    2002-01-01

    This paper describes the development of a life-cycle cost (LCC) estimating methodology for air traffic control Decision Support Tools (DSTs) under development by the National Aeronautics and Space Administration (NASA), using a combination of parametric, analogy, and expert opinion methods. There is no one standard methodology and technique that is used by NASA or by the Federal Aviation Administration (FAA) for LCC estimation of prospective Decision Support Tools. Some of the frequently used methodologies include bottom-up, analogy, top-down, parametric, expert judgement, and Parkinson's Law. The developed LCC estimating methodology can be visualized as a three-dimensional matrix where the three axes represent coverage, estimation, and timing. This paper focuses on the three characteristics of this methodology that correspond to the three axes.

  17. Spillover effects in epidemiology: parameters, study designs and methodological considerations

    Science.gov (United States)

    Benjamin-Chung, Jade; Arnold, Benjamin F; Berger, David; Luby, Stephen P; Miguel, Edward; Colford Jr, John M; Hubbard, Alan E

    2018-01-01

    Many public health interventions provide benefits that extend beyond their direct recipients and impact people in close physical or social proximity who did not directly receive the intervention themselves. A classic example of this phenomenon is the herd protection provided by many vaccines. If these ‘spillover effects’ (i.e. ‘herd effects’) are present in the same direction as the effects on the intended recipients, studies that only estimate direct effects on recipients will likely underestimate the full public health benefits of the intervention. Causal inference assumptions for spillover parameters have been articulated in the vaccine literature, but many studies measuring spillovers of other types of public health interventions have not drawn upon that literature. In conjunction with a systematic review we conducted of spillovers of public health interventions delivered in low- and middle-income countries, we classified the most widely used spillover parameters reported in the empirical literature into a standard notation. General classes of spillover parameters include: cluster-level spillovers; spillovers conditional on treatment or outcome density, distance or the number of treated social network links; and vaccine efficacy parameters related to spillovers. We draw on high quality empirical examples to illustrate each of these parameters. We describe study designs to estimate spillovers and assumptions required to make causal inferences about spillovers. We aim to advance and encourage methods for spillover estimation and reporting by standardizing spillover parameter nomenclature and articulating the causal inference assumptions required to estimate spillovers. PMID:29106568

  18. Estimation of Key Parameters of the Coupled Energy and Water Model by Assimilating Land Surface Data

    Science.gov (United States)

    Abdolghafoorian, A.; Farhadi, L.

    2017-12-01

    Accurate estimation of land surface heat and moisture fluxes, as well as root zone soil moisture, is crucial in various hydrological, meteorological, and agricultural applications. Field measurements of these fluxes are costly and cannot be readily scaled to large areas relevant to weather and climate studies. Therefore, there is a need for techniques to make quantitative estimates of heat and moisture fluxes using land surface state observations that are widely available from remote sensing across a range of scales. In this work, we apply the variational data assimilation approach to estimate land surface fluxes and the soil moisture profile from the implicit information contained in Land Surface Temperature (LST) and Soil Moisture (SM) observations (hereafter the VDA model). The VDA model is focused on the estimation of three key parameters: 1- neutral bulk heat transfer coefficient (CHN), 2- evaporative fraction from soil and canopy (EF), and 3- saturated hydraulic conductivity (Ksat). CHN and EF regulate the partitioning of available energy between sensible and latent heat fluxes. Ksat is one of the main parameters used in determining infiltration, runoff, groundwater recharge, and in simulating hydrological processes. In this study, a coupled system of parsimonious energy and water models constrains the estimation of the three unknown parameters in the VDA model. The profile of SM (LST) at multiple depths is estimated using the moisture diffusion (heat diffusion) equation. The uncertainties of the retrieved unknown parameters and fluxes are estimated from the inverse of the Hessian matrix of the cost function, which is computed using the Lagrangian methodology. Analysis of uncertainty provides valuable information about the accuracy of estimated parameters and their correlation, and guides the formulation of a well-posed estimation problem. The results of the proposed algorithm are validated with a series of experiments using a synthetic data set generated by the simultaneous heat and

  19. Parameter Estimation for Improving Association Indicators in Binary Logistic Regression

    Directory of Open Access Journals (Sweden)

    Mahdi Bashiri

    2012-02-01

    The aim of this paper is the estimation of binary logistic regression parameters that maximize the log-likelihood function while improving association indicators. In this paper the parameter estimation steps are explained, measures of association are introduced, and their calculation is analyzed. Moreover, new related indicators based on membership degree level are expressed. Association measures reflect the number of success responses observed against failures in a certain number of independent Bernoulli experiments. In parameter estimation, the values of existing indicators are not sensitive to the parameter values, whereas the proposed indicators are sensitive to the estimated parameters during the iterative procedure. Therefore, the innovation of this study is a new association indicator for binary logistic regression that is more sensitive to the estimated parameters while maximizing the log-likelihood in the iterative procedure.
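
    The underlying estimation step, maximising the Bernoulli log-likelihood by Newton–Raphson (equivalently, iteratively reweighted least squares), can be sketched as follows; the simulated design and coefficients are assumptions for illustration.

```python
import numpy as np

def logistic_mle(X, y, n_iter=25):
    """Newton-Raphson (IRLS) maximisation of the logistic log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))  # fitted success probabilities
        W = mu * (1.0 - mu)                   # Bernoulli variances
        grad = X.T @ (y - mu)                 # score vector
        hess = X.T @ (X * W[:, None])         # Fisher information
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(5)
X = np.column_stack([np.ones(500), rng.standard_normal(500)])
true_beta = np.array([-0.5, 1.2])
y = (rng.uniform(size=500) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
print(logistic_mle(X, y))  # approximately [-0.5, 1.2]
```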

  20. Application of best estimate and uncertainty safety analysis methodology to loss of flow events at Ontario's Power Generation's Darlington Nuclear Generating Station

    International Nuclear Information System (INIS)

    Huget, R.G.; Lau, D.K.; Luxat, J.C.

    2001-01-01

    Ontario Power Generation (OPG) is currently developing a new safety analysis methodology based on best estimate and uncertainty (BEAU) analysis. The framework and elements of the new safety analysis methodology are defined. The evolution of safety analysis technology at OPG has been thoroughly documented. Over the years, the use of conservative limiting assumptions in OPG safety analyses has led to gradual erosion of predicted safety margins. The main purpose of the new methodology is to provide a more realistic quantification of safety margins within a probabilistic framework, using best estimate results, with an integrated accounting of the underlying uncertainties. Another objective of the new methodology is to provide a cost-effective means for on-going safety analysis support of OPG's nuclear generating stations. Discovery issues and plant aging effects require that the safety analyses be periodically revised and, in the past, the cost of reanalysis at OPG has been significant. As OPG enters the new competitive marketplace for electricity, there is a strong need to conduct safety analysis in a less cumbersome manner. This paper presents the results of the first licensing application of the new methodology in support of planned design modifications to the shutdown systems (SDSs) at Darlington Nuclear Generating Station (NGS). The design modifications restore dual trip parameter coverage over the full range of reactor power for certain postulated loss-of-flow (LOF) events. The application of BEAU analysis to the single heat transport pump trip event provides a realistic estimation of the safety margins for the primary and backup trip parameters. These margins are significantly larger than those predicted by conventional limit of the operating envelope (LOE) analysis techniques. (author)

  1. Estimation of light transport parameters in biological media using ...

    Indian Academy of Sciences (India)

    Estimation of light transport parameters in biological media using coherent backscattering ... backscattered light for estimating the light transport parameters of biological media has been investigated. ... Pramana – Journal of Physics.

  2. Statistics of Parameter Estimates: A Concrete Example

    KAUST Repository

    Aguilar, Oscar; Allmaras, Moritz; Bangerth, Wolfgang; Tenorio, Luis

    2015-01-01

    Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise

  3. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Maity, Arnab; Carroll, Raymond J.

    2013-01-01

    PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus

  4. Expanded uncertainty estimation methodology in determining the sandy soils filtration coefficient

    Science.gov (United States)

    Rusanova, A. D.; Malaja, L. D.; Ivanov, R. N.; Gruzin, A. V.; Shalaj, V. V.

    2018-04-01

    A methodology for estimating the combined standard uncertainty in determining the filtration coefficient of sandy soils has been developed. Laboratory tests were carried out, yielding the filtration coefficient and an estimate of its combined uncertainty.

  5. A less field-intensive robust design for estimating demographic parameters with Mark-resight data

    Science.gov (United States)

    McClintock, B.T.; White, Gary C.

    2009-01-01

    The robust design has become popular among animal ecologists as a means for estimating population abundance and related demographic parameters with mark-recapture data. However, two drawbacks of traditional mark-recapture are financial cost and repeated disturbance to animals. Mark-resight methodology may in many circumstances be a less expensive and less invasive alternative to mark-recapture, but the models developed to date for these data have overwhelmingly concentrated only on the estimation of abundance. Here we introduce a mark-resight model analogous to that used in mark-recapture for the simultaneous estimation of abundance, apparent survival, and transition probabilities between observable and unobservable states. The model may be implemented using standard statistical computing software, but it has also been incorporated into the freeware package Program MARK. We illustrate the use of our model with mainland New Zealand Robin (Petroica australis) data collected to ascertain whether this methodology may be a reliable alternative for monitoring endangered populations of a closely related species inhabiting the Chatham Islands. We found this method to be a viable alternative to traditional mark-recapture when cost or disturbance to species is of particular concern in long-term population monitoring programs.

  6. Integrated Methodology for Estimating Water Use in Mediterranean Agricultural Areas

    Directory of Open Access Journals (Sweden)

    George C. Zalidis

    2009-08-01

    Agricultural use is by far the largest consumer of fresh water worldwide, especially in the Mediterranean, where it has reached unsustainable levels, thus posing a serious threat to water resources. Having a good estimate of the water used in an agricultural area would help water managers create incentives for water savings at the farmer and basin level, and meet the demands of the European Water Framework Directive. This work presents an integrated methodology for estimating water use in Mediterranean agricultural areas. It is based on well established methods of estimating the actual evapotranspiration through surface energy fluxes, customized for better performance under the Mediterranean conditions: small parcel sizes, detailed crop pattern, and lack of necessary data. The methodology has been tested and validated on the agricultural plain of the river Strimonas (Greece) using a time series of Terra MODIS and Landsat 5 TM satellite images, and used to produce a seasonal water use map at a high spatial resolution. Finally, a tool has been designed to implement the methodology with a user-friendly interface, in order to facilitate its operational use.

  7. METHODOLOGY FOR DETERMINING OPTIMAL EXPOSURE PARAMETERS OF A HYPERSPECTRAL SCANNING SENSOR

    Directory of Open Access Journals (Sweden)

    P. Walczykowski

    2016-06-01

    The purpose of the presented research was to establish a methodology that would allow the registration of hyperspectral images with a defined spatial resolution on a horizontal plane. The results obtained within this research could then be used to establish the optimum sensor and flight parameters for collecting aerial imagery data using a UAV or other aerial system. The methodology is based on user-selected optimal camera exposure parameters (i.e. exposure time, gain value) and flight parameters (i.e. altitude, velocity). A push-broom hyperspectral imager, the Headwall MicroHyperspec A-series VNIR, was used to conduct this research. The measurement station consisted of the following equipment: a hyperspectral camera MicroHyperspec A-series VNIR, a personal computer with HyperSpec III software, a slider system which guaranteed the stable motion of the sensor system, a white reference panel and a Siemens star, which was used to evaluate the spatial resolution. Hyperspectral images were recorded at different distances between the sensor and the target, from 5 m to 100 m. During the registration process of each acquired image, many exposure parameters were changed, such as the aperture value, exposure time and speed of the camera’s movement on the slider. Based on all of the registered hyperspectral images, dependencies between the following parameters were developed: the Ground Sampling Distance (GSD) and the distance between the sensor and the target; the speed of the camera and the distance between the sensor and the target; the exposure time and the gain value; and the Density Number and the gain value. The developed methodology allowed us to determine the speed and the altitude of an unmanned aerial vehicle on which the sensor would be mounted, ensuring that the registered hyperspectral images have the required spatial resolution.

  8. Kalman filter data assimilation: targeting observations and parameter estimation.

    Science.gov (United States)

    Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

    2014-06-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
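
    A minimal sketch of the state-augmentation idea behind such hybrid parameter estimation: append the unknown parameter to the state vector and let an extended Kalman filter track both. The scalar AR(1) model and all numerical values are assumed for illustration; the LETKF targeting scheme itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
a_true, q, r = 0.9, 0.05, 0.2  # AR(1) coefficient, process and obs. noise
x = 0.0
z = np.array([0.0, 0.5])       # augmented state [x, a]; poor initial guess for a
P = np.eye(2)
Q = np.diag([q, 1e-4])         # small artificial noise keeps 'a' adaptable
H = np.array([[1.0, 0.0]])     # only x is observed

for _ in range(500):
    x = a_true * x + np.sqrt(q) * rng.standard_normal()  # true state
    y = x + np.sqrt(r) * rng.standard_normal()           # measurement
    # EKF predict with f(x, a) = (a*x, a); F is its Jacobian wrt [x, a]
    F = np.array([[z[1], z[0]], [0.0, 1.0]])
    z = np.array([z[1] * z[0], z[1]])
    P = F @ P @ F.T + Q
    # EKF update with the scalar measurement
    S = H @ P @ H.T + r
    K = (P @ H.T) / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(z)  # second component is approximately a_true = 0.9
```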

  10. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

    The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were recorded at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed in PSIM software. From that simulation, variables were registered to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, while the estimated model offers greater flexibility than the model programmed in PSIM software.

  11. Evaluating the effects of dam breach methodologies on Consequence Estimation through Sensitivity Analysis

    Science.gov (United States)

    Kalyanapu, A. J.; Thames, B. A.

    2013-12-01

    consequence assessment for the solution to the problem statement. For the four breach methodologies, a sensitivity analysis of four breach parameters, breach side slope (SS), breach width (Wb), breach invert elevation (Elb), and time of failure (tf), is conducted. Up to 68 simulations are computed to produce breach hydrographs in HEC-RAS for input into Flood2D-GPU. The Flood2D-GPU simulation results were then post-processed in HEC-FIA to evaluate: Total Population at Risk (PAR), 14-yr and Under PAR (PAR14-), 65-yr and Over PAR (PAR65+), Loss of Life (LOL) and Direct Economic Impact (DEI). The MLM approach resulted in wide variability in the simulated minimum and maximum values of the PAR, PAR65+ and LOL estimates. For PAR14- and DEI, Froehlich (1995) resulted in lower values while MLM resulted in higher estimates. This preliminary study demonstrated the relative performance of four commonly used dam breach methodologies and their impacts on consequence estimation.

  12. Estimation of parameter sensitivities for stochastic reaction networks

    KAUST Repository

    Gupta, Ankit

    2016-01-07

    Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a continuous-time Markov chain whose states represent the molecular counts of various species. For such models, effects of parameter uncertainty are often quantified by estimating the infinitesimal sensitivities of some observables with respect to model parameters. The aim of this talk is to present a holistic approach towards this problem of estimating parameter sensitivities for stochastic reaction networks. Our approach is based on a generic formula which allows us to construct efficient estimators for parameter sensitivity using simulations of the underlying model. We will discuss how novel simulation techniques, such as tau-leaping approximations, multi-level methods etc. can be easily integrated with our approach and how one can deal with stiff reaction networks where reactions span multiple time-scales. We will demonstrate the efficiency and applicability of our approach using many examples from the biological literature.
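
    A hedged sketch of the simulation-based sensitivity estimation described here, using a birth-death network and central finite differences with common random seeds as a simple stand-in for the coupled estimators mentioned in the talk.

```python
import numpy as np

def ssa_birth_death(k_prod, k_deg, x0=0, t_end=10.0, rng=None):
    """Gillespie simulation of 0 -> X (rate k_prod), X -> 0 (rate k_deg*X);
    returns the molecular count at t_end."""
    rng = rng or np.random.default_rng()
    t, x = 0.0, x0
    while True:
        rates = np.array([k_prod, k_deg * x])
        total = rates.sum()
        t += rng.exponential(1.0 / total)  # time to next reaction
        if t > t_end:
            return x
        x += 1 if rng.uniform() * total < rates[0] else -1

# Central finite-difference sensitivity of E[X(10)] wrt k_prod, with
# common seeds across the two perturbed runs to reduce variance.
k, h, n = 2.0, 0.1, 2000
diffs = [(ssa_birth_death(k + h, 1.0, rng=np.random.default_rng(s))
          - ssa_birth_death(k - h, 1.0, rng=np.random.default_rng(s))) / (2 * h)
         for s in range(n)]
print(np.mean(diffs))  # roughly 1.0, since E[X(t)] -> k_prod / k_deg
```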

  13. Estimation of parameter uncertainty for an activated sludge model using Bayesian inference: a comparison with the frequentist method.

    Science.gov (United States)

    Zonta, Zivko J; Flotats, Xavier; Magrí, Albert

    2014-08-01

    The procedure commonly used for the assessment of the parameters included in activated sludge models (ASMs) relies on the estimation of their optimal values within a confidence region (i.e. frequentist inference). Once optimal values are estimated, parameter uncertainty is computed through the covariance matrix. However, alternative approaches based on considering the model parameters as probability distributions (i.e. Bayesian inference) may be of interest. The aim of this work is to apply (and compare) both Bayesian and frequentist inference methods when assessing uncertainty for an ASM-type model, which considers intracellular storage and biomass growth simultaneously. Practical identifiability was addressed exclusively by considering respirometric profiles based on the oxygen uptake rate, with the aid of probabilistic global sensitivity analysis. Parameter uncertainty was thus estimated according to both the Bayesian and frequentist inferential procedures. Results were compared to highlight the strengths and weaknesses of both approaches. Since it was demonstrated that Bayesian inference can be reduced to a frequentist approach under particular hypotheses, the former can be considered the more general methodology. Hence, the use of Bayesian inference is encouraged for tackling inferential issues in ASM environments.

  14. Load Estimation from Modal Parameters

    DEFF Research Database (Denmark)

    Aenlle, Manuel López; Brincker, Rune; Fernández, Pelayo Fernández

    2007-01-01

    In Natural Input Modal Analysis the modal parameters are estimated just from the responses while the loading is not recorded. However, engineers are sometimes interested in knowing some features of the loading acting on a structure. In this paper, a procedure to determine the loading from a FRF m...

  15. A variational approach to parameter estimation in ordinary differential equations

    Directory of Open Access Journals (Sweden)

    Kaschek Daniel

    2012-08-01

    Full Text Available Abstract Background Ordinary differential equations are widely used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. Results The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. Conclusions The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.

  16. A variational approach to parameter estimation in ordinary differential equations.

    Science.gov (United States)

    Kaschek, Daniel; Timmer, Jens

    2012-08-14

    Ordinary differential equations are widely used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.

  17. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all... the parameter values are optimal for simulation or prediction. The data originates from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series...

  18. Methodology for uranium resource estimates and reliability

    International Nuclear Information System (INIS)

    Blanchfield, D.M.

    1980-01-01

    The NURE uranium assessment method has evolved from a small group of geologists estimating resources on a few lease blocks, to a national survey involving an interdisciplinary system consisting of the following: (1) geology and geologic analogs; (2) engineering and cost modeling; (3) mathematics and probability theory, psychology and elicitation of subjective judgments; and (4) computerized calculations, computer graphics, and data base management. The evolution has been spurred primarily by two objectives: (1) quantification of uncertainty, and (2) elimination of simplifying assumptions. This has resulted in a tremendous data-gathering effort and the involvement of hundreds of technical experts, many in uranium geology, but many from other fields as well. The rationality of the methods is still largely based on the concept of an analog and the observation that the results are reasonable. The reliability, or repeatability, of the assessments is reasonably guaranteed by the series of peer and superior technical reviews which has been formalized under the current methodology. The optimism or pessimism of individual geologists who make the initial assessments is tempered by the review process, resulting in a series of assessments which are a consistent, unbiased reflection of the facts. Despite the many improvements over past methods, several objectives for future development remain, primarily to reduce subjectivity in utilizing factual information in the estimation of endowment, and to improve the recognition of cost uncertainties in the assessment of economic potential. The 1980 NURE assessment methodology will undoubtedly be improved, but the reader is reminded that resource estimates are and always will be a forecast for the future.

  19. Assessment of compliance with regulatory requirements for a best estimate methodology for evaluation of ECCS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Un Chul; Jang, Jin Wook; Lim, Ho Gon; Jeong, Ik [Seoul National Univ., Seoul (Korea, Republic of); Sim, Suk Ku [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2000-03-15

    The best estimate methodology for evaluation of ECCS proposed by KEPCO (KREM) uses a thermal-hydraulic best-estimate code, and the topical report for the methodology states that it meets the regulatory requirements of the USNRC regulatory guide. In this research, the assessment of compliance with regulatory requirements for the methodology is performed. The state of the licensing procedures of other countries and the best-estimate evaluation methodologies of Europe are also investigated. The applicability of the models and the propriety of the uncertainty analysis procedure of KREM are appraised, and compliance with the USNRC regulatory guide is assessed.

  20. Approximate effect of parameter pseudonoise intensity on rate of convergence for EKF parameter estimators. [Extended Kalman Filter

    Science.gov (United States)

    Hill, Bryon K.; Walker, Bruce K.

    1991-01-01

    When using parameter estimation methods based on extended Kalman filter (EKF) theory, it is common practice to assume that the unknown parameter values behave like a random process, such as a random walk, in order to guarantee their identifiability by the filter. The present work is the result of an ongoing effort to quantitatively describe the effect that the assumption of a fictitious noise (called pseudonoise) driving the unknown parameter values has on the parameter estimate convergence rate in filter-based parameter estimators. The initial approach is to examine a first-order system described by one state variable with one parameter to be estimated. The intent is to derive analytical results for this simple system that might offer insight into the effect of the pseudonoise assumption for more complex systems. Such results would make it possible to predict the estimator error convergence behavior as a function of the assumed pseudonoise intensity, and this leads to the natural application of the results to the design of filter-based parameter estimators. The results obtained show that the analytical description of the convergence behavior is very difficult.
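
    To make the setup above concrete, here is a minimal numerical sketch (not from the paper) of a filter-based parameter estimator for a first-order system: the unknown parameter is appended to the state as a random walk driven by an assumed pseudonoise intensity q_a, and an extended Kalman filter estimates the state and parameter jointly. All values are illustrative.

```python
import numpy as np

# Hypothetical first-order system x[k+1] = a*x[k] + w[k], y[k] = x[k] + v[k];
# the unknown parameter a is modeled as a random walk driven by pseudonoise
# of intensity q_a (the tuning knob discussed in the abstract above).
rng = np.random.default_rng(0)
a_true, q_x, r, q_a = 0.9, 1e-3, 1e-2, 1e-4

# simulate measurements of the true system
x, ys = 1.0, []
for _ in range(500):
    x = a_true * x + rng.normal(scale=np.sqrt(q_x))
    ys.append(x + rng.normal(scale=np.sqrt(r)))

z = np.array([0.0, 0.5])               # augmented state [x, a]; a guessed at 0.5
P = np.eye(2)
Q = np.diag([q_x, q_a])                # q_a controls the estimator's agility
H = np.array([[1.0, 0.0]])
for y in ys:
    F = np.array([[z[1], z[0]],        # Jacobian of [a*x, a] w.r.t. [x, a]
                  [0.0, 1.0]])
    z = np.array([z[1] * z[0], z[1]])  # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + r                # update
    K = (P @ H.T) / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated a:", z[1])            # should approach a_true = 0.9
```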

  1. Neglect Of Parameter Estimation Uncertainty Can Significantly Overestimate Structural Reliability

    Directory of Open Access Journals (Sweden)

    Rózsás Árpád

    2015-12-01

    Full Text Available Parameter estimation uncertainty is often neglected in reliability studies, i.e. point estimates of distribution parameters are used for representative fractiles, and in probabilistic models. A numerical example examines the effect of this uncertainty on structural reliability using Bayesian statistics. The study reveals that the neglect of parameter estimation uncertainty might lead to an order of magnitude underestimation of failure probability.

  2. Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver

    Science.gov (United States)

    Kang, Ling; Zhou, Liwei

    2018-02-01

    Abstract. The Muskingum model is an effective flood routing technique in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. In two real and frequently used case studies, the NVPNLMM obtained better values of the evaluation criteria used to compare the accuracy of flood routing across models, and its optimal estimated outflows were closer to the observed outflows than those of the other models. A minimal parameter-estimation sketch for the classical nonlinear Muskingum model follows.
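
    As a rough illustration of this class of methods, the sketch below calibrates the classical nonlinear Muskingum storage model S = K[xI + (1-x)O]^m with a generic SciPy solver instead of Excel Solver, on a synthetic hydrograph. The inflow series, "true" parameters and solver settings are all assumptions; the paper's variable-parameter formulation is more elaborate.

```python
import numpy as np
from scipy.optimize import minimize

# toy inflow hydrograph (m3/s) and routing step (h); placeholder values
I = np.array([22., 23., 35., 71., 103., 111., 109., 100., 86., 71.,
              59., 47., 39., 32., 28., 24., 22., 21.])
dt = 6.0

def route(params):
    """Route I through storage S = K*[x*I + (1-x)*O]^m, starting at steady state."""
    K, x, m = params
    S = K * I[0] ** m                  # steady state: O = I at t = 0
    O = np.empty_like(I)
    for t in range(len(I)):
        if S <= 0:                     # infeasible parameter set
            return np.full_like(I, np.nan)
        O[t] = (S / K) ** (1.0 / m) / (1 - x) - x / (1 - x) * I[t]
        S += dt * (I[t] - O[t])
    return O

O_obs = route([80.0, 0.30, 1.10])      # synthetic "observed" outflow

def sse(p):
    K, x, m = p
    if K <= 0 or not 0 < x < 0.5 or m <= 0:
        return 1e12                    # keep the solver in a feasible region
    O = route(p)
    return float(np.sum((O - O_obs) ** 2)) if np.all(np.isfinite(O)) else 1e12

res = minimize(sse, x0=[50.0, 0.2, 1.3], method="Nelder-Mead")
print("estimated K, x, m:", res.x)     # should approach 80.0, 0.30, 1.10
```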

  3. Inflation and cosmological parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, J.

    2007-05-15

    In this work, we focus on two aspects of cosmological data analysis: inference of parameter values and the search for new effects in the inflationary sector. Constraints on cosmological parameters are commonly derived under the assumption of a minimal model. We point out that this procedure systematically underestimates errors and possibly biases estimates, due to overly restrictive assumptions. In a more conservative approach, we analyse cosmological data using a more general eleven-parameter model. We find that regions of the parameter space that were previously thought ruled out are still compatible with the data; the bounds on individual parameters are relaxed by up to a factor of two, compared to the results for the minimal six-parameter model. Moreover, we analyse a class of inflation models, in which the slow roll conditions are briefly violated, due to a step in the potential. We show that the presence of a step generically leads to an oscillating spectrum and perform a fit to CMB and galaxy clustering data. We do not find conclusive evidence for a step in the potential and derive strong bounds on quantities that parameterise the step. (orig.)

  4. Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics

    Science.gov (United States)

    Kukreja, Sunil L.; Boyle, Richard D.

    2014-01-01

    Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify the parameters of this model in discrete time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered for future work.

  5. A parameter tree approach to estimating system sensitivities to parameter sets

    International Nuclear Information System (INIS)

    Jarzemba, M.S.; Sagar, B.

    2000-01-01

    A post-processing technique for determining relative system sensitivity to groups of parameters and system components is presented. It is assumed that an appropriate parametric model is used to simulate system behavior using Monte Carlo techniques and that a set of realizations of system output(s) is available. The objective of our technique is to analyze the input vectors and the corresponding output vectors (that is, post-process the results) to estimate the relative sensitivity of the output to the input parameters (taken singly and as a group) and thereby rank them. This technique differs from experimental design techniques in that a partitioning of the parameter space is not required before the simulation. A tree structure (which looks similar to an event tree) is developed to better explain the technique. Each limb of the tree represents a particular combination of parameters or a combination of system components. For convenience, and to distinguish it from the event tree, we call it the parameter tree. To construct the parameter tree, the samples of input parameter values are treated as either a '+' or a '-' based on whether the sampled parameter value is greater or less than a specified branching criterion (e.g., mean, median, percentile of the population). The corresponding system outputs are also segregated into similar bins. Partitioning the first parameter into a '+' or a '-' bin creates the first level of the tree, containing two branches. At the next level, realizations associated with each first-level branch are further partitioned into two bins using the branching criterion on the second parameter, and so on until the tree is fully populated. Relative sensitivities are then inferred from the number of samples associated with each branch of the tree. The parameter tree approach is illustrated by applying it to a number of preliminary simulations of the proposed high-level radioactive waste repository at Yucca Mountain, NV. Using a
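
    A minimal post-processing sketch of the branching idea (with an invented toy model and parameter names, not the repository simulations): realizations are split into '+'/'-' bins at each parameter's median, and relative sensitivity is read from the difference in mean output across branches.

```python
import numpy as np

rng = np.random.default_rng(1)
n, names = 10_000, ["k_flow", "porosity", "sorption"]   # hypothetical parameters
X = rng.uniform(size=(n, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)  # toy output

# first level of the parameter tree: one split per parameter, ranked by effect
for j, name in enumerate(names):
    plus = X[:, j] > np.median(X[:, j])
    print(f"{name:9s} |mean(y+) - mean(y-)| = {abs(y[plus].mean() - y[~plus].mean()):.3f}")

# second level: joint '+/-' branches for the two top-ranked parameters
b0 = X[:, 0] > np.median(X[:, 0])
b1 = X[:, 1] > np.median(X[:, 1])
for s0 in (True, False):
    for s1 in (True, False):
        branch = (b0 == s0) & (b1 == s1)
        print(f"branch ({'+' if s0 else '-'}{'+' if s1 else '-'}):",
              f"n={branch.sum()}, mean(y)={y[branch].mean():.3f}")
```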

  6. Online State Space Model Parameter Estimation in Synchronous Machines

    Directory of Open Access Journals (Sweden)

    Z. Gallehdari

    2014-06-01

    The suggested approach is evaluated for a sample synchronous machine model. Estimated parameters are tested for different inputs at different operating conditions. The effect of noise is also considered in this study. Simulation results show that the proposed approach provides good accuracy for parameter estimation.

  7. Unifying parameter estimation and the Deutsch-Jozsa algorithm for continuous variables

    International Nuclear Information System (INIS)

    Zwierz, Marcin; Perez-Delgado, Carlos A.; Kok, Pieter

    2010-01-01

    We reveal a close relationship between quantum metrology and the Deutsch-Jozsa algorithm on continuous-variable quantum systems. We develop a general procedure, characterized by two parameters, that unifies parameter estimation and the Deutsch-Jozsa algorithm. Depending on which parameter we keep constant, the procedure implements either the parameter-estimation protocol or the Deutsch-Jozsa algorithm. The parameter-estimation part of the procedure attains the Heisenberg limit and is therefore optimal. Due to the use of approximate normalizable continuous-variable eigenstates, the Deutsch-Jozsa algorithm is probabilistic. The procedure estimates a value of an unknown parameter and solves the Deutsch-Jozsa problem without the use of any entanglement.

  8. Impacts of Different Types of Measurements on Estimating Unsaturated Flow Parameters

    Science.gov (United States)

    Shi, L.

    2015-12-01

    This study evaluates the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of the EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher-accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by an initial guess that deviates from the truth. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information for inferring the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data helps to improve the parameter estimation.
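
    For orientation, this is a generic stochastic-EnKF analysis step of the kind used in such studies (a sketch, not the authors' code); the forward model that propagates the ensemble between updates is assumed to run elsewhere, and the demo values are placeholders.

```python
import numpy as np

def enkf_update(ens, obs, obs_err_std, H, seed=0):
    """One stochastic-EnKF analysis step.
    ens: (n_var, n_ens) ensemble of augmented states (parameters + states)
    obs: (n_obs,) observations; H: (n_obs, n_var) observation operator."""
    n_obs, n_ens = len(obs), ens.shape[1]
    A = ens - ens.mean(axis=1, keepdims=True)            # ensemble anomalies
    HA = H @ A
    P_yy = HA @ HA.T / (n_ens - 1) + obs_err_std**2 * np.eye(n_obs)
    P_xy = A @ HA.T / (n_ens - 1)
    K = P_xy @ np.linalg.inv(P_yy)                       # Kalman gain
    rng = np.random.default_rng(seed)                    # perturbed observations
    obs_pert = obs[:, None] + rng.normal(scale=obs_err_std, size=(n_obs, n_ens))
    return ens + K @ (obs_pert - H @ ens)

# toy demo: 3 unknowns (2 soil parameters + 1 head), only the head is observed
rng = np.random.default_rng(1)
ens = np.array([[1.0], [0.5], [2.0]]) + rng.normal(size=(3, 50))
H = np.array([[0.0, 0.0, 1.0]])
post = enkf_update(ens, np.array([2.3]), 0.05, H)
print("prior mean:", ens.mean(axis=1), "-> posterior mean:", post.mean(axis=1))
```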

  9. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.

  10. Study on uncertainty evaluation methodology related to hydrological parameter of regional groundwater flow analysis model

    International Nuclear Information System (INIS)

    Sakai, Ryutaro; Munakata, Masahiro; Ohoka, Masao; Kameya, Hiroshi

    2009-11-01

    In the safety assessment for geological disposal of radioactive waste, it is important to develop a methodology for long-term estimation of regional groundwater flow, from data acquisition to numerical analyses. The uncertainties associated with estimation of regional groundwater flow include those concerning parameters and those concerning the hydrogeological evolution. The uncertainties of parameters include measurement errors and their heterogeneity. The authors discussed the uncertainties of hydraulic conductivity as a significant parameter for regional groundwater flow analysis. This study suggests that hydraulic conductivities of rock mass are controlled by rock characteristics such as fractures and porosity and by test conditions such as hydraulic gradient, water quality and water temperature, and that hydraulic conductivity can vary by more than a factor of ten depending on test conditions such as hydraulic gradient, or on rock characteristics such as fractures and porosity. In addition, this study demonstrated that confining pressure changes caused by uplift and subsidence, and changes of hydraulic gradient under the long-term evolution of the hydrogeological environment, could produce variations of more than a factor of ten in hydraulic conductivity. It was also shown that the effect of water quality changes on hydraulic conductivity is not negligible, and that the replacement of fresh water and saline water caused by sea level change could reduce current hydraulic conductivities to about 0.6 times their value in the case of the Horonobe site. (author)

  11. Postprocessing MPEG based on estimated quantization parameters

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2009-01-01

    the case where the coded stream is not accessible, or from an architectural point of view not desirable to use, and instead estimate some of the MPEG stream parameters based on the decoded sequence. The I-frames are detected and the quantization parameters are estimated from the coded stream and used... in the postprocessing. We focus on deringing and present a scheme which aims at suppressing ringing artifacts, while maintaining the sharpness of the texture. The goal is to improve the visual quality, so perceptual blur and ringing metrics are used in addition to PSNR evaluation. The performance of the new 'pure' postprocessing compares favorably to a reference postprocessing filter which has access to the quantization parameters not only for I-frames but also for P- and B-frames...

  12. Identifiability measures to select the parameters to be estimated in a solid-state fermentation distributed parameter model.

    Science.gov (United States)

    da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G

    2016-07-08

    Process modeling can lead to advantages such as helping in process control, reducing process costs, and improving product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. Parameter estimation with a parameter identifiability analysis (PIA) is performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters under different assumptions. The results showed that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were non-identifiable, and that better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful to the estimation procedure, since it may reduce the number of parameters to be estimated. Further, PIA improved the model results, showing it to be an important step. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.
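
    A crude sketch of one common identifiability screen (an assumption about the flavor of analysis, not the paper's exact PIA): finite-difference output sensitivities give a Fisher information matrix, and near-zero sensitivities or near-±1 parameter correlations flag candidates for removal from estimation.

```python
import numpy as np

def identifiability_report(model, p0, t, dp_rel=1e-4):
    """Finite-difference sensitivities of the model output w.r.t. each
    parameter, then the parameter correlation matrix from the pseudo-inverse
    of S^T S (Fisher information under unit noise)."""
    y0 = model(p0, t)
    S = np.empty((len(y0), len(p0)))
    for j, pj in enumerate(p0):
        p = p0.copy()
        p[j] = pj * (1 + dp_rel)
        S[:, j] = (model(p, t) - y0) / (pj * dp_rel)
    F = S.T @ S
    C = np.linalg.pinv(F)               # parameter covariance (up to sigma^2)
    d = np.sqrt(np.diag(C))
    return np.linalg.norm(S, axis=0), C / np.outer(d, d)

# toy two-parameter exponential model; p = [amplitude, rate]
model = lambda p, t: p[0] * np.exp(-p[1] * t)
sens, corr = identifiability_report(model, np.array([2.0, 0.3]), np.linspace(0, 10, 50))
print("sensitivity magnitudes:", sens)
print("parameter correlations:\n", corr)
```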

  13. Evaluation of the conservativeness of the methodology for estimating earthquake-induced movements of fractures intersecting canisters

    International Nuclear Information System (INIS)

    La Pointe, Paul R.; Cladouhos, Trenton T.; Outters, Nils; Follin, Sven

    2000-04-01

    This study evaluates the parameter sensitivity and the conservativeness of the methodology outlined in TR 99-03. Sensitivity analysis focuses on understanding how variability in input parameter values impacts the calculated fracture displacements. These studies clarify what parameters play the greatest role in fracture movements, and help define critical values of these parameters in terms of canister failures. The thresholds or intervals of values that lead to a certain level of canister failure calculated in this study could be useful for evaluating future candidate sites. Key parameters include: 1. magnitude/frequency of earthquakes; 2. the distance of the earthquake from the canisters; 3. the size and aspect ratio of fractures intersecting canisters; and 4. the orientation of the fractures. The results of this study show that distance and earthquake magnitude are the most important factors, followed by fracture size. Fracture orientation is much less important. Regression relations were developed to predict induced fracture slip as a function of distance and either earthquake magnitude or slip on the earthquake fault. These regression relations were validated by using them to estimate the number of canister failures due to single damaging earthquakes at Aberg, and comparing these estimates with those presented in TR 99-03. The methodology described in TR 99-03 employs several conservative simplifications in order to devise a numerically feasible method to estimate fracture movements due to earthquakes outside of the repository over the next 100,000 years. These simplifications include: 1. fractures are assumed to be frictionless and cohesionless; 2. all energy transmitted to the fracture by the earthquake is assumed to produce elastic deformation of the fracture; no energy is diverted into fracture propagation; and 3. shielding effects of other fractures between the earthquake and the fracture are neglected. The numerical modeling effectively assumes that the

  14. Evaluation of the conservativeness of the methodology for estimating earthquake-induced movements of fractures intersecting canisters

    Energy Technology Data Exchange (ETDEWEB)

    La Pointe, Paul R.; Cladouhos, Trenton T. [Golder Associates Inc., Las Vegas, NV (United States); Outters, Nils; Follin, Sven [Golder Grundteknik KB, Stockholm (Sweden)

    2000-04-01

    This study evaluates the parameter sensitivity and the conservativeness of the methodology outlined in TR 99-03. Sensitivity analysis focuses on understanding how variability in input parameter values impacts the calculated fracture displacements. These studies clarify what parameters play the greatest role in fracture movements, and help define critical values of these parameters in terms of canister failures. The thresholds or intervals of values that lead to a certain level of canister failure calculated in this study could be useful for evaluating future candidate sites. Key parameters include: 1. magnitude/frequency of earthquakes; 2. the distance of the earthquake from the canisters; 3. the size and aspect ratio of fractures intersecting canisters; and 4. the orientation of the fractures. The results of this study show that distance and earthquake magnitude are the most important factors, followed by fracture size. Fracture orientation is much less important. Regression relations were developed to predict induced fracture slip as a function of distance and either earthquake magnitude or slip on the earthquake fault. These regression relations were validated by using them to estimate the number of canister failures due to single damaging earthquakes at Aberg, and comparing these estimates with those presented in TR 99-03. The methodology described in TR 99-03 employs several conservative simplifications in order to devise a numerically feasible method to estimate fracture movements due to earthquakes outside of the repository over the next 100,000 years. These simplifications include: 1. fractures are assumed to be frictionless and cohesionless; 2. all energy transmitted to the fracture by the earthquake is assumed to produce elastic deformation of the fracture; no energy is diverted into fracture propagation; and 3. shielding effects of other fractures between the earthquake and the fracture are neglected. The numerical modeling effectively assumes that the

  15. Composite likelihood estimation of demographic parameters

    Directory of Open Access Journals (Sweden)

    Garrigan Daniel

    2009-11-01

    Full Text Available Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic, or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequencing data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable

  16. [Methodologies for estimating the indirect costs of traffic accidents].

    Science.gov (United States)

    Carozzi, Soledad; Elorza, María Eugenia; Moscoso, Nebel Silvana; Ripari, Nadia Vanina

    2017-01-01

    Traffic accidents generate multiple costs to society, including those associated with the loss of productivity. However, there is no consensus about the most appropriate methodology for estimating those costs. The aim of this study was to review methods for estimating indirect costs applied in crash cost studies. A thematic review of the literature between 1995 and 2012 was carried out in PubMed with the terms cost of illness, indirect cost, road traffic injuries, productivity loss. Costs were assessed with the human capital method, on the basis of the wage income lost during the time of treatment and recovery of patients and caregivers. In the case of premature death or total disability, a discount rate was applied to obtain the present value of lost future earnings. The number of years was computed by subtracting from life expectancy at birth the average age of those affected who had not yet entered economically active life. The interest in minimizing the problem is reflected in the evolution of the implemented methodologies. We expect this review to be useful for efficiently estimating the real indirect costs of traffic accidents.
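
    A minimal sketch of the human capital arithmetic described above, with entirely illustrative wage, age, and discount-rate values:

```python
# Present value of future earnings lost to premature death or total
# disability: the annual wage is discounted at rate r over the years between
# the victim's age and an assumed end of working life. All values are
# placeholders for illustration only.
def lost_earnings_pv(annual_wage, age, working_life_end=65, r=0.03):
    years = max(working_life_end - age, 0)
    return sum(annual_wage / (1 + r) ** k for k in range(1, years + 1))

print(f"PV of lost earnings: {lost_earnings_pv(20_000, age=30):,.0f}")
```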

  17. A methodology to calibrate water saturation estimated from 4D seismic data

    International Nuclear Information System (INIS)

    Davolio, Alessandra; Maschio, Célio; José Schiozer, Denis

    2014-01-01

    Time-lapse seismic data can be used to estimate saturation changes within a reservoir, which is valuable information for reservoir management as it plays an important role in updating reservoir simulation models. The process of updating reservoir properties, history matching, can incorporate estimated saturation changes qualitatively or quantitatively. For quantitative approaches, reliable information from 4D seismic data is important. This work proposes a methodology to calibrate the volume of water in the estimated saturation maps, as these maps can be wrongly estimated due to problems with seismic signals (such as noise, errors associated with data processing and resolution issues). The idea is to condition the 4D seismic data to known information provided by engineering, in this case the known amount of injected and produced water in the field. The application of the proposed methodology in an inversion process (previously published) that estimates saturation from 4D seismic data is presented, followed by a discussion concerning the use of such data in a history matching process. The methodology is applied to a synthetic dataset to validate the results, the main ones being: (1) reduction of the effects of noise and errors in the estimated saturation, yielding more reliable data to be used quantitatively or qualitatively and (2) an improvement in the properties update after using this data in a history matching procedure. (paper)
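
    As a toy illustration of the conditioning idea (a simple global rescaling; the paper's calibration may be more refined), the saturation-change map is scaled so that the water volume it implies matches the volume known from injection/production records:

```python
import numpy as np

def calibrate_saturation(dSw_est, pore_vol, known_water_change):
    """Rescale a 4D-seismic-derived water-saturation-change map so the total
    water volume it implies matches the engineering-known net volume."""
    implied = np.sum(dSw_est * pore_vol)   # m3 of water implied by seismic
    return dSw_est * (known_water_change / implied)

# toy example: 4-cell map, 1e5 m3 pore volume per cell, 2.5e4 m3 net injection
dSw = np.array([0.05, 0.10, -0.02, 0.08])
dSw_cal = calibrate_saturation(dSw, np.full(4, 1e5), 2.5e4)
print(dSw_cal, "->", np.sum(dSw_cal * 1e5), "m3")
```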

  18. A Practical Approach for Parameter Identification with Limited Information

    DEFF Research Database (Denmark)

    Zeni, Lorenzo; Yang, Guangya; Tarnowski, Germán Claudio

    2014-01-01

    A practical parameter estimation procedure for a real excitation system is reported in this paper. The core algorithm is based on a genetic algorithm (GA), which estimates the parameters of a real AC brushless excitation system with limited information about the system. Practical considerations are ... parameters. The whole methodology is described and the estimation strategy is presented in this paper....

  19. Parameter estimation for an expanding universe

    Directory of Open Access Journals (Sweden)

    Jieci Wang

    2015-03-01

    Full Text Available We study the parameter estimation for excitations of Dirac fields in the expanding Robertson–Walker universe. We employ quantum metrology techniques to demonstrate the possibility for high precision estimation for the volume rate of the expanding universe. We show that the optimal precision of the estimation depends sensitively on the dimensionless mass m˜ and dimensionless momentum k˜ of the Dirac particles. The optimal precision for the ratio estimation peaks at some finite dimensionless mass m˜ and momentum k˜. We find that the precision of the estimation can be improved by choosing the probe state as an eigenvector of the Hamiltonian. This occurs because the largest quantum Fisher information is obtained by performing projective measurements implemented by the projectors onto the eigenvectors of specific probe states.

  20. Automated Land Cover Change Detection and Mapping from Hidden Parameter Estimates of Normalized Difference Vegetation Index (NDVI) Time-Series

    Science.gov (United States)

    Chakraborty, S.; Banerjee, A.; Gupta, S. K. S.; Christensen, P. R.; Papandreou-Suppappola, A.

    2017-12-01

    Multitemporal observations acquired frequently by satellites with short revisit periods, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), are an important source for modeling land cover. Due to the inherent seasonality of the land cover, harmonic modeling reveals hidden state parameters characteristic of it, which are used in classifying different land cover types and in detecting changes due to natural or anthropogenic factors. In this work, we use an eight-day MODIS composite to create a Normalized Difference Vegetation Index (NDVI) time-series of ten years. Improved hidden parameter estimates of the nonlinear harmonic NDVI model are obtained using the Particle Filter (PF), a sequential Monte Carlo estimator. The nonlinear estimation based on the PF is shown to improve parameter estimation for different land cover types compared to existing techniques that use the Extended Kalman Filter (EKF) and thus require linearization of the harmonic model. As these parameters are representative of a given land cover, their applicability to near real-time detection of land cover change is also studied by formulating a metric that captures parameter deviation due to change. The detection methodology is evaluated by treating change as a rare-class problem. This approach is shown to detect change with minimum delay. Additionally, the degree of change within the change perimeter is non-uniform. By clustering the deviation in parameters due to change, this spatial variation in change severity is effectively mapped and validated with high spatial resolution change maps of the given regions.

  1. Method for Estimating the Parameters of LFM Radar Signal

    Directory of Open Access Journals (Sweden)

    Tan Chuan-Zhang

    2017-01-01

    Full Text Available In order to obtain reliable parameter estimates, it is very important to protect the integrity of the linear frequency modulation (LFM) signal. Therefore, in practical LFM radar signal processing, the length of the data frame is often greater than the pulse width (PW) of the signal. In this condition, estimating the parameters by fractional Fourier transform (FrFT) will cause the signal-to-noise ratio (SNR) to decrease. Aiming at this problem, we multiply the data frame by a Gaussian window to improve the SNR. In addition, to further improve the precision of parameter estimation, a novel algorithm is derived via a Lagrange interpolation polynomial, and we enhance the algorithm by a logarithmic transformation. Simulation results demonstrate that the derived algorithm significantly reduces the estimation errors of chirp-rate and initial frequency.
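
    A hedged sketch of the two refinement ingredients named above, on a toy tone standing in for the dechirped LFM signal: a Gaussian window is applied, and the spectral peak is refined with a second-order Lagrange (parabolic) interpolation of the log-magnitudes; the log transform is what makes a Gaussian-windowed peak nearly parabolic. Signal and window parameters are assumptions.

```python
import numpy as np

fs, n = 1_000.0, 1_024
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 123.4 * t)      # toy tone (stand-in for the radar data)
w = np.exp(-0.5 * ((np.arange(n) - n / 2) / (n / 8)) ** 2)  # Gaussian window
X = np.abs(np.fft.rfft(x * w))

k = int(np.argmax(X))                  # coarse peak bin
a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
delta = 0.5 * (a - c) / (a - 2 * b + c)  # vertex of the interpolating parabola
print("refined frequency:", (k + delta) * fs / n, "Hz")  # close to 123.4
```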

  2. Coupling heat and chemical tracer experiments for estimating heat transfer parameters in shallow alluvial aquifers.

    Science.gov (United States)

    Wildemeersch, S; Jamin, P; Orban, P; Hermans, T; Klepikova, M; Nguyen, F; Brouyère, S; Dassargues, A

    2014-11-15

    Geothermal energy systems, closed or open, are increasingly considered for heating and/or cooling buildings. The efficiency of such systems depends on the thermal properties of the subsurface. Therefore, feasibility and impact studies performed prior to their installation should include a field characterization of thermal properties and a heat transfer model using parameter values measured in situ. However, there is a lack of in situ experiments and methodology for performing such a field characterization, especially for open systems. This study presents an in situ experiment designed for estimating heat transfer parameters in shallow alluvial aquifers, with focus on the specific heat capacity. The experiment consists of simultaneously injecting hot water and a chemical tracer into the aquifer and monitoring the evolution of groundwater temperature and concentration in the recovery well (and possibly in other piezometers located down gradient). Temperature and concentrations are then used for estimating the specific heat capacity. The first method for estimating this parameter is based on modeling the chemical tracer and temperature breakthrough curves at the recovery well in series. The second method is based on an energy balance. The values of specific heat capacity estimated by the two methods (2.30 and 2.54 MJ/m3/K) for the experimental site in the alluvial aquifer of the Meuse River (Belgium) are almost identical and consistent with values found in the literature. Temperature breakthrough curves in other piezometers are not required for estimating the specific heat capacity. However, they highlight that heat transfer in the alluvial aquifer of the Meuse River is complex and contrasted, with different dominant processes depending on depth, leading to significant vertical heat exchange between the upper and lower parts of the aquifer. Furthermore, these temperature breakthrough curves could be included in the calibration of a complex heat transfer model for
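
    As a back-of-envelope sketch of the first estimation route (modeling the tracer and heat breakthroughs in series): the thermal retardation R of the temperature peak relative to the conservative tracer peak gives, for purely advective transport, C_aq ≈ R·n·C_w. All numbers below are illustrative placeholders, not the field values.

```python
# volumetric heat capacity of water (J/m3/K), aquifer porosity, and peak
# arrival times of tracer and heat at the recovery well -- all assumed values
C_W = 4.18e6
porosity = 0.10
t_tracer, t_heat = 2.0, 11.0           # days

R = t_heat / t_tracer                  # thermal retardation of heat vs tracer
C_aq = R * porosity * C_W              # advection-only approximation
print(f"volumetric heat capacity ~ {C_aq / 1e6:.2f} MJ/m3/K")
```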

  3. Assumptions of the primordial spectrum and cosmological parameter estimation

    International Nuclear Information System (INIS)

    Shafieloo, Arman; Souradeep, Tarun

    2011-01-01

    The observables of the perturbed universe, cosmic microwave background (CMB) anisotropy and large structures depend on a set of cosmological parameters, as well as the assumed nature of primordial perturbations. In particular, the shape of the primordial power spectrum (PPS) is, at best, a well-motivated assumption. It is known that the assumed functional form of the PPS in cosmological parameter estimation can affect the best-fit-parameters and their relative confidence limits. In this paper, we demonstrate that a specific assumed form actually drives the best-fit parameters into distinct basins of likelihood in the space of cosmological parameters where the likelihood resists improvement via modifications to the PPS. The regions where considerably better likelihoods are obtained allowing free-form PPS lie outside these basins. In the absence of a preferred model of inflation, this raises a concern that current cosmological parameter estimates are strongly prejudiced by the assumed form of PPS. Our results strongly motivate approaches toward simultaneous estimation of the cosmological parameters and the shape of the primordial spectrum from upcoming cosmological data. It is equally important for theorists to keep an open mind towards early universe scenarios that produce features in the PPS. (paper)

  4. Bayesian estimation of Weibull distribution parameters

    International Nuclear Information System (INIS)

    Bacha, M.; Celeux, G.; Idee, E.; Lannoy, A.; Vasseur, D.

    1994-11-01

    In this paper, we present the SEM (Stochastic Expectation Maximization) and WLB-SIR (Weighted Likelihood Bootstrap - Sampling Importance Re-sampling) methods, which are used to estimate Weibull distribution parameters when data are heavily censored. The second method is based on Bayesian inference and allows available prior information on the parameters to be taken into account. An application of this method, with real data provided by nuclear power plant operation feedback analysis, has been carried out. (authors). 8 refs., 2 figs., 2 tabs
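
    For orientation, here is the frequentist building block that such methods elaborate on: maximum likelihood for a Weibull distribution (shape k, scale lambda) with right-censored data. SEM and WLB-SIR themselves are more involved; this sketch with synthetic data only shows the censored likelihood.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t = rng.weibull(1.5, 200) * 1000.0     # synthetic lifetimes: shape 1.5, scale 1000
cens_time = 800.0
obs = np.minimum(t, cens_time)
failed = t <= cens_time                # False = right-censored at 800 h

def neg_loglik(p):
    k, lam = p
    if k <= 0 or lam <= 0:
        return 1e12                    # keep the solver in the valid domain
    z = obs / lam
    ll_fail = np.log(k / lam) + (k - 1) * np.log(z[failed]) - z[failed] ** k
    ll_cens = -z[~failed] ** k         # log-survival for censored units
    return -(ll_fail.sum() + ll_cens.sum())

res = minimize(neg_loglik, x0=[1.0, 500.0], method="Nelder-Mead")
print("estimated shape, scale:", res.x)
```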

  5. Comparison of the DOE and the EPA risk assessment methodologies and default parameters for the air exposure pathway

    International Nuclear Information System (INIS)

    Tan, Z.; Eckart, R.

    1993-01-01

    The U.S. Department of Energy (DOE) and the U.S. Environmental Protection Agency (EPA) each publish radiological health effects risk assessment methodologies. Those methodologies take the form of computer program models or extensive documentation. This research paper examines the significant differences between the DOE and EPA methodologies and default parameters for the important air exposure pathway. The purpose of this analysis was to determine the fundamental differences in methodology and parameter values between the DOE and the EPA. The study reviewed the parameter values and default values utilized in the air exposure pathway and revealed the significant differences in risk assessment results when default values are used in the analysis of an actual site. The study details the sources and the magnitude of the parameter departures between the DOE and the EPA methodologies and their impact on dose or risk.

  6. Hyperspectral signature analysis of skin parameters

    Science.gov (United States)

    Vyas, Saurabh; Banerjee, Amit; Garza, Luis; Kang, Sewon; Burlina, Philippe

    2013-02-01

    The temporal analysis of changes in biological skin parameters, including melanosome concentration, collagen concentration and blood oxygenation, may serve as a valuable tool in diagnosing the progression of malignant skin cancers and in understanding the pathophysiology of cancerous tumors. Quantitative knowledge of these parameters can also be useful in applications such as wound assessment and point-of-care diagnostics, amongst others. We propose an approach to estimate in vivo skin parameters using a forward computational model based on Kubelka-Munk theory and the Fresnel equations. We use this model to map the skin parameters to their corresponding hyperspectral signatures. We then use machine-learning-based regression to develop an inverse map from hyperspectral signatures to skin parameters. In particular, we employ support vector machine based regression to estimate the in vivo skin parameters given their corresponding hyperspectral signature. We build on our work from SPIE 2012 and validate our methodology on an in vivo dataset. This dataset consists of 241 signatures collected from in vivo hyperspectral imaging of patients of both genders and Caucasian, Asian and African American ethnicities. We also extend our methodology past the visible region and through the short-wave infrared region of the electromagnetic spectrum. We find promising results when comparing the estimated skin parameters to the ground truth, demonstrating good agreement with well-established physiological precepts. This methodology has potential use in non-invasive skin anomaly detection and in developing minimally invasive pre-screening tools.

  7. SCoPE: an efficient method of Cosmological Parameter Estimation

    International Nuclear Information System (INIS)

    Das, Santanu; Souradeep, Tarun

    2014-01-01

    The Markov chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsically serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching that helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and the convergence of the chains is faster. Using SCoPE, we carry out cosmological parameter estimations with different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyse the cosmological parameters from two illustrative, commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis on the one hand help us to understand the workability of SCoPE better, and on the other hand provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.

  8. Bayesian parameter estimation in probabilistic risk assessment

    International Nuclear Information System (INIS)

    Siu, Nathan O.; Kelly, Dana L.

    1998-01-01

    Bayesian statistical methods are widely used in probabilistic risk assessment (PRA) because of their ability to provide useful estimates of model parameters when data are sparse and because the subjective probability framework, from which these methods are derived, is a natural framework to address the decision problems motivating PRA. This paper presents a tutorial on Bayesian parameter estimation especially relevant to PRA. It summarizes the philosophy behind these methods, approaches for constructing likelihood functions and prior distributions, some simple but realistic examples, and a variety of cautions and lessons regarding practical applications. References are also provided for more in-depth coverage of various topics
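
    A hedged illustration of the kind of Bayesian update common in PRA (not code from the paper): a conjugate gamma prior on a component failure rate combined with Poisson evidence, with assumed prior and data values.

```python
from scipy import stats

# gamma prior on the failure rate: shape a0, rate b0 (an effective prior
# exposure time in hours); both values are assumed for illustration
a0, b0 = 1.5, 40_000.0
x, T = 2, 87_600.0                     # evidence: 2 failures in ~10 unit-years

a1, b1 = a0 + x, b0 + T                # conjugate gamma-Poisson update
post = stats.gamma(a1, scale=1.0 / b1)
print(f"posterior mean rate: {post.mean():.2e} /h")
print(f"90% interval: ({post.ppf(0.05):.2e}, {post.ppf(0.95):.2e}) /h")
```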

  9. Iterative methods for distributed parameter estimation in parabolic PDE

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)

    1994-12-31

    The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the 'forward problem' is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.

  10. Combined Estimation of Hydrogeologic Conceptual Model, Parameter, and Scenario Uncertainty with Application to Uranium Transport at the Hanford Site 300 Area

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Philip D.; Ye, Ming; Rockhold, Mark L.; Neuman, Shlomo P.; Cantrell, Kirk J.

    2007-07-30

    This report to the Nuclear Regulatory Commission (NRC) describes the development and application of a methodology to systematically and quantitatively assess predictive uncertainty in groundwater flow and transport modeling that considers the combined impact of hydrogeologic uncertainties associated with the conceptual-mathematical basis of a model, model parameters, and the scenario to which the model is applied. The methodology is based on an extension of a Maximum Likelihood implementation of Bayesian Model Averaging. Model uncertainty is represented by postulating a discrete set of alternative conceptual models for a site with associated prior model probabilities that reflect a belief about the relative plausibility of each model based on its apparent consistency with available knowledge and data. Posterior model probabilities are computed and parameter uncertainty is estimated by calibrating each model to observed system behavior; prior parameter estimates are optionally included. Scenario uncertainty is represented as a discrete set of alternative future conditions affecting boundary conditions, source/sink terms, or other aspects of the models, with associated prior scenario probabilities. A joint assessment of uncertainty results from combining model predictions computed under each scenario, using the posterior model and prior scenario probabilities as weights. The uncertainty methodology was applied to modeling of groundwater flow and uranium transport at the Hanford Site 300 Area. Eight alternative models representing uncertainty in the hydrogeologic and geochemical properties as well as the temporal variability were considered. Two scenarios representing alternative future behavior of the Columbia River adjacent to the site were considered. The scenario alternatives were implemented in the models through the boundary conditions. Results demonstrate the feasibility of applying a comprehensive uncertainty assessment to large-scale, detailed groundwater flow
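
    A toy sketch of the combination arithmetic described above (illustrative numbers, not the report's models or code): posterior model probabilities are formed from calibration fits and prior probabilities, and predictions are then weighted across models and scenarios.

```python
import numpy as np

def posterior_model_probs(neg2_log_lik, prior):
    """Posterior model probabilities from -2 ln(likelihood) calibration fits
    and prior model probabilities (values below are placeholders)."""
    d = np.asarray(neg2_log_lik) - np.min(neg2_log_lik)  # stabilize the exp
    w = np.exp(-0.5 * d) * np.asarray(prior)
    return w / w.sum()

p_model = posterior_model_probs([112.0, 115.5, 118.2], prior=[1/3, 1/3, 1/3])
p_scen = np.array([0.7, 0.3])          # prior scenario probabilities
pred = np.array([[1.0, 1.4],           # mean prediction per model x scenario
                 [1.2, 1.5],
                 [0.9, 1.3]])
joint_mean = p_model @ pred @ p_scen   # model- and scenario-weighted prediction
print("model probabilities:", p_model, " joint predictive mean:", joint_mean)
```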

  11. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that determine the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables faster and better parameter estimation of systems biology models.

  12. Time varying acceleration coefficients particle swarm optimisation (TVACPSO): A new optimisation algorithm for estimating parameters of PV cells and modules

    International Nuclear Information System (INIS)

    Jordehi, Ahmad Rezaee

    2016-01-01

    Highlights: • A modified PSO has been proposed for parameter estimation of PV cells and modules. • In the proposed modified PSO, acceleration coefficients are changed during the run. • The proposed modified PSO mitigates the premature convergence problem. • The parameter estimation problem has been solved for both PV cells and PV modules. • The results show that the proposed PSO outperforms other state-of-the-art algorithms. - Abstract: Estimating circuit model parameters of PV cells/modules represents a challenging problem. The PV cell/module parameter estimation problem is typically translated into an optimisation problem and solved by metaheuristic optimisation algorithms. Particle swarm optimisation (PSO) is a popular and well-established optimisation algorithm. Despite all its advantages, PSO suffers from the premature convergence problem, meaning that it may get trapped in local optima. Personal and social acceleration coefficients are two control parameters that, due to their effect on explorative and exploitative capabilities, play important roles in the computational behaviour of PSO. In this paper, in an attempt toward premature convergence mitigation in PSO, its personal acceleration coefficient is decreased during the course of the run, while its social acceleration coefficient is increased. In this way, an appropriate trade-off between the explorative and exploitative capabilities of PSO is established during the course of the run, and the premature convergence problem is significantly mitigated. The results clearly show that in parameter estimation of PV cells and modules, the proposed time-varying acceleration coefficients PSO (TVACPSO) offers more accurate parameters than conventional PSO, the teaching-learning-based optimisation (TLBO) algorithm, the imperialistic competitive algorithm (ICA), grey wolf optimisation (GWO), the water cycle algorithm (WCA), pattern search (PS) and the Newton algorithm. For validation of the proposed methodology, parameter estimation has been done both for
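
    A minimal sketch of the time-varying acceleration coefficients idea: c1 ramps down and c2 ramps up over the run, shifting PSO from exploration toward exploitation. The objective here is a generic test function standing in for the PV-model residual, and the ramp endpoints and other settings are assumptions, not values from the paper.

```python
import numpy as np

def tvacpso(f, bounds, n_part=30, n_iter=200, w=0.7,
            c1_range=(2.5, 0.5), c2_range=(0.5, 2.5), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_part, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pval)]
    for it in range(n_iter):
        frac = it / (n_iter - 1)
        c1 = c1_range[0] + frac * (c1_range[1] - c1_range[0])  # exploration fades
        c2 = c2_range[0] + frac * (c2_range[1] - c2_range[0])  # exploitation grows
        r1, r2 = rng.uniform(size=(2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

# stand-in objective (sphere function shifted to 1.2 in every dimension)
sphere = lambda p: float(np.sum((p - 1.2) ** 2))
best, best_val = tvacpso(sphere, (np.full(5, -5.0), np.full(5, 5.0)))
print("best point:", best, " objective:", best_val)
```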

  13. Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.

    Science.gov (United States)

    Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B

    2005-06-01

    This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment-model-dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and Visual Basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained with those from a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated with the BA method were similar to those of the NONMEM estimation.

  14. Prototype application of best estimate and uncertainty safety analysis methodology to large LOCA analysis

    International Nuclear Information System (INIS)

    Luxat, J.C.; Huget, R.G.

    2001-01-01

    Development of a methodology to perform best estimate and uncertainty nuclear safety analysis has been underway at Ontario Power Generation for the past two and a half years. A key driver for the methodology development, and one of the major challenges faced, is the need to re-establish demonstrated safety margins that have progressively been undermined through excessive and compounding conservatism in deterministic analyses. The major focus of the prototyping applications was to quantify the safety margins that exist over the probable range of high power operating conditions, rather than at the highly improbable operating states associated with Limit of the Envelope (LOE) assumptions. In LOE, all parameters of significance to the consequences of a postulated accident are assumed to deviate simultaneously to their limiting values. Another equally important objective of the prototyping was to demonstrate the feasibility of conducting safety analysis as an incremental analysis activity, as opposed to a major re-analysis activity. The prototype analysis employed solely prior analyses of Bruce B large break LOCA events - no new computer simulations were undertaken. This is a significant and novel feature of the prototyping work. This methodology framework has been applied to a postulated large break LOCA in a Bruce generating unit on a prototype basis. This paper presents results of the application. (author)

  15. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    Science.gov (United States)

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…

  16. Estimating physiological skin parameters from hyperspectral signatures

    Science.gov (United States)

    Vyas, Saurabh; Banerjee, Amit; Burlina, Philippe

    2013-05-01

    We describe an approach for estimating human skin parameters, such as melanosome concentration, collagen concentration, oxygen saturation, and blood volume, using hyperspectral radiometric measurements (signatures) obtained from in vivo skin. We use a computational model based on Kubelka-Munk theory and the Fresnel equations. This model forward-maps the skin parameters to a corresponding multiband reflectance spectrum. Machine-learning-based regression is used to generate the inverse map, and hence estimate skin parameters from hyperspectral signatures. We test our methods using synthetic and in vivo skin signatures obtained in the visible through the short wave infrared domains from 24 patients of both genders and Caucasian, Asian, and African American ethnicities. Performance validation shows promising results: good agreement with the ground truth and well-established physiological precepts. These methods have potential use in the characterization of skin abnormalities and in minimally invasive prescreening of malignant skin cancers.
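
    A rough sketch of the inverse-mapping step described above: simulate spectra with a forward model, then regress parameters on spectra. The forward map below is a made-up smooth stand-in for the Kubelka-Munk/Fresnel computation, and the random forest is an assumed choice of regressor (the abstract only says machine-learning-based regression).

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        def forward_model(p, wavelengths):
            # Stand-in for the Kubelka-Munk/Fresnel forward map: skin parameters
            # -> reflectance spectrum. Any smooth parameter-dependent spectrum
            # suffices to illustrate the regression step.
            melanin, collagen, oxy, blood = p
            return (0.6 - 0.3 * melanin * np.exp(-wavelengths / 600.0)
                    + 0.1 * collagen * np.sin(wavelengths / 200.0)
                    - 0.2 * blood * (1 - oxy)
                    * np.exp(-((wavelengths - 560) / 40.0) ** 2))

        wl = np.linspace(450, 1700, 120)                 # visible through SWIR, nm
        params = rng.uniform(0.0, 1.0, size=(500, 4))    # sampled skin parameters
        spectra = np.array([forward_model(p, wl) for p in params])

        # Learn the inverse map: reflectance spectrum -> skin parameters.
        inverse = RandomForestRegressor(n_estimators=100).fit(spectra, params)
        estimate = inverse.predict(spectra[:1])          # estimated parameters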

  17. Improved estimation of the noncentrality parameter distribution from a large number of t-statistics, with applications to false discovery rate estimation in microarray data analysis.

    Science.gov (United States)

    Qu, Long; Nettleton, Dan; Dekkers, Jack C M

    2012-12-01

    Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations.

  18. A software for parameter estimation in dynamic models

    Directory of Open Access Journals (Sweden)

    M. Yuceer

    2008-12-01

    A common problem in dynamic systems is to determine parameters in an equation used to represent experimental data. The goal is to determine the values of model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software lacks generality, while other software does not provide ease of use. A user-interactive parameter estimation software was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software (PARES) has been developed in the MATLAB environment. When tested with extensive example problems from the literature, the suggested approach is proven to provide good agreement between predicted and observed data with relatively little computing time and few iterations.

  19. Nonlinear adaptive control system design with asymptotically stable parameter estimation error

    Science.gov (United States)

    Mishkov, Rumen; Darmonski, Stanislav

    2018-01-01

    The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and the capability to directly control the transient response time of the estimates. The proposed method modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation. It relies on the system property of parametric identifiability introduced here. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied to nonlinear adaptive speed tracking vector control of a three-phase induction motor.

  20. A self-organizing state-space-model approach for parameter estimation in hodgkin-huxley-type models of single neurons.

    Directory of Open Access Journals (Sweden)

    Dimitrios V Vavoulis

    Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm), often in combination with a local search method (such as gradient descent), in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied on a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents, measurement and intrinsic noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy, we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function and it was straightforward to apply on compartmental models and multiple data sets. Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a

  1. Nuclear data adjustment methodology utilizing resonance parameter sensitivities and uncertainties

    International Nuclear Information System (INIS)

    Broadhead, B.L.

    1983-01-01

    This work presents the development and demonstration of a Nuclear Data Adjustment Method that allows inclusion of both energy and spatial self-shielding into the adjustment procedure. The resulting adjustments are for the basic parameters (i.e. resonance parameters) in the resonance regions and for the group cross sections elsewhere. The majority of this development effort concerns the production of resonance parameter sensitivity information which allows the linkage between the responses of interest and the basic parameters. The resonance parameter sensitivity methodology developed herein usually provides accurate results when compared to direct recalculations using existing and well-known cross section processing codes. However, it has been shown in several cases that self-shielded cross sections can be very non-linear functions of the basic parameters. For this reason caution must be used in any study which assumes that a linear relationship exists between a given self-shielded group cross section and its corresponding basic data parameters. The study has also pointed out the need for more approximate techniques which will allow the required sensitivity information to be obtained in a more cost effective manner

  2. Novel Method for 5G Systems NLOS Channels Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Vladeta Milenkovic

    2017-01-01

    For the development of new 5G systems to operate in mm bands, there is a need for accurate radio propagation modelling at these bands. In this paper, a novel approach to NLOS channel parameter estimation is presented. Estimation is performed based on the LCR performance measure, which enables propagation parameters to be estimated in real time and avoids the weaknesses of ML and moment-method estimation approaches.

  3. Methodology used in IRSN nuclear accident cost estimates in France

    International Nuclear Information System (INIS)

    2015-01-01

    This report describes the methodology used by IRSN to estimate the cost of potential nuclear accidents in France. It concerns possible accidents involving pressurized water reactors leading to radioactive releases into the environment. These accidents have been grouped into two accident families: severe accidents and major accidents. Two model scenarios have been selected to represent each of these families. The report discusses the general methodology of nuclear accident cost estimation. The crucial point is that all costs should be considered: if not, the cost is underestimated, which can have negative consequences for the value attributed to safety and for crisis preparation. As a result, the overall cost comprises many components: the best known is offsite radiological costs, but there are many others. The proposed estimates have thus required a diversity of methods, which are described in this report. Figures are presented at the end of this report. Among other things, they show that purely radiological costs represent only a non-dominant part of foreseeable economic consequences. (authors)

  4. Application of isotopic information for estimating parameters in Philip infiltration model

    Directory of Open Access Journals (Sweden)

    Tao Wang

    2016-10-01

    Minimizing parameter uncertainty is crucial in the application of hydrologic models. Isotopic information in various hydrologic components of the water cycle can expand our knowledge of the dynamics of water flow in the system, provide additional information for parameter estimation, and improve parameter identifiability. This study combined the Philip infiltration model with an isotopic mixing model using an isotopic mass balance approach for estimating parameters in the Philip infiltration model. Two approaches to parameter estimation were compared: (a) using isotopic information to determine the soil water transmission and then hydrologic information to estimate the soil sorptivity, and (b) using hydrologic information to determine both the soil water transmission and the soil sorptivity. Results of parameter estimation were verified through a rainfall infiltration experiment in a laboratory under rainfall with constant isotopic compositions and uniform initial soil water content conditions. Experimental results showed that approach (a), using isotopic and hydrologic information, estimated the soil water transmission in the Philip infiltration model in a manner that matched measured values well. The results of parameter estimation with approach (a) were better than those of approach (b). It was also found that the analytical precision of hydrogen and oxygen stable isotopes had a significant effect on parameter estimation using isotopic information.
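
    For concreteness, the Philip model writes cumulative infiltration as I(t) = S*sqrt(t) + A*t, with sorptivity S and a transmission term A. Below is a minimal sketch of the two estimation routes; the observations and the isotopically derived value of A are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        def philip(t, S, A):
            # Philip infiltration model: cumulative infiltration I(t) = S*sqrt(t) + A*t,
            # with sorptivity S and a soil water transmission term A.
            return S * np.sqrt(t) + A * t

        # Hypothetical cumulative-infiltration observations (cm) at times t (h).
        t = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
        I = np.array([0.9, 1.5, 2.2, 3.3, 5.1, 8.4])

        # Approach (b): both parameters from hydrologic data alone.
        (S_b, A_b), _ = curve_fit(philip, t, I, p0=(1.0, 1.0))

        # Approach (a): fix A from isotopic mass-balance information, then
        # estimate only the sorptivity S from the infiltration record.
        A_iso = 1.6  # assumed value obtained from the isotopic mixing model
        (S_a,), _ = curve_fit(lambda t, S: philip(t, S, A_iso), t, I, p0=(1.0,))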

  5. On the estimation of water pure compound parameters in association theories

    DEFF Research Database (Denmark)

    Grenner, Andreas; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2007-01-01

    Determination of the appropriate number of association sites and estimation of parameters for association (SAFT-type) theories is not a trivial matter. Building further on a recently published manuscript by Clark et al., this work investigates aspects of the parameter estimation for water using two different association theories. Their performance for various properties as well as against the results presented earlier is demonstrated.

  6. Methodology for estimating human perception to tremors in high-rise buildings

    Science.gov (United States)

    Du, Wenqi; Goh, Key Seng; Pan, Tso-Chien

    2017-07-01

    Human perception to tremors during earthquakes in high-rise buildings is usually associated with psychological discomfort such as fear and anxiety. This paper presents a methodology for estimating the level of perception to tremors for occupants living in high-rise buildings subjected to ground motion excitations. Unlike other approaches based on empirical or historical data, the proposed methodology performs a regression analysis using the analytical results of two generic models of 15 and 30 stories. The recorded ground motions in Singapore are collected and modified for structural response analyses. Simple predictive models are then developed to estimate the perception level to tremors based on a proposed ground motion intensity parameter—the average response spectrum intensity in the period range between 0.1 and 2.0 s. These models can be used to predict the percentage of occupants in high-rise buildings who may perceive the tremors at a given ground motion intensity. Furthermore, the models are validated with two recent tremor events reportedly felt in Singapore. It is found that the estimated results match reasonably well with the reports in the local newspapers and from the authorities. The proposed methodology is applicable to urban regions where people living in high-rise buildings might feel tremors during earthquakes.

  7. The Response Surface Methodology speeds up the search for optimal parameters in the photoinactivation of E. coli by Photodynamic Therapy.

    Science.gov (United States)

    Amaral, Larissa S; Azevedo, Eduardo B; Perussi, Janice R

    2018-02-27

    Antimicrobial Photodynamic Inactivation (a-PDI) is based on the oxidative destruction of biological molecules by reactive oxygen species generated by the photo-excitation of a photosensitive molecule. When a-PDI is performed along with the use of mathematical models, the optimal conditions for maximum inactivation are easily found. Experimental designs allow a multivariate analysis of the experimental parameters; in contrast, the search is usually conducted with a univariate approach, which demands a large number of experiments and consumes time and money. This paper presents the use of the response surface methodology for improving the search for the best conditions to reduce E. coli survival levels by a-PDI using methylene blue (MB) and toluidine blue (TB) as photosensitizers and white light. The goal was achieved by analyzing the effects and interactions of the three main parameters involved in the process: incubation time (IT), photosensitizer concentration (C_PS), and light dose (LD). The optimization procedure began with a full 2³ factorial design, followed by a central composite one, in which the optimal conditions were estimated. For MB, C_PS was the most important parameter followed by LD and IT, whereas for TB, the main parameter was LD followed by C_PS and IT. Using the estimated optimal conditions for inactivation, MB was able to inactivate 99.999999% CFU mL⁻¹ of E. coli with an IT of 28 min, an LD of 31 J cm⁻², and a C_PS of 32 μmol L⁻¹, while TB required 18 min, 39 J cm⁻², and 37 μmol L⁻¹. The feasibility of using the response surface methodology with a-PDI was demonstrated, enabling enhanced photoinactivation efficiency and fast results with a minimal number of experiments.
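
    A minimal sketch of the statistical machinery: a face-centred central composite design in the three coded factors, a full quadratic response surface fitted by least squares, and a grid search over the fitted surface. The design runs and responses below are invented for illustration and do not reproduce the paper's measurements.

        import numpy as np

        # Coded levels for the three factors (IT, C_PS, LD): a 2^3 factorial,
        # six face-centred axial runs, and a centre point.
        factorial = [[i, j, k] for i in (-1, 1) for j in (-1, 1) for k in (-1, 1)]
        axial = [[-1, 0, 0], [1, 0, 0], [0, -1, 0], [0, 1, 0], [0, 0, -1], [0, 0, 1]]
        X = np.array(factorial + axial + [[0, 0, 0]], dtype=float)
        y = np.array([2.1, 3.0, 3.4, 4.6, 3.2, 4.1, 4.9, 6.0,
                      3.1, 4.4, 3.5, 4.8, 3.6, 4.7, 4.5])   # hypothetical responses

        def quad_terms(x):
            # Full quadratic model: intercept, linear, interaction, square terms.
            a, b, c = x
            return [1, a, b, c, a * b, a * c, b * c, a * a, b * b, c * c]

        A = np.array([quad_terms(row) for row in X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # least-squares surface fit

        # Locate the optimum of the fitted surface on a grid of coded settings.
        grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 21)] * 3), -1).reshape(-1, 3)
        best = grid[np.argmax([np.dot(quad_terms(g), coef) for g in grid])]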

  8. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  9. Modular Estimation Strategy of Vehicle Dynamic Parameters for Motion Control Applications

    Directory of Open Access Journals (Sweden)

    Rawash Mustafa

    2018-01-01

    The presence of motion control or active safety systems in vehicles has become increasingly important for improving vehicle performance and handling and for negotiating dangerous driving situations. The performance of such systems would be improved if combined with knowledge of vehicle dynamic parameters. Since some of these parameters are difficult to measure, due to technical or economic reasons, estimation of those parameters might be the only practical alternative. In this paper, an estimation strategy for important vehicle dynamic parameters, pertaining to motion control applications, is presented. The estimation strategy is of a modular structure such that each module is concerned with estimating a single vehicle parameter. Estimated parameters include longitudinal, lateral, and vertical tire forces; longitudinal velocity; and vehicle mass. The advantage of this strategy is its independence from tire parameters or wear, road surface condition, and vehicle mass variation. Also, because of its modular structure, each module could later be updated or exchanged for a more effective one. Results from simulations on a 14-DOF vehicle model are provided here to validate the strategy and show its robustness and accuracy.

  10. Recovering Parameters of Johnson's S_B Distribution

    Science.gov (United States)

    Bernard R. Parresol

    2003-01-01

    A new parameter recovery model for Johnson's S_B distribution is developed. This latest alternative approach permits recovery of the range and both shape parameters. Previous models recovered only the two shape parameters. Also, a simple procedure for estimating the distribution minimum from sample values is presented. The new methodology...

  11. Estimation of object motion parameters from noisy images.

    Science.gov (United States)

    Broida, T J; Chellappa, R

    1986-01-01

    An approach is presented for the estimation of object motion parameters based on a sequence of noisy images. The problem considered is that of a rigid body undergoing unknown rotational and translational motion. The measurement data consists of a sequence of noisy image coordinates of two or more object correspondence points. By modeling the object dynamics as a function of time, estimates of the model parameters (including motion parameters) can be extracted from the data using recursive and/or batch techniques. This permits a desired degree of smoothing to be achieved through the use of an arbitrarily large number of images. Some assumptions regarding object structure are presently made. Results are presented for a recursive estimation procedure: the case considered here is that of a sequence of one-dimensional images of a two-dimensional object. Thus, the object moves in one transverse dimension, and in depth, preserving the fundamental ambiguity of the central projection image model (loss of depth information). An iterated extended Kalman filter is used for the recursive solution. Noise levels of 5-10 percent of the object image size are used. Approximate Cramer-Rao lower bounds are derived for the model parameter estimates as a function of object trajectory and noise level. This approach may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images (10 to 20 or more) are available.

  12. A methodology for the synthesis of heat exchanger networks having large numbers of uncertain parameters

    International Nuclear Information System (INIS)

    Novak Pintarič, Zorka; Kravanja, Zdravko

    2015-01-01

    This paper presents a robust computational methodology for the synthesis and design of flexible HENs (Heat Exchanger Networks) having large numbers of uncertain parameters. The methodology combines several heuristic methods which progressively lead to a flexible HEN design at a specific level of confidence. During the first step, a HEN topology is generated under nominal conditions, followed by determination of the points critical for flexibility. A significantly reduced multi-scenario model for flexible HEN design is formulated at the nominal point with the flexibility constraints at the critical points. The optimal design obtained is tested by stochastic Monte Carlo optimization and by the flexibility index, through solving one-scenario problems within a loop. The presented methodology is novel in the enormous reduction of the number of scenarios in HEN design problems and of the computational effort. Despite several simplifications, the capability of designing flexible HENs with large numbers of uncertain parameters, which are typical throughout industry, is not compromised. An illustrative case study is presented for flexible HEN synthesis comprising 42 uncertain parameters. - Highlights: • A methodology for HEN (Heat Exchanger Network) design under uncertainty is presented. • The main benefit is solving HENs having large numbers of uncertain parameters. • A drastically reduced multi-scenario HEN design problem is formulated through several steps. • Flexibility of the HEN is guaranteed at a specific level of confidence.

  13. State and parameter estimation in biotechnical batch reactors

    NARCIS (Netherlands)

    Keesman, K.J.

    2000-01-01

    In this paper the problem of state and parameter estimation in biotechnical batch reactors is considered. Models describing the biotechnical process behaviour are usually nonlinear with time-varying parameters. Hence, the resulting large dimensions of the augmented state vector, roughly > 7, in

  14. Parameter Estimation of Damped Compound Pendulum Using Bat Algorithm

    Directory of Open Access Journals (Sweden)

    Saad Mohd Sazli

    2016-01-01

    In this study, parameter identification of the damped compound pendulum system is proposed using one of the most promising nature-inspired algorithms, the Bat Algorithm (BA). The procedure used to achieve the parameter identification of the experimental system consists of input-output data collection, ARX model order selection and parameter estimation using the bat algorithm (BA) method. A PRBS signal is used as the input signal to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the autoregressive with exogenous input (ARX) model. The performance of the model is validated using the mean squared error (MSE) between the actual and predicted output responses of the models. Finally, a comparative study is conducted between BA and a conventional estimation method (i.e., Least Square). Based on the results obtained, the MSE produced by the Bat Algorithm (BA) outperformed that of the Least Square (LS) method.
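
    As a reference point for the comparison above, the conventional least-squares route stacks lagged inputs and outputs into a regressor matrix and solves for the ARX coefficients directly; the bat algorithm instead searches the same coefficient space to minimise the output MSE. Below is a sketch of the least-squares baseline on synthetic data (the true coefficients are invented).

        import numpy as np

        def arx_least_squares(u, y, na=2, nb=2):
            # Estimate ARX model y(k) = -a1*y(k-1)-...-a_na*y(k-na)
            #                          + b1*u(k-1)+...+b_nb*u(k-nb)
            # by ordinary least squares over the stacked regressors.
            n = max(na, nb)
            rows, targets = [], []
            for k in range(n, len(y)):
                row = [-y[k - i] for i in range(1, na + 1)]
                row += [u[k - i] for i in range(1, nb + 1)]
                rows.append(row)
                targets.append(y[k])
            theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
            return theta  # [a1..a_na, b1..b_nb]

        # Hypothetical PRBS-like input and a simulated second-order response.
        rng = np.random.default_rng(1)
        u = rng.choice([-1.0, 1.0], size=500)
        y = np.zeros(500)
        for k in range(2, 500):
            y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.3 * u[k-2]
        theta = arx_least_squares(u, y)   # ~ [-1.5, 0.7, 0.5, 0.3]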

  15. Global parameter estimation for thermodynamic models of transcriptional regulation.

    Science.gov (United States)

    Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N

    2013-07-15

    Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort.

  16. Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix

    International Nuclear Information System (INIS)

    Yamamoto, A.; Yasue, Y.; Endo, T.; Kodama, Y.; Ohoka, Y.; Tatsumi, M.

    2012-01-01

    An uncertainty estimation method for core safety parameters, for which measurement values are not obtained, is proposed. We empirically observe correlations among the prediction errors of core safety parameters, e.g., a correlation between the control rod worth and the relative power of the assembly at the corresponding position. Correlations of uncertainties among core safety parameters are theoretically estimated using the covariance of cross sections and sensitivity coefficients for core parameters. The estimated correlations among core safety parameters are verified through the direct Monte-Carlo sampling method. Once the correlation of uncertainties among core safety parameters is known, we can estimate the uncertainty of a safety parameter for which a measurement value is not obtained. Furthermore, the correlations can also be used for the reduction of uncertainties of core safety parameters. (authors)

  17. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  18. Optimal design criteria - prediction vs. parameter estimation

    Science.gov (United States)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that in practice the G-optimal design cannot really be found with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield fundamentally different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.

  19. A statistical methodology for quantification of uncertainty in best estimate code physical models

    International Nuclear Information System (INIS)

    Vinai, Paolo; Macian-Juan, Rafael; Chawla, Rakesh

    2007-01-01

    A novel uncertainty assessment methodology, based on a statistical non-parametric approach, is presented in this paper. It achieves quantification of code physical model uncertainty by making use of model performance information obtained from studies of appropriate separate-effect tests. Uncertainties are quantified in the form of estimated probability density functions (pdf's), calculated with a newly developed non-parametric estimator. The new estimator objectively predicts the probability distribution of the model's 'error' (its uncertainty) from databases reflecting the model's accuracy on the basis of available experiments. The methodology is completed by applying a novel multi-dimensional clustering technique based on the comparison of model error samples with the Kruskal-Wallis test. This takes into account the fact that a model's uncertainty depends on system conditions, since a best estimate code can give predictions for which the accuracy is affected by the regions of the physical space in which the experiments occur. The final result is an objective, rigorous and accurate manner of assigning uncertainty to code physical models, i.e. the input information needed by code uncertainty propagation methodologies used for assessing the accuracy of best estimate codes in nuclear systems analysis. The new methodology has been applied to the quantification of the uncertainty in the RETRAN-3D void model and then used in the analysis of an independent separate-effect experiment. This has clearly demonstrated the basic feasibility of the approach, as well as its advantages in yielding narrower uncertainty bands in quantifying the code's accuracy for void fraction predictions

  20. Parameter estimation of Lorenz chaotic system using a hybrid swarm intelligence algorithm

    International Nuclear Information System (INIS)

    Lazzús, Juan A.; Rivera, Marco; López-Caraballo, Carlos H.

    2016-01-01

    A novel hybrid swarm intelligence algorithm for chaotic system parameter estimation is presented. For this purpose, parameter estimation for the Lorenz system is formulated as a multidimensional problem, and a hybrid approach based on particle swarm optimization with ant colony optimization (PSO–ACO) is implemented to solve this problem. Firstly, the performance of the proposed PSO–ACO algorithm is tested on a set of three representative benchmark functions, and the impact of the parameter settings on PSO–ACO efficiency is studied. Secondly, the parameter estimation is converted into an optimization problem on the three-dimensional Lorenz system. Numerical simulations on the Lorenz model and comparisons with results obtained by other algorithms show that PSO–ACO is a very powerful tool for parameter estimation with high accuracy and low deviations. - Highlights: • PSO–ACO combines particle swarm optimization with ant colony optimization. • This study is the first to use PSO–ACO to estimate parameters of chaotic systems. • The PSO–ACO algorithm can identify the parameters of the three-dimensional Lorenz system with low deviations. • PSO–ACO is a very powerful tool for parameter estimation on other chaotic systems.

  1. Parameter estimation of Lorenz chaotic system using a hybrid swarm intelligence algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lazzús, Juan A., E-mail: jlazzus@dfuls.cl; Rivera, Marco; López-Caraballo, Carlos H.

    2016-03-11

    A novel hybrid swarm intelligence algorithm for chaotic system parameter estimation is presented. For this purpose, parameter estimation for the Lorenz system is formulated as a multidimensional problem, and a hybrid approach based on particle swarm optimization with ant colony optimization (PSO–ACO) is implemented to solve this problem. Firstly, the performance of the proposed PSO–ACO algorithm is tested on a set of three representative benchmark functions, and the impact of the parameter settings on PSO–ACO efficiency is studied. Secondly, the parameter estimation is converted into an optimization problem on the three-dimensional Lorenz system. Numerical simulations on the Lorenz model and comparisons with results obtained by other algorithms show that PSO–ACO is a very powerful tool for parameter estimation with high accuracy and low deviations. - Highlights: • PSO–ACO combines particle swarm optimization with ant colony optimization. • This study is the first to use PSO–ACO to estimate parameters of chaotic systems. • The PSO–ACO algorithm can identify the parameters of the three-dimensional Lorenz system with low deviations. • PSO–ACO is a very powerful tool for parameter estimation on other chaotic systems.
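
    In this formulation the estimation problem reduces to minimising the misfit between simulated and observed trajectories. A sketch of that objective for the Lorenz system follows; the swarm/colony search itself is not reproduced, and any global optimiser could be applied to objective(). The initial condition and time window are illustrative.

        import numpy as np
        from scipy.integrate import odeint

        def lorenz(state, t, sigma, rho, beta):
            x, y, z = state
            return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

        t = np.linspace(0, 5, 500)
        true = (10.0, 28.0, 8.0 / 3.0)            # parameters to be recovered
        data = odeint(lorenz, [1.0, 1.0, 1.0], t, args=true)

        def objective(params):
            # Mean squared deviation between observed and simulated trajectories;
            # this is the cost a hybrid PSO-ACO search would minimise.
            sim = odeint(lorenz, [1.0, 1.0, 1.0], t, args=tuple(params))
            return np.mean((sim - data) ** 2)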

  2. Maximum-likelihood estimation of the hyperbolic parameters from grouped observations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1988-01-01

    a least-squares problem. The second procedure, Hypesti, first approaches the maximum-likelihood estimate by iterating on the profile log-likelihood function for the scale parameter. Close to the maximum of the likelihood function, the estimation is brought to an end by iteration, using all four parameters...

  3. Methodology for estimating soil carbon for the forest carbon budget model of the United States, 2001

    Science.gov (United States)

    L. S. Heath; R. A. Birdsey; D. W. Williams

    2002-01-01

    The largest carbon (C) pool in United States forests is the soil C pool. We present methodology and soil C pool estimates used in the FORCARB model, which estimates and projects forest carbon budgets for the United States. The methodology balances knowledge, uncertainties, and ease of use. The estimates are calculated using the USDA Natural Resources Conservation...

  4. Estimation Methodology for the Electricity Consumption with the Daylight- and Occupancy-Controlled Artificial Lighting

    DEFF Research Database (Denmark)

    Larsen, Olena Kalyanova; Jensen, Rasmus Lund; Strømberg, Ida Kristine

    2017-01-01

    Artificial lighting represents 15-30% of the total electricity consumption in buildings in Scandinavia. It is possible to avoid a large share of the electricity use for lighting by applying daylight control systems for artificial lighting. The existing methodology for estimating electricity consumption with such control systems in Norway is based on the Norwegian standard NS 3031:2014 and can only provide a rough estimate. This paper aims to introduce a new estimation methodology for the electricity usage with daylight- and occupancy-controlled artificial lighting...

  5. Review of the Palisades pressure vessel accumulated fluence estimate and of the least squares methodology employed

    Energy Technology Data Exchange (ETDEWEB)

    Griffin, P.J.

    1998-05-01

    This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a 'best estimate' of the accumulated pressure vessel neutron fluence and is consistent with state-of-the-art analysis as detailed in community consensus ASTM standards.

  6. Dual ant colony operational modal analysis parameter estimation method

    Science.gov (United States)

    Sitarz, Piotr; Powałka, Bartosz

    2018-01-01

    Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated by the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain while others operate in the frequency domain; the former use correlation functions, the latter spectral density functions. While some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.

  7. Structural parameter identifiability analysis for dynamic reaction networks

    DEFF Research Database (Denmark)

    Davidescu, Florin Paul; Jørgensen, Sten Bay

    2008-01-01

    This contribution addresses the structural parameter identifiability problem for the typical case of reaction network models, where for a given set of measured variables it is desirable to investigate which parameters may be estimated prior to spending computational effort on the actual estimation. The proposed analysis is performed in two phases. The first phase determines the structurally identifiable reaction rates based on reaction network stoichiometry. The second phase assesses the structural parameter identifiability of the specific kinetic rate expressions using a generating series expansion method based on Lie derivatives. The proposed systematic two-phase methodology is illustrated on a mass action based model for an enzymatically catalyzed reaction pathway network where only a limited set of variables is measured. The methodology clearly pinpoints the structurally identifiable parameters.

  8. Cosmological parameter estimation using particle swarm optimization

    Science.gov (United States)

    Prasad, Jayanti; Souradeep, Tarun

    2012-06-01

    Constraining theoretical models, which are represented by a set of parameters, using observational data is an important exercise in cosmology. In the Bayesian framework this is done by finding the probability distribution of parameters which best fits the observational data using sampling-based methods like Markov chain Monte Carlo (MCMC). It has been argued that MCMC may not be the best option in certain problems in which the target function (likelihood) possesses local maxima or has very high dimensionality. Apart from this, there may be examples in which we are mainly interested in finding the point in the parameter space at which the probability distribution has the largest value. In this situation the problem of parameter estimation becomes an optimization problem. In the present work we show that particle swarm optimization (PSO), which is an artificial-intelligence-inspired population-based search procedure, can also be used for cosmological parameter estimation. Using PSO we were able to recover the best-fit Λ cold dark matter (LCDM) model parameters from the WMAP seven-year data without using any prior guess value or any other property of the probability distribution of parameters, like the standard deviation, as is common in MCMC. We also report the results of an exercise in which we consider a binned primordial power spectrum (to increase the dimensionality of the problem) and find that a power spectrum with features gives a lower chi-square than the standard power law. Since PSO does not sample the likelihood surface in a fair way, we follow a fitting procedure to find the spread of the likelihood function around the best-fit point.

  9. Estimating RASATI scores using acoustical parameters

    International Nuclear Information System (INIS)

    Agüero, P D; Tulli, J C; Moscardi, G; Gonzalez, E L; Uriz, A J

    2011-01-01

    Acoustical analysis of speech using computers has undergone important development in recent years. The subjective evaluation of a clinician is complemented with an objective measure of relevant parameters of voice. Praat, MDVP (Multi Dimensional Voice Program) and SAV (Software for Voice Analysis) are some examples of software for speech analysis. This paper describes an approach to estimate the subjective characteristics of the RASATI scale from objective acoustical parameters. Two approaches were used: linear regression with non-negativity constraints, and neural networks. The experiments show that this approach gives correct evaluations with ±1 error in 80% of the cases.
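
    The first of the two approaches is simple enough to sketch directly: scipy's nnls solves min ||Ax - b|| subject to x >= 0. The feature values below, and the assumption that each RASATI subscale is scored on an integer 0-3 range, are illustrative.

        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical data: rows are voice samples, columns acoustical parameters
        # (e.g. jitter, shimmer, HNR); the target is one RASATI subscale score.
        features = np.array([[1.2, 0.8, 12.0],
                             [2.5, 1.9,  7.5],
                             [0.6, 0.4, 18.0],
                             [3.1, 2.2,  6.0]])
        scores = np.array([1.0, 2.0, 0.0, 3.0])

        # Linear regression with non-negativity constraints on the weights.
        weights, residual = nnls(features, scores)
        predicted = np.rint(features @ weights)   # assumed integer-valued scale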

  10. Simultaneous Parameters Identifiability and Estimation of an E. coli Metabolic Network Model

    Directory of Open Access Journals (Sweden)

    Kese Pontes Freitas Alberton

    2015-01-01

    This work proposes a procedure for simultaneous parameter identifiability analysis and estimation in metabolic networks, in order to overcome difficulties associated with the lack of experimental data and the large number of parameters, a common scenario in the modeling of such systems. As a case study, the complex real-world problem of parameter identifiability in the Escherichia coli K-12 W3110 dynamic model was investigated; the model is composed of 18 ordinary differential equations and 35 kinetic rates, containing 125 parameters. With the procedure, the model fit was improved for most of the measured metabolites, with 58 parameters estimated, including 5 unknown initial conditions. The results indicate that the simultaneous parameter identifiability and estimation approach in metabolic networks is appealing, since fitting the model to most of the measured metabolites was possible even when important measurements of intracellular metabolites and good initial parameter estimates were not available.

  11. Methodology for estimation of potential for solar water heating in a target area

    International Nuclear Information System (INIS)

    Pillai, Indu R.; Banerjee, Rangan

    2007-01-01

    Proper estimation of the potential of any renewable energy technology is essential for planning and promotion of the technology. The methods reported in the literature for estimation of the potential of solar water heating in a target area are aggregate in nature. A methodology for potential estimation (technical, economic and market potential) of solar water heating in a target area is proposed in this paper. This methodology links the micro-level factors and macro-level market effects affecting the diffusion or adoption of solar water heating systems. Different sectors with end uses of low temperature hot water are considered for potential estimation. Potential is estimated at each end use point by simulation using TRNSYS, taking micro-level factors into account. The methodology is illustrated for a synthetic area in India with an area of 2 sq. km and a population of 10,000. The end use sectors considered are residential, hospitals, nursing homes and hotels. The estimated technical potential and market potential are 1700 m² and 350 m² of collector area, respectively. The annual energy savings for the technical potential in the area are estimated as 110 kW h/capita and 0.55 million kW h/sq. km of area, with an annual average peak saving of 1 MW. The annual savings are 650 kW h per m² of collector area and account for approximately 3% of the total electricity consumption of the target area. Some of the salient features of the model are the factors considered for potential estimation, the estimation of the electrical usage pattern for a typical day, and the amount of electricity savings and savings during the peak load. The framework is general and enables accurate estimation of the potential of solar water heating for a city or block. Energy planners and policy makers can use this framework for tracking and promotion of the diffusion of solar water heating systems. (author)

  12. Covariance Evaluation Methodology for Neutron Cross Sections

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.

    2008-09-01

    We present the NNDC-BNL methodology for estimating neutron cross section covariances in thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are Atlas of Neutron Resonances, nuclear reaction code EMPIRE, and the Bayesian code implementing Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.

  13. Kalman filter estimation of RLC parameters for UMP transmission line

    Directory of Open Access Journals (Sweden)

    Mohd Amin Siti Nur Aishah

    2018-01-01

    This paper presents the development of a Kalman filter for the estimation of the resistance (R), inductance (L), and capacitance (C) values of the Universiti Malaysia Pahang (UMP) short transmission line. To overcome weaknesses of the existing system, such as power losses in the transmission line, the Kalman filter can be a better solution for estimating the parameters. The aim of this paper is to estimate the RLC values using a Kalman filter, which in the end can increase the system efficiency at UMP. In this research, a MATLAB Simulink model is developed to analyse the UMP short transmission line under different noise conditions, in order to represent certain unknown parameters which are difficult to predict. The data are then used to compare calculated and estimated values. The results illustrate that the Kalman filter estimates the RLC parameters accurately, with small error. A comparison of the accuracy of the Kalman filter and the Least Square method is also presented to evaluate their performances.
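
    A generic sketch of the estimation idea, using the common device of treating the unknown parameters as a nearly constant state: z_k = H_k*theta + v_k with theta = [R, L, C]. How the measurement rows H_k are built from line voltages and currents is not specified in the abstract, so the sketch leaves them abstract and only shows the filter recursion; noise levels are illustrative.

        import numpy as np

        def kalman_parameter_estimate(H_seq, z_seq, q=1e-6, r=1e-2):
            # Estimate a constant parameter vector theta from noisy linear
            # measurements z_k = H_k @ theta + v_k, modelling theta as a
            # random walk with small process noise q and measurement noise r.
            n = H_seq[0].size
            theta = np.zeros(n)            # initial estimate
            P = np.eye(n) * 1e3            # large initial uncertainty
            for H, z in zip(H_seq, z_seq):
                H = H.reshape(1, n)
                P = P + q * np.eye(n)                        # predict
                S = H @ P @ H.T + r                          # innovation covariance
                K = (P @ H.T) / S                            # Kalman gain
                theta = theta + (K * (z - H @ theta)).ravel()
                P = (np.eye(n) - K @ H) @ P                  # update
            return theta

        # Synthetic check: a three-parameter vector recovered from noisy projections.
        rng = np.random.default_rng(0)
        theta_true = np.array([1.5, 0.02, 4e-6])             # e.g. R (ohm), L (H), C (F)
        H_seq = rng.normal(size=(200, 3))
        z_seq = H_seq @ theta_true + rng.normal(scale=0.1, size=200)
        theta_hat = kalman_parameter_estimate(H_seq, z_seq)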

  14. Estimating hydraulic parameters of the Açu-Brazil aquifer using the computer analysis of micrographs

    Science.gov (United States)

    de Lucena, Leandson R. F.; da Silva, Luis R. D.; Vieira, Marcela M.; Carvalho, Bruno M.; Xavier Júnior, Milton M.

    2016-04-01

    The conventional way of obtaining hydraulic parameters of aquifers is through the interpretation of aquifer tests, which requires fairly complex logistics in terms of equipment and personnel. On the other hand, the processing and analysis of digital images of two-dimensional rock sample micrographs presents itself as a promising (simpler and cheaper) alternative procedure for obtaining estimates of hydraulic parameters. This methodology involves the sampling of rocks, followed by the making and imaging of thin rock samples, image segmentation, three-dimensional reconstruction and flow simulation. It was applied to the outcropping portion of the Açu aquifer in the northeast of Brazil, and the computational analyses of the thin rock sections of the acquired samples produced effective porosities between 11.2% and 18.5%, and permeabilities between 52.4 mD and 1140.7 mD. Considering that the aquifer is unconfined, these effective porosity values can be used directly as storage coefficients. The hydraulic conductivities, obtained by adopting different water dynamic viscosities at a temperature of 28 °C in the conversion of the permeabilities, lie in the range [6.03 × 10⁻⁷, 1.43 × 10⁻⁵] m/s, compatible with the local hydrogeology.

  15. Iterative importance sampling algorithms for parameter estimation

    OpenAIRE

    Morzfeld, Matthias; Day, Marcus S.; Grout, Ray W.; Pau, George Shu Heng; Finsterle, Stefan A.; Bell, John B.

    2016-01-01

    In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov Chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is ...

  16. Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly

    Science.gov (United States)

    Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.

    2013-01-01

    Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…

  17. Parameter estimation in nonlinear models for pesticide degradation

    International Nuclear Information System (INIS)

    Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.

    1991-01-01

    A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems became feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data basis for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures that derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, the wide use of linear regression has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear, and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of mapping the dynamics of a process. Therefore, realistic models involve the evolution in time (and space). This leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
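
    A small example of the kind of nonlinear fit advocated here: first-order degradation dC/dt = -k*C fitted directly with a nonlinear least-squares routine instead of linearising log C. Concentrations and times are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical residue concentrations C (mg/kg) measured at times t (days).
        t = np.array([0, 3, 7, 14, 28, 56], dtype=float)
        C = np.array([10.0, 7.6, 5.4, 3.0, 0.9, 0.1])

        def first_order(t, C0, k):
            # Simplest dynamic degradation model: dC/dt = -k*C, so C(t) = C0*exp(-k*t).
            return C0 * np.exp(-k * t)

        (C0, k), cov = curve_fit(first_order, t, C, p0=(10.0, 0.1))
        half_life = np.log(2) / k   # derived quantity with its own uncertainty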

  18. Using Mathematical Modeling Methods for Estimating Entrance Flow Heterogeneity Impact on Aviation GTE Parameters and Performances

    Directory of Open Access Journals (Sweden)

    Yu. A. Ezrokhi

    2017-01-01

    The paper considers methodological approaches to mathematical models (MM) of various levels, used to estimate the impact of entrance flow heterogeneity on the main parameters and performances of an aviation GTE and its units. Using the example of a twin-shaft turbofan engine calculated in cruise mode, it demonstrates the capability of an engineering mathematical model to determine the impact of total pressure field distortion on engine thrust and air flow parameters, as well as on the gas dynamic stability margins of both compressors. It is shown that the presented first-level mathematical model allows a sufficient estimate of the impact of entrance total pressure heterogeneity on the engine parameters. The reliability of the calculations is confirmed by comparison with results obtained from well-developed 2D and 3D mathematical models of the engine, which have been repeatedly validated against experimental results. It is shown that the results obtained, including those on the decreased stability margins of both compressors, can be used for tentative estimates when choosing a desirable stability margin that provides steady operation of the compressors and the engine over the entire range of its operating modes. Carrying out a definitive verification calculation using a specialized engine MM of a higher level will not only confirm the results obtained, but also reduce their expected error with regard to the real values reached in tests.

  19. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria
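
    As a worked example of the most classical case such methodology covers, the sketch below computes the minimum sample size for estimating a population mean to within a margin of error E at confidence level 1 - alpha, using the textbook formula n = (z*sigma/E)^2; the numbers are illustrative assumptions, not taken from the book.

    ```python
    # Sketch: minimum sample size for estimating a population mean with a
    # given margin of error at confidence level 1-alpha, assuming a known
    # (or pilot-estimated) standard deviation sigma. Values are illustrative.
    import math
    from scipy.stats import norm

    def sample_size_mean(sigma, margin, alpha=0.05):
        z = norm.ppf(1 - alpha / 2)          # two-sided critical value
        return math.ceil((z * sigma / margin) ** 2)

    print(sample_size_mean(sigma=15.0, margin=2.0))  # 217 at 95% confidence
    ```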

  20. Simple method for quick estimation of aquifer hydrogeological parameters

    Science.gov (United States)

    Ma, C.; Li, Y. Y.

    2017-08-01

    Development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using an optimization-based fitting method, and a univariate linear regression equation was then established. The aquifer parameters could be obtained by solving for the coefficients of the regression equation. The application of the proposed method was illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters. The proposed method could reliably identify the aquifer parameters from long-distance observed drawdowns and from early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
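
    For context, the sketch below shows the conventional route such methods simplify: fitting the Theis drawdown solution s = Q*W(u)/(4*pi*T), with u = r^2*S/(4*T*t), directly to observed drawdowns by nonlinear least squares. The pumping rate, observation distance, and drawdown data are synthetic assumptions.

    ```python
    # Sketch: estimating transmissivity T and storativity S by fitting the
    # Theis solution s = Q/(4*pi*T) * W(u), u = r^2*S/(4*T*t), to observed
    # drawdowns. W(u) is the exponential integral; data here are synthetic.
    import numpy as np
    from scipy.special import exp1
    from scipy.optimize import curve_fit

    Q = 0.02   # pumping rate, m^3/s (assumed)
    r = 50.0   # distance to observation well, m (assumed)

    def theis(t, T, S):
        u = r**2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * exp1(u)

    t = np.array([60, 120, 300, 600, 1800, 3600, 7200], dtype=float)  # s
    s_obs = theis(t, 1e-3, 2e-4) + np.random.default_rng(1).normal(0, 0.002, t.size)

    (T_hat, S_hat), _ = curve_fit(theis, t, s_obs, p0=[1e-3, 1e-4])
    print(f"T = {T_hat:.2e} m^2/s, S = {S_hat:.2e}")
    ```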

  1. Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt

    2015-01-01

    We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non...... to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super...... interpolation increases the estimation precision....

  2. Estimation of Genetic and Phenotypic Parameters for Udder Morphology Traits in Different Dairy Sheep Genotypes

    Directory of Open Access Journals (Sweden)

    Pavol Makovický

    2017-01-01

    Full Text Available Knowledge of genetic parameters is the basis of sound livestock improvement programmes. Genetic parameters have been estimated for the linear udder traits Udder depth (UD), Cistern depth (CD), Teat position (TP) and Teat size (TS), and for the external udder measurements Rear udder depth (RUD), Cistern depth (CDe), Teat length (TL) and Teat angle (TA). 1275 linear assessments (381 ewes) and 1185 external udder measurements (355 ewes) were included in the analysis for each trait across 9 genotypes. Nine breeds and genotypes were included in these experiments: purebred Improved Valachian (IV), Tsigai (T) and Lacaune (LC) ewes, and IV and T crosses with a genetic proportion of Lacaune and East Friesian (EF) of 25 %, 50 % and 75 %. Primary data were processed using restricted maximum likelihood (REML) methodology and the multiple-trait animal model, using the programs REMLF90 and VCE 4.0. High genetic correlations were found between UD and RUD (0.86), CD and CDe (0.93), TP and TA (0.90), and TS and TL (0.94). The highest heritabilities were estimated for the exact measurements of TL and CD (0.35–0.39) and the subjectively assessed TA and TS (0.32–0.33).

  3. Estimation of Parameters in Mean-Reverting Stochastic Systems

    Directory of Open Access Journals (Sweden)

    Tianhai Tian

    2014-01-01

    Full Text Available Stochastic differential equations (SDEs) are a very important mathematical tool for describing complex systems in which noise plays an important role. SDE models have been widely used to study the dynamic properties of various nonlinear systems in biology, engineering, finance, and economics, as well as the physical sciences. Since an SDE can generate an unlimited number of trajectories, it is difficult to estimate model parameters based on experimental observations, which may represent only one trajectory of the stochastic model. Although substantial research efforts have been made to develop effective methods, it is still a challenge to infer unknown parameters in SDE models from observations that may have large variations. Using an interest rate model as a test problem, in this work we use Bayesian inference and the Markov chain Monte Carlo (MCMC) method to estimate unknown parameters in SDE models.
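
    A minimal sketch of this kind of Bayesian/MCMC estimation, assuming a Vasicek-type mean-reverting model dr = kappa*(theta - r)dt + sigma*dW, an Euler-discretised Gaussian likelihood, flat priors on the positive parameters, and a random-walk Metropolis-Hastings sampler; the simulated data, step sizes, and chain length are illustrative choices, not the paper's settings.

    ```python
    # Sketch: random-walk Metropolis-Hastings for a mean-reverting (Vasicek)
    # model dr = kappa*(theta - r)dt + sigma*dW, using an Euler-discretised
    # Gaussian likelihood. The interest-rate series is simulated for the demo.
    import numpy as np

    rng = np.random.default_rng(0)
    dt = 1.0 / 252
    true = dict(kappa=2.0, theta=0.05, sigma=0.02)
    r = np.empty(1000); r[0] = 0.04
    for i in range(999):  # simulate one observed trajectory
        r[i+1] = r[i] + true["kappa"]*(true["theta"]-r[i])*dt \
                 + true["sigma"]*np.sqrt(dt)*rng.standard_normal()

    def loglik(p):
        kappa, theta, sigma = p
        if kappa <= 0 or sigma <= 0:          # flat prior on the valid region
            return -np.inf
        mu = r[:-1] + kappa*(theta - r[:-1])*dt
        var = sigma**2 * dt
        resid = r[1:] - mu
        return -0.5*np.sum(np.log(2*np.pi*var) + resid**2/var)

    chain, p = [], np.array([1.0, 0.03, 0.01])
    ll = loglik(p)
    for _ in range(20000):
        prop = p + rng.normal(0, [0.2, 0.002, 0.001])  # tuned step sizes
        ll_prop = loglik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:       # accept/reject
            p, ll = prop, ll_prop
        chain.append(p.copy())
    post = np.array(chain[5000:])                      # discard burn-in
    print("posterior means:", post.mean(axis=0))
    ```

    In practice the proposal scales would be tuned (e.g., toward a 20-40% acceptance rate), which is exactly the calibration issue such papers discuss.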

  4. Statistical distributions applications and parameter estimates

    CERN Document Server

    Thomopoulos, Nick T

    2017-01-01

    This book gives a description of the group of statistical distributions that have ample application to studies in statistics and probability.  Understanding statistical distributions is fundamental for researchers in almost all disciplines.  The informed researcher will select the statistical distribution that best fits the data in the study at hand.  Some of the distributions are well known to the general researcher and are in use in a wide variety of ways.  Other useful distributions are less understood and are not in common use.  The book describes when and how to apply each of the distributions in research studies, with a goal to identify the distribution that best applies to the study.  The distributions are for continuous, discrete, and bivariate random variables.  In most studies, the parameter values are not known a priori, and sample data is needed to estimate parameter values.  In other scenarios, no sample data is available, and the researcher seeks some insight that allows the estimate of ...

  5. Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Yasue, Yoshihiro; Endo, Tomohiro; Kodama, Yasuhiro; Ohoka, Yasunori; Tatsumi, Masahiro

    2013-01-01

    An uncertainty reduction method is proposed for core safety parameters for which measurement values are not obtained. We empirically recognize that there exist correlations among the prediction errors of core safety parameters, e.g., a correlation between the control rod worth and the assembly relative power at the corresponding position. Correlations of errors among core safety parameters are theoretically estimated using the covariance of cross sections and the sensitivity coefficients of core parameters. The estimated correlations of errors among core safety parameters are verified through the direct Monte Carlo sampling method. Once the correlation of errors among core safety parameters is known, we can estimate the uncertainty of a safety parameter for which no measurement value is obtained. (author)
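
    The mechanics of the final step can be sketched as follows: if the prediction errors are treated as jointly Gaussian with correlation rho, measuring one parameter shrinks the standard deviation of the other to sigma*sqrt(1 - rho^2). The joint-Gaussian assumption and the numbers below are illustrative, not from the paper.

    ```python
    # Sketch: conditional uncertainty of an unmeasured core parameter given a
    # measured, correlated one, assuming jointly Gaussian prediction errors.
    # sigma2 and rho are illustrative values only.
    import math

    sigma2 = 0.03   # prior relative uncertainty of the unmeasured parameter
    rho = 0.8       # error correlation estimated from covariance data
    sigma2_post = sigma2 * math.sqrt(1.0 - rho**2)
    print(f"uncertainty reduced from {sigma2:.3f} to {sigma2_post:.3f}")
    ```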

  6. Adaptive distributed parameter and input estimation in linear parabolic PDEs

    KAUST Repository

    Mechhoud, Sarra

    2016-01-01

    In this paper, we discuss the on-line estimation of distributed source term, diffusion, and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions of the input and the parameter simultaneous estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on tokamak plasma heat transport model using simulated data.

  7. A Methodology for Estimating Large-Customer Demand Response MarketPotential

    Energy Technology Data Exchange (ETDEWEB)

    Goldman, Charles; Hopper, Nicole; Bharvirkar, Ranjit; Neenan,Bernie; Cappers,Peter

    2007-08-01

    Demand response (DR) is increasingly recognized as an essential ingredient to well-functioning electricity markets. DR market potential studies can answer questions about the amount of DR available in a given area and from which market segments. Several recent DR market potential studies have been conducted, most adapting techniques used to estimate energy-efficiency (EE) potential. In this scoping study, we: reviewed and categorized seven recent DR market potential studies; recommended a methodology for estimating DR market potential for large, non-residential utility customers that uses price elasticities to account for behavior and prices; compiled participation rates and elasticity values from six DR options offered to large customers in recent years, and demonstrated our recommended methodology with large customer market potential scenarios at an illustrative Northeastern utility. We observe that EE and DR have several important differences that argue for an elasticity approach for large-customer DR options that rely on customer-initiated response to prices, rather than the engineering approaches typical of EE potential studies. Base-case estimates suggest that offering DR options to large, non-residential customers results in 1-3% reductions in their class peak demand in response to prices or incentive payments of $500/MWh. Participation rates (i.e., enrollment in voluntary DR programs or acceptance of default hourly pricing) have the greatest influence on DR impacts of all factors studied, yet are the least well understood. Elasticity refinements to reflect the impact of enabling technologies and response at high prices provide more accurate market potential estimates, particularly when arc elasticities (rather than substitution elasticities) are estimated.
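
    To make the elasticity-based approach concrete, the sketch below converts an assumed arc price elasticity and an event price into an expected class peak-demand reduction; the elasticity, prices, and baseline demand are illustrative placeholders, not the study's estimates.

    ```python
    # Sketch: using an arc price elasticity to translate a price signal into
    # an expected peak-demand reduction for a large customer class.
    p0, p1 = 100.0, 500.0        # baseline vs. event price, $/MWh
    elasticity = -0.05           # assumed arc price elasticity of demand
    q0 = 200.0                   # baseline class peak demand, MW

    # arc elasticity: (dq/q_bar) / (dp/p_bar); solve for the new quantity q1
    p_bar = (p0 + p1) / 2
    ratio = elasticity * (p1 - p0) / p_bar
    q1 = q0 * (2 + ratio) / (2 - ratio)   # from (q1-q0)/((q1+q0)/2) = ratio
    print(f"peak demand falls {100*(1 - q1/q0):.1f}% at ${p1}/MWh")
    ```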

  8. Targeted estimation of nuisance parameters to obtain valid statistical inference.

    Science.gov (United States)

    van der Laan, Mark J

    2014-01-01

    In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special

  9. Fukushima Daiichi unit 1 uncertainty analysis--Preliminary selection of uncertain parameters and analysis methodology

    Energy Technology Data Exchange (ETDEWEB)

    Cardoni, Jeffrey N.; Kalinich, Donald A.

    2014-02-01

    Sandia National Laboratories (SNL) plans to conduct uncertainty analyses (UA) on the Fukushima Daiichi unit (1F1) plant with the MELCOR code. The model to be used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). However, that study only examined a handful of various model inputs and boundary conditions, and the predictions yielded only fair agreement with plant data and current release estimates. The goal of this uncertainty study is to perform a focused evaluation of uncertainty in core melt progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, vessel lower head failure, etc.). In preparation for the SNL Fukushima UA work, a scoping study has been completed to identify important core melt progression parameters for the uncertainty analysis. The study also lays out a preliminary UA methodology.

  10. Genetic parameters in a Swine Population

    Directory of Open Access Journals (Sweden)

    Dana Popa

    2010-05-01

    Full Text Available The estimation of variance-covariance components is a very important step in animal breeding because these components are necessary for estimating the genetic parameters, predicting the breeding value, and designing animal breeding programs. The estimation of genetic parameters is the first step in the development of a swine breeding program using artificial insemination. Various procedures exist for estimating heritability; the three major ones are analysis of variance (ANOVA), parent-offspring regression, and restricted maximum likelihood (REML). By using the ANOVA methodology or the regression method it is possible to obtain aberrant values of the genetic parameters (for example, negative or above-unity heritability coefficients), which cannot be interpreted because they are outside biological limits.
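
    As a toy illustration of the second of the three procedures named above, the sketch below estimates heritability by parent-offspring regression on synthetic data; when offspring values are regressed on midparent values, the slope estimates h^2 directly (with a single parent, h^2 = 2*slope).

    ```python
    # Sketch: heritability from midparent-offspring regression; the trait
    # values are synthetic and the true h^2 is set to 0.4 for the demo.
    import numpy as np

    rng = np.random.default_rng(7)
    n, h2 = 200, 0.4
    midparent = rng.normal(100.0, 10.0, n)            # midparent trait values
    offspring = 100.0 + h2 * (midparent - 100.0) \
                + rng.normal(0.0, 8.0, n)             # offspring trait values

    slope = np.polyfit(midparent, offspring, 1)[0]    # least-squares slope
    print(f"estimated heritability h^2 ~ {slope:.2f}")
    ```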

  11. minimum variance estimation of yield parameters of rubber tree

    African Journals Online (AJOL)

    2013-03-01

    Mar 1, 2013 ... It is our opinion that Kalman filter is a robust estimator of the ... Kalman filter, parameter estimation, rubber clones, Chow failure test, autocorrelation, STAMP, data ...... Mills, T.C. Modelling Current Temperature Trends.

  12. Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors

    Directory of Open Access Journals (Sweden)

    Le Zuo

    2018-04-01

    Full Text Available This paper essentially focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources, into separated groups and separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we propose to adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely, the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are iteratively and alternatively executed to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realize an optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived for understanding the estimation accuracy and performance comparison. The verification of the proposed method is demonstrated with simulations.

  13. Automatic smoothing parameter selection in GAMLSS with an application to centile estimation.

    Science.gov (United States)

    Rigby, Robert A; Stasinopoulos, Dimitrios M

    2014-08-01

    A method for automatic selection of the smoothing parameters in a generalised additive model for location, scale and shape (GAMLSS) is introduced. The method uses a P-spline representation of the smoothing terms to express them as random effect terms, with an internal (or local) maximum likelihood estimation on the predictor scale of each distribution parameter to estimate its smoothing parameters. This provides a fast method for estimating multiple smoothing parameters. The method is applied to centile estimation, where all four parameters of the distribution of the response variable are modelled as smooth functions of a transformed explanatory variable x. This allows smooth modelling of the location, scale, skewness and kurtosis parameters of the response variable distribution as functions of x.

  14. Estimation of G-renewal process parameters as an ill-posed inverse problem

    International Nuclear Information System (INIS)

    Krivtsov, V.; Yevkin, O.

    2013-01-01

    Statistical estimation of G-renewal process parameters is an important estimation problem, which has been considered by many authors. We view this problem from the standpoint of a mathematically ill-posed, inverse problem (the solution is not unique and/or is sensitive to statistical error) and propose a regularization approach specifically suited to the G-renewal process. Regardless of the estimation method, the respective objective function usually involves parameters of the underlying life-time distribution and simultaneously the restoration parameter. In this paper, we propose to regularize the problem by decoupling the estimation of the aforementioned parameters. Using a simulation study, we show that the resulting estimation/extrapolation accuracy of the proposed method is considerably higher than that of the existing methods

  15. Joint Multi-Fiber NODDI Parameter Estimation and Tractography using the Unscented Information Filter

    Directory of Open Access Journals (Sweden)

    Yogesh eRathi

    2016-04-01

    Full Text Available Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion and density imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts, along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher information matrix of the state variables (model parameters), which can be quite useful for measuring model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts.

  16. Methodology applied by IRSN for nuclear accident cost estimations in France

    International Nuclear Information System (INIS)

    2013-01-01

    This report describes the methodology used by IRSN to estimate the cost of potential nuclear accidents in France. It concerns possible accidents involving pressurized water reactors leading to radioactive releases into the environment. These accidents have been grouped into two accident families: severe accidents and major accidents. Two model scenarios have been selected to represent each of these families. The report discusses the general methodology of nuclear accident cost estimation. The crucial point is that all costs should be considered: if not, the total cost is underestimated, which can lead to negative consequences for the value attributed to safety and for crisis preparation. As a result, the overall cost comprises many components: the most well-known is the offsite radiological cost, but there are many others. The proposed estimates have thus required a diversity of methods, which are described in this report. Figures are presented at the end of the report. Among other things, they show that purely radiological costs represent only a non-dominant part of the foreseeable economic consequences.

  17. Estimating small area health-related characteristics of populations: a methodological review

    Directory of Open Access Journals (Sweden)

    Azizur Rahman

    2017-05-01

    Full Text Available Estimation of health-related characteristics at a fine local geographic level is vital for effective health promotion programmes, provision of better health services, and population-specific health planning and management. Lack of a micro-dataset readily available for attributes of individuals at small areas negatively impacts the ability of local and national agencies to manage serious health issues and related risks in the community. A solution to this challenge would be to develop a method that simulates reliable small-area statistics. This paper provides a significant appraisal of the methodologies for estimating health-related characteristics of populations in geographically limited areas. Findings reveal that a range of methodologies are in use, which can be classified into three distinct sets of approaches: (i) indirect standardisation and individual-level modelling; (ii) multilevel statistical modelling; and (iii) microsimulation modelling. Although each approach has its own strengths and weaknesses, it appears that microsimulation-based spatial models have significant robustness over the other methods and also represent a more precise means of estimating health-related population characteristics over small areas.

  18. Methodology proposal for estimation of carbon storage in urban green areas

    NARCIS (Netherlands)

    Schröder, C.; Mancosu, E.; Roerink, G.J.

    2013-01-01

    Methodology proposal for estimation of carbon storage in urban green areas; final report. Subtitle: Final report of Task 262-5-6 "Carbon sequestration in urban green infrastructure". Project manager: Marie Cugny-Seguin. Date: 15-10-2013

  19. Estimation of octanol/water partition coefficients using LSER parameters

    Science.gov (United States)

    Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.

    1998-01-01

    The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, but only moderate accuracy should be expected.
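
    The fitting step described here is ordinary multiple linear regression. The sketch below reproduces its mechanics on synthetic data; the descriptor matrix, coefficients, and noise level (chosen to mimic the reported 0.49 standard deviation) are placeholders, not the actual LSER training set.

    ```python
    # Sketch: ordinary least-squares regression of logKow on LSER-type
    # descriptors; all data are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 50
    X = np.column_stack([np.ones(n),                  # intercept
                         rng.normal(size=(n, 4))])    # 4 LSER-type descriptors
    beta_true = np.array([0.5, 2.0, -1.5, 0.8, -0.3])
    y = X @ beta_true + rng.normal(0.0, 0.49, n)      # logKow with sd ~ 0.49

    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_sd = np.std(y - X @ beta_hat, ddof=X.shape[1])
    print("coefficients:", np.round(beta_hat, 2), "residual sd:", round(resid_sd, 2))
    ```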

  20. MCMC for parameters estimation by bayesian approach

    International Nuclear Information System (INIS)

    Ait Saadi, H.; Ykhlef, F.; Guessoum, A.

    2011-01-01

    This article discusses parameter estimation for dynamic systems by a Bayesian approach associated with Markov chain Monte Carlo (MCMC) methods. MCMC methods are powerful for approximating complex integrals, simulating joint distributions, and estimating marginal posterior distributions or posterior means. The Metropolis-Hastings algorithm has been widely used in Bayesian inference to approximate posterior densities. Calibrating the proposal distribution is one of the main issues of MCMC simulation in order to accelerate convergence.

  1. Estimating model parameters in nonautonomous chaotic systems using synchronization

    International Nuclear Information System (INIS)

    Yang, Xiaoli; Xu, Wei; Sun, Zhongkui

    2007-01-01

    In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems; some general conditions are then analytically derived, by means of the periodic version of the LaSalle invariance principle for differential equations, to ensure precise evaluation of the unknown parameters and identical synchronization between the concerned experimental system and its corresponding receiver. Examples are presented employing a parametrically excited new 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of noise strength in simulation.

  2. Adaptive distributed parameter and input estimation in linear parabolic PDEs

    KAUST Repository

    Mechhoud, Sarra

    2016-01-01

    First, new sufficient identifiability conditions of the input and the parameter simultaneous estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on tokamak plasma heat transport model using simulated data.

  3. Estimation of delays and other parameters in nonlinear functional differential equations

    Science.gov (United States)

    Banks, H. T.; Lamm, P. K. D.

    1983-01-01

    A spline-based approximation scheme for nonlinear nonautonomous delay differential equations is discussed. Convergence results (using dissipative type estimates on the underlying nonlinear operators) are given in the context of parameter estimation problems which include estimation of multiple delays and initial data as well as the usual coefficient-type parameters. A brief summary of some of the related numerical findings is also given.

  4. Forensic anthropology casework-essential methodological considerations in stature estimation.

    Science.gov (United States)

    Krishan, Kewal; Kanchan, Tanuj; Menezes, Ritesh G; Ghosh, Abhik

    2012-03-01

    The examination of skeletal remains is a challenge to the medical examiner's/coroner's office and the forensic anthropologist conducting the investigation. One of the objectives of the medico-legal investigation is to estimate stature or height from various skeletal remains and body parts brought for examination. Various skeletal remains and body parts bear a positive and linear correlation with stature and have been successfully used for stature estimation. This concept is utilized in estimation of stature in forensic anthropology casework in mass disasters and other forensic examinations. Scientists have long been involved in standardizing the anthropological data with respect to various populations of the world. This review deals with some essential methodological issues that need to be addressed in research related to estimation of stature in forensic examinations. These issues have direct relevance in the identification of commingled or unknown remains and therefore it is essential that forensic nurses are familiar with the theories and techniques used in forensic anthropology. © 2012 International Association of Forensic Nurses.

  5. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis

  6. Consistent Parameter and Transfer Function Estimation using Context Free Grammars

    Science.gov (United States)

    Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten

    2017-04-01

    This contribution presents a method for the inference of transfer functions for rainfall-runoff models. Here, transfer functions are defined as parametrized (functional) relationships between a set of spatial predictors (e.g. elevation, slope or soil texture) and model parameters. They are ultimately used for estimation of consistent, spatially distributed model parameters from a limited amount of lumped global parameters. Additionally, they provide a straightforward method for parameter extrapolation from one set of basins to another and can even be used to derive parameterizations for multi-scale models [see: Samaniego et al., 2010]. Yet, currently an actual knowledge of the transfer functions is often implicitly assumed. As a matter of fact, for most cases these hypothesized transfer functions can rarely be measured and often remain unknown. Therefore, this contribution presents a general method for the concurrent estimation of the structure of transfer functions and their respective (global) parameters. Note, that by consequence an estimation of the distributed parameters of the rainfall-runoff model is also undertaken. The method combines two steps to achieve this. The first generates different possible transfer functions. The second then estimates the respective global transfer function parameters. The structural estimation of the transfer functions is based on the context free grammar concept. Chomsky first introduced context free grammars in linguistics [Chomsky, 1956]. Since then, they have been widely applied in computer science. But, to the knowledge of the authors, they have so far not been used in hydrology. Therefore, the contribution gives an introduction to context free grammars and shows how they can be constructed and used for the structural inference of transfer functions. This is enabled by new methods from evolutionary computation, such as grammatical evolution [O'Neill, 2001], which make it possible to exploit the constructed grammar as a

  7. Lifetime prediction and reliability estimation methodology for Stirling-type pulse tube refrigerators by gaseous contamination accelerated degradation testing

    Science.gov (United States)

    Wan, Fubin; Tan, Yuanyuan; Jiang, Zhenhua; Chen, Xun; Wu, Yinong; Zhao, Peng

    2017-12-01

    Lifetime and reliability are the two performance parameters of premium importance for modern space Stirling-type pulse tube refrigerators (SPTRs), which are required to operate in excess of 10 years. Demonstration of these parameters provides a significant challenge. This paper proposes a lifetime prediction and reliability estimation method that utilizes accelerated degradation testing (ADT) for SPTRs related to gaseous contamination failure. The method was experimentally validated via three groups of gaseous contamination ADT. First, a performance degradation model based on the mechanism of contamination failure and the material outgassing characteristics of SPTRs was established. Next, a preliminary test was performed to determine whether the mechanism of contamination failure of the SPTRs during ADT is consistent with normal life testing. Subsequently, the experimental program of ADT was designed for SPTRs. Then, three groups of gaseous contamination ADT were performed at elevated ambient temperatures of 40 °C, 50 °C, and 60 °C, respectively, and the estimated lifetimes of the SPTRs under normal conditions were obtained through an acceleration model (the Arrhenius model). The results show good fitting of the degradation model with the experimental data. Finally, we obtained the reliability estimate of the SPTRs using the Weibull distribution. The proposed methodology makes it possible to estimate, in less than one year of testing, the reliability of SPTRs designed for more than 10 years of operation.
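
    The extrapolation step from elevated test temperatures to use conditions rests on the Arrhenius acceleration factor AF = exp[(Ea/kB)(1/Tuse - 1/Ttest)]. The sketch below applies it with an assumed activation energy and test lifetime; these values are illustrative, not the paper's results.

    ```python
    # Sketch: extrapolating an accelerated-test lifetime to use conditions
    # with the Arrhenius acceleration factor. Ea and the test lifetime are
    # illustrative values only.
    import math

    k_B = 8.617e-5        # Boltzmann constant, eV/K
    Ea = 0.7              # assumed activation energy of the failure process, eV
    T_use, T_test = 293.15, 333.15   # 20 C use vs. 60 C accelerated test, K
    life_test_h = 9000.0  # lifetime observed at the accelerated condition, hours

    AF = math.exp(Ea / k_B * (1.0 / T_use - 1.0 / T_test))
    print(f"acceleration factor = {AF:.1f}, "
          f"predicted use-condition life = {life_test_h * AF / 8760:.1f} years")
    ```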

  8. Parameter and state estimation in nonlinear dynamical systems

    Science.gov (United States)

    Creveling, Daniel R.

    This thesis is concerned with the problem of state and parameter estimation in nonlinear systems. The need to evaluate unknown parameters in models of nonlinear physical, biophysical and engineering systems occurs throughout the development of phenomenological or reduced models of dynamics. When verifying and validating these models, it is important to incorporate information from observations in an efficient manner. Using the idea of synchronization of nonlinear dynamical systems, this thesis develops a framework for presenting data to a candidate model of a physical process in a way that makes efficient use of the measured data while allowing for estimation of the unknown parameters in the model. The approach presented here builds on existing work that uses synchronization as a tool for parameter estimation. Some critical issues of stability in that work are addressed and a practical framework is developed for overcoming these difficulties. The central issue is the choice of coupling strength between the model and data. If the coupling is too strong, the model will reproduce the measured data regardless of the adequacy of the model or correctness of the parameters. If the coupling is too weak, nonlinearities in the dynamics could lead to complex dynamics rendering any cost function comparing the model to the data inadequate for the determination of model parameters. Two methods are introduced which seek to balance the need for coupling with the desire to allow the model to evolve in its natural manner without coupling. One method, 'balanced' synchronization, adds to the synchronization cost function a requirement that the conditional Lyapunov exponents of the model system, conditioned on being driven by the data, remain negative but small in magnitude. Another method allows the coupling between the data and the model to vary in time according to a specific form of differential equation. The coupling dynamics is damped to allow for a tendency toward zero coupling

  9. Compendium of Greenhouse Gas Emissions Estimation Methodologies for the Oil and Gas Industry

    Energy Technology Data Exchange (ETDEWEB)

    Shires, T.M.; Loughran, C.J. [URS Corporation, Austin, TX (United States)

    2004-02-01

    This document is a compendium of currently recognized methods and provides details for all oil and gas industry segments to enhance consistency in emissions estimation. This Compendium aims to accomplish the following goals: Assemble an expansive collection of relevant emission factors for estimating GHG emissions, based on currently available public documents; Outline detailed procedures for conversions between different measurement unit systems, with particular emphasis on implementation of oil and gas industry standards; Provide descriptions of the multitude of oil and gas industry operations, in its various segments, and the associated emissions sources that should be considered; and Develop emission inventory examples, based on selected facilities from the various segments, to demonstrate the broad applicability of the methodologies. The overall objective of developing this document is to promote the use of consistent, standardized methodologies for estimating GHG emissions from petroleum industry operations. The resulting Compendium documents recognized calculation techniques and emission factors for estimating GHG emissions for oil and gas industry operations. These techniques cover the calculation or estimation of emissions from the full range of industry operations - from exploration and production through refining, to the marketing and distribution of products. The Compendium presents and illustrates the use of preferred and alternative calculation approaches for carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) emissions for all common emission sources, including combustion, vented, and fugitive. Decision trees are provided to guide the user in selecting an estimation technique based on considerations of materiality, data availability, and accuracy. API will provide (free of charge) a calculation tool based on the emission estimation methodologies described herein. The tool will be made available at http://ghg.api.org/.

  10. Estimating mutation parameters, population history and genealogy simultaneously from temporally spaced sequence data.

    Science.gov (United States)

    Drummond, Alexei J; Nicholls, Geoff K; Rodrigo, Allen G; Solomon, Wiremu

    2002-07-01

    Molecular sequences obtained at different sampling times from populations of rapidly evolving pathogens and from ancient subfossil and fossil sources are increasingly available with modern sequencing technology. Here, we present a Bayesian statistical inference approach to the joint estimation of mutation rate and population size that incorporates the uncertainty in the genealogy of such temporally spaced sequences by using Markov chain Monte Carlo (MCMC) integration. The Kingman coalescent model is used to describe the time structure of the ancestral tree. We recover information about the unknown true ancestral coalescent tree, population size, and the overall mutation rate from temporally spaced data, that is, from nucleotide sequences gathered at different times, from different individuals, in an evolving haploid population. We briefly discuss the methodological implications and show what can be inferred, in various practically relevant states of prior knowledge. We develop extensions for exponentially growing population size and joint estimation of substitution model parameters. We illustrate some of the important features of this approach on a genealogy of HIV-1 envelope (env) partial sequences.

  11. Nonlinear systems time-varying parameter estimation: Application to induction motors

    Energy Technology Data Exchange (ETDEWEB)

    Kenne, Godpromesse [Laboratoire d' Automatique et d' Informatique Appliquee (LAIA), Departement de Genie Electrique, IUT FOTSO Victor, Universite de Dschang, B.P. 134 Bandjoun (Cameroon); Ahmed-Ali, Tarek [Ecole Nationale Superieure des Ingenieurs des Etudes et Techniques d' Armement (ENSIETA), 2 Rue Francois Verny, 29806 Brest Cedex 9 (France); Lamnabhi-Lagarrigue, F. [Laboratoire des Signaux et Systemes (L2S), C.N.R.S-SUPELEC, Universite Paris XI, 3 Rue Joliot Curie, 91192 Gif-sur-Yvette (France); Arzande, Amir [Departement Energie, Ecole Superieure d' Electricite-SUPELEC, 3 Rue Joliot Curie, 91192 Gif-sur-Yvette (France)

    2008-11-15

    In this paper, an algorithm for time-varying parameter estimation for a large class of nonlinear systems is presented. The proof of the convergence of the estimates to their true values is achieved using Lyapunov theories and does not require that the classical persistent excitation condition be satisfied by the input signal. Since the induction motor (IM) is widely used in several industrial sectors, the algorithm developed is potentially useful for adjusting the controller parameters of variable speed drives. The method proposed is simple and easily implementable in real-time. The application of this approach to on-line estimation of the rotor resistance of IM shows a rapidly converging estimate in spite of measurement noise, discretization effects, parameter uncertainties (e.g. inaccuracies on motor inductance values) and modeling inaccuracies. The robustness analysis for this IM application also revealed that the proposed scheme is insensitive to the stator resistance variations within a wide range. The merits of the proposed algorithm in the case of on-line time-varying rotor resistance estimation are demonstrated via experimental results in various operating conditions of the induction motor. The experimental results obtained demonstrate that the application of the proposed algorithm to update on-line the parameters of an adaptive controller (e.g. IM and synchronous machines adaptive control) can improve the efficiency of the industrial process. The other interesting features of the proposed method include fault detection/estimation and adaptive control of IM and synchronous machines. (author)

  12. Estimation of Compaction Parameters Based on Soil Classification

    Science.gov (United States)

    Lubis, A. S.; Muis, Z. A.; Hastuty, I. P.; Siregar, I. M.

    2018-02-01

    Factors that must be considered in soil compaction works are the type of soil material, field control, maintenance, and the availability of funds. These problems raised the idea of estimating the density of the soil with an implementation system that is proper, fast, and economical. This study aims to estimate the compaction parameters, i.e. the maximum dry unit weight (γdmax) and the optimum water content (wopt), based on soil classification. Thirty samples were each subjected to index properties tests and a compaction test. All data from the laboratory test results were used to estimate the compaction parameter values by means of linear regression and the Goswami model. From the research results, the soil types were A-4, A-6, and A-7 according to AASHTO and SC, SC-SM, and CL based on USCS. By linear regression, the estimation equations are (γdmax*) = 1.862 - 0.005*FINES - 0.003*LL for the maximum dry unit weight and (wopt*) = -0.607 + 0.362*FINES + 0.161*LL for the optimum water content. By the Goswami model (with equation Y = m*logG + k), the maximum dry unit weight (γdmax*) is estimated with m = -0.376 and k = 2.482, and the optimum water content (wopt*) with m = 21.265 and k = -32.421. For both of these equations a 95% confidence interval was obtained.
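
    Since the abstract reports the fitted equations explicitly, they can be evaluated directly. The sketch below implements both estimators; the fines content, liquid limit, and Goswami gradation parameter G used in the demo calls are hypothetical inputs, not samples from the study.

    ```python
    # Sketch: evaluating the two estimators reported above for a soil with a
    # given fines content (%), liquid limit LL (%), and Goswami parameter G.
    import math

    def linear_reg(fines, ll):
        gamma_dmax = 1.862 - 0.005 * fines - 0.003 * ll   # maximum dry unit weight
        w_opt = -0.607 + 0.362 * fines + 0.161 * ll       # optimum water content
        return gamma_dmax, w_opt

    def goswami(G):
        gamma_dmax = -0.376 * math.log10(G) + 2.482       # Y = m*logG + k
        w_opt = 21.265 * math.log10(G) - 32.421
        return gamma_dmax, w_opt

    print(linear_reg(fines=55.0, ll=40.0))   # hypothetical soil
    print(goswami(G=150.0))                  # hypothetical G value
    ```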

  13. Probabilistic estimation of the constitutive parameters of polymers

    Directory of Open Access Journals (Sweden)

    Siviour C.R.

    2012-08-01

    Full Text Available The Mulliken-Boyce constitutive model predicts the dynamic response of crystalline polymers as a function of strain rate and temperature. This paper describes the Mulliken-Boyce model-based estimation of the constitutive parameters in a Bayesian probabilistic framework. Experimental data from dynamic mechanical analysis and dynamic compression of PVC samples over a wide range of strain rates are analyzed. Both experimental uncertainty and natural variations in the material properties are simultaneously considered as independent and joint distributions; the posterior probability distributions are shown and compared with prior estimates of the material constitutive parameters. Additionally, particular statistical distributions are shown to be effective at capturing the rate and temperature dependence of internal phase transitions in DMA data.

  14. Estimating 3D Object Parameters from 2D Grey-Level Images

    NARCIS (Netherlands)

    Houkes, Z.

    2000-01-01

    This thesis describes a general framework for parameter estimation, which is suitable for computer vision applications. The approach described combines 3D modelling, animation and estimation tools to determine parameters of objects in a scene from 2D grey-level images. The animation tool predicts

  15. Estimations of parameters in Pareto reliability model in the presence of masked data

    International Nuclear Information System (INIS)

    Sarhan, Ammar M.

    2003-01-01

    Estimation of the parameters of the individual lifetime distributions of system components in a series system is considered in this paper, based on masked system life test data. We consider a series system of two independent components, each having a Pareto-distributed lifetime. The maximum likelihood and Bayes estimators for the parameters and for the values of the reliability of the system's components at a specific time are obtained. Symmetrical triangular prior distributions are assumed for the unknown parameters in obtaining their Bayes estimators. Large simulation studies are done in order to: (i) explain how one can utilize the theoretical results obtained; (ii) compare the maximum likelihood and Bayes estimates of the underlying parameters; and (iii) study the influence of the masking level and the sample size on the accuracy of the estimates obtained.
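
    One simplified ingredient of this problem, maximum-likelihood estimation for a single Pareto lifetime with known scale x_m, has the closed form alpha_hat = n / sum(ln(x_i/x_m)). The sketch below checks it on simulated data; it deliberately omits the masking and the series-system structure treated in the paper.

    ```python
    # Sketch: MLE of the Pareto shape parameter with known scale x_m,
    # verified on simulated lifetimes (inverse-CDF sampling).
    import numpy as np

    rng = np.random.default_rng(11)
    x_m, alpha = 1.0, 2.5
    x = x_m * (1.0 - rng.uniform(size=500)) ** (-1.0 / alpha)

    alpha_hat = x.size / np.log(x / x_m).sum()
    print(f"alpha_hat = {alpha_hat:.2f} (true {alpha})")
    ```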

  16. Quasi-Newton methods for parameter estimation in functional differential equations

    Science.gov (United States)

    Brewer, Dennis W.

    1988-01-01

    A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.

  17. Heuristic Methodology for Estimating the Liquid Biofuel Potential of a Region

    Directory of Open Access Journals (Sweden)

    Dorel Dusmanescu

    2016-08-01

    Full Text Available This paper presents a heuristic methodology for estimating the possible variation of the liquid biofuel potential of a region over a future period of time. The determination of the liquid biofuel potential has so far been made either on the basis of an average (constant) yield of the energy crops used, or on the basis of a yield that varies according to a known trend, which can be estimated through a certain method. The proposed methodology uses the variation of the yield of energy crops over time to simulate the variation of the biofuel potential over a future ten-year period. This new approach to the problem of determining the liquid biofuel potential of a certain land area can be useful for investors, as it allows a more realistic analysis of the investment risk and of the possibilities of recovering the investment. On the other hand, the presented methodology can be useful to government administrations in elaborating strategies and policies to ensure the supply of fuels and liquid biofuels for transportation in a certain area. Unlike current methods, which approach the problem of determining the liquid biofuel potential in a deterministic way by using econometric methods, the proposed methodology uses heuristic reasoning schemes to reduce the great number of factors that actually influence the biofuel potential and which usually have unknown values.

  18. Estimating Arrhenius parameters using temperature programmed molecular dynamics

    International Nuclear Information System (INIS)

    Imandi, Venkataramana; Chatterjee, Abhijit

    2016-01-01

    Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
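
    A minimal sketch of the two steps described above, on synthetic data: for exponential waiting times the maximum-likelihood rate at each temperature is k_hat = n / sum(t_wait), and the Arrhenius parameters then follow from a linear fit of ln k against 1/T via ln k = ln A - Ea/(kB*T). The barrier, prefactor, and temperatures are illustrative, not from the paper's three test systems.

    ```python
    # Sketch: ML rate estimates from exponential waiting times at several
    # temperatures, followed by a linear Arrhenius fit. Data are synthetic.
    import numpy as np

    k_B = 8.617e-5                        # eV/K
    A_true, Ea_true = 1e13, 0.5           # prefactor (1/s) and barrier (eV)
    temps = np.array([600.0, 700.0, 800.0, 900.0])
    rng = np.random.default_rng(5)

    k_hat = []
    for T in temps:
        k = A_true * np.exp(-Ea_true / (k_B * T))
        waits = rng.exponential(1.0 / k, size=1000)   # collected waiting times
        k_hat.append(waits.size / waits.sum())        # ML estimate of the rate

    slope, intercept = np.polyfit(1.0 / temps, np.log(k_hat), 1)
    print(f"Ea ~ {-slope * k_B:.3f} eV, A ~ {np.exp(intercept):.2e} 1/s")
    ```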

  19. Estimating Arrhenius parameters using temperature programmed molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Imandi, Venkataramana; Chatterjee, Abhijit, E-mail: abhijit@che.iitb.ac.in [Department of Chemical Engineering, Indian Institute of Technology Bombay, Mumbai 400076 (India)

    2016-07-21

    Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.

  20. Using linear time-invariant system theory to estimate kinetic parameters directly from projection measurements

    International Nuclear Information System (INIS)

    Zeng, G.L.; Gullberg, G.T.

    1995-01-01

    It is common practice to estimate kinetic parameters from dynamically acquired tomographic data by first reconstructing a dynamic sequence of three-dimensional reconstructions and then fitting the parameters to time activity curves generated from the time-varying reconstructed images. However, in SPECT, the pharmaceutical distribution can change during the acquisition of a complete tomographic data set, which can bias the estimated kinetic parameters. It is hypothesized that more accurate estimates of the kinetic parameters can be obtained by fitting to the projection measurements instead of the reconstructed time sequence. Estimation from projections requires the knowledge of their relationship between the tissue regions of interest or voxels with particular kinetic parameters and the project measurements, which results in a complicated nonlinear estimation problem with a series of exponential factors with multiplicative coefficients. A technique is presented in this paper where the exponential decay parameters are estimated separately using linear time-invariant system theory. Once the exponential factors are known, the coefficients of the exponentials can be estimated using linear estimation techniques. Computer simulations demonstrate that estimation of the kinetic parameters directly from the projections is more accurate than the estimation from the reconstructed images

  1. Parameter Estimation of Damped Compound Pendulum Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    Saad Mohd Sazli

    2016-01-01

    Full Text Available This paper presents the parameter identification of a damped compound pendulum using a differential evolution algorithm. The procedure used to achieve the parameter identification of the experimental system consisted of input-output data collection, ARX model order selection, and parameter estimation using the conventional least squares (LS) method and the differential evolution (DE) algorithm. A PRBS signal is used as the input signal to regulate the motor speed, whereas the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the ARX model. The residual error between the actual and predicted output responses of the models is validated using the mean squared error (MSE). Analysis showed that the MSE value for LS is 0.0026 and the MSE value for DE is 3.6601×10-5. Based on the results obtained, it was found that DE has a lower MSE than the LS method.
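
    The conventional baseline the paper compares against, least-squares estimation of an ARX model, can be sketched as follows for a first-order model y(t) = -a1*y(t-1) + b1*u(t-1) + e(t); the simulated PRBS-like input and true parameters are illustrative assumptions. A DE-based estimator would minimize the same MSE over the parameter vector instead of solving the normal equations.

    ```python
    # Sketch: least-squares fit of a first-order ARX model to simulated
    # input-output data; the data-generating parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(2)
    N = 500
    u = np.sign(rng.standard_normal(N))        # PRBS-like input signal
    a1_true, b1_true = -0.85, 0.4
    y = np.zeros(N)
    for t in range(1, N):
        y[t] = -a1_true * y[t-1] + b1_true * u[t-1] + 0.01 * rng.standard_normal()

    Phi = np.column_stack([-y[:-1], u[:-1]])   # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    mse = np.mean((y[1:] - Phi @ theta) ** 2)
    print(f"a1 = {theta[0]:.3f}, b1 = {theta[1]:.3f}, MSE = {mse:.2e}")
    ```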

  2. Repository environmental parameters and models/methodologies relevant to assessing the performance of high-level waste packages in basalt, tuff, and salt

    Energy Technology Data Exchange (ETDEWEB)

    Claiborne, H.C.; Croff, A.G.; Griess, J.C.; Smith, F.J.

    1987-09-01

    This document provides specifications for models/methodologies that could be employed in determining postclosure repository environmental parameters relevant to the performance of high-level waste packages for the Basalt Waste Isolation Project (BWIP) at Richland, Washington, the tuff at Yucca Mountain by the Nevada Test Site, and the bedded salt in Deaf Smith County, Texas. Guidance is provided on the identity of the relevant repository environmental parameters, the models/methodologies employed to determine the parameters, and the input data base for the models/methodologies. Supporting studies included are an analysis of potential waste package failure modes leading to identification of the relevant repository environmental parameters, an evaluation of the credible range of the repository environmental parameters, and a summary of the review of existing models/methodologies currently employed in determining repository environmental parameters relevant to waste package performance. 327 refs., 26 figs., 19 tabs.

  3. Repository environmental parameters and models/methodologies relevant to assessing the performance of high-level waste packages in basalt, tuff, and salt

    International Nuclear Information System (INIS)

    Claiborne, H.C.; Croff, A.G.; Griess, J.C.; Smith, F.J.

    1987-09-01

    This document provides specifications for models/methodologies that could be employed in determining postclosure repository environmental parameters relevant to the performance of high-level waste packages for the Basalt Waste Isolation Project (BWIP) at Richland, Washington, the tuff at Yucca Mountain by the Nevada Test Site, and the bedded salt in Deaf Smith County, Texas. Guidance is provided on the identity of the relevant repository environmental parameters; the models/methodologies employed to determine the parameters; and the input data base for the models/methodologies. Supporting studies included are an analysis of potential waste package failure modes leading to identification of the relevant repository environmental parameters, an evaluation of the credible range of the repository environmental parameters, and a summary of the review of existing models/methodologies currently employed in determining repository environmental parameters relevant to waste package performance. 327 refs., 26 figs., 19 tabs.

  4. Improved Battery Parameter Estimation Method Considering Operating Scenarios for HEV/EV Applications

    Directory of Open Access Journals (Sweden)

    Jufeng Yang

    2016-12-01

    Full Text Available This paper presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, with the length of the fitted dataset determined by spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experimental results validated the feasibility of the developed estimation method.

  5. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between an independent variable and a dependent variable. In logistic regression the dependent variable is categorical and the model is used to calculate odds; when the dependent variable has ordered levels, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine the value for a population based on a sample. The purpose of this research is to estimate the parameters of the GWOLR model using the R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units are 144 villages in Semarang City. The result is a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.

  6. Circuit realization, chaos synchronization and estimation of parameters of a hyperchaotic system with unknown parameters

    Directory of Open Access Journals (Sweden)

    A. Elsonbaty

    2014-10-01

    Full Text Available In this article, the adaptive chaos synchronization technique is implemented by an electronic circuit and applied to the hyperchaotic system proposed by Chen et al. We consider the more realistic and practical case where all the parameters of the master system are unknown. We propose and implement an electronic circuit that performs the estimation of the unknown parameters and the updating of the parameters of the slave system automatically, and hence achieves synchronization. To the best of our knowledge, this is the first attempt to implement a circuit that estimates the values of the unknown parameters of a chaotic system and achieves synchronization. The proposed circuit has a variety of suitable real applications related to chaos encryption and cryptography. The outputs of the implemented circuits and numerical simulation results are shown to demonstrate the performance of the synchronized system and the proposed circuit.

  7. Comparison of sampling techniques for Bayesian parameter estimation

    Science.gov (United States)

    Allison, Rupert; Dunkley, Joanna

    2014-02-01

    The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hastings sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
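
    For concreteness, a bare-bones Metropolis-Hastings sampler on a toy two-parameter Gaussian posterior (illustrative only; real analyses would use the tuned samplers compared in the paper):

        import numpy as np

        def log_post(theta):
            # Toy log-posterior: a standard 2-D Gaussian standing in for likelihood x prior.
            return -0.5 * np.sum(theta**2)

        rng = np.random.default_rng(42)
        theta, chain = np.zeros(2), []
        for _ in range(20000):
            prop = theta + 0.5 * rng.standard_normal(2)       # symmetric random-walk proposal
            if np.log(rng.random()) < log_post(prop) - log_post(theta):
                theta = prop                                  # accept; otherwise keep current state
            chain.append(theta.copy())

        chain = np.array(chain)[5000:]                        # discard burn-in
        print(chain.mean(axis=0), chain.std(axis=0))          # roughly [0, 0] and [1, 1]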

  8. Methodological Framework for Estimating the Correlation Dimension in HRV Signals

    Directory of Open Access Journals (Sweden)

    Juan Bolea

    2014-01-01

    Full Text Available This paper presents a methodological framework for robust estimation of the correlation dimension in HRV signals. It includes (i) a fast algorithm for on-line computation of correlation sums; (ii) fitting of the log-log curves to a sigmoidal function for robust maximum-slope estimation, discarding estimates that do not meet the fitting requirements; (iii) three different approaches for linear-region slope estimation based on the latter; and (iv) exponential fitting for robust estimation of the saturation level of the slope series with increasing embedding dimension, to finally obtain the correlation dimension estimate. Each approach for slope estimation leads to a correlation dimension estimate, called D̂2, D̂2⊥, and D̂2max. D̂2 and D̂2max estimate the theoretical value of the correlation dimension for the Lorenz attractor with a relative error of 4%, and D̂2⊥ with 1%. The three approaches are applied to HRV signals of pregnant women before spinal anesthesia for cesarean delivery in order to identify patients at risk for hypotension. D̂2 keeps the 81% accuracy previously described in the literature, while the D̂2⊥ and D̂2max approaches reach 91% accuracy on the same database.

  9. PARAMETER ESTIMATION AND MODEL SELECTION FOR INDOOR ENVIRONMENTS BASED ON SPARSE OBSERVATIONS

    Directory of Open Access Journals (Sweden)

    Y. Dehbi

    2017-09-01

    Full Text Available This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  10. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    Science.gov (United States)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  11. A practical approach to parameter estimation applied to model predicting heart rate regulation

    DEFF Research Database (Denmark)

    Olufsen, Mette; Ottesen, Johnny T.

    2013-01-01

    Mathematical models have long been used for prediction of dynamics in biological systems. Recently, several efforts have been made to render these models patient specific. One way to do so is to employ techniques to estimate parameters that enable model-based prediction of observed quantities....... Knowledge of variation in parameters within and between groups of subjects has the potential to provide insight into biological function. Often it is not possible to estimate all parameters in a given model, in particular if the model is complex and the data are sparse. However, it may be possible to estimate...... a subset of model parameters, reducing the complexity of the problem. In this study, we compare three methods that allow identification of parameter subsets that can be estimated given a model and a set of data. These methods will be used to estimate patient specific parameters in a model predicting...

  12. Development of Cost Estimation Methodology of Decommissioning for PWR

    International Nuclear Information System (INIS)

    Lee, Sang Il; Yoo, Yeon Jae; Lim, Yong Kyu; Chang, Hyeon Sik; Song, Geun Ho

    2013-01-01

    The permanent closure of a nuclear power plant must be conducted under strict laws and with profound planning, including cost and schedule estimation, because the plant is heavily contaminated with radioactivity. In Korea, there are two types of nuclear power plant: the pressurized light water reactor (PWR) and the pressurized heavy water reactor (PHWR), known as the CANDU reactor. Furthermore, 50% of the operating nuclear power plants in Korea are PWRs originally designed by CE (Combustion Engineering). There is experience with the decommissioning of Westinghouse type PWRs, but little with CE type PWRs. Therefore, the purpose of this paper is to develop a cost estimation methodology and evaluate the technical level of decommissioning for application to CE type PWRs based on system engineering technology. The aim of the present study is to develop a cost estimation methodology of decommissioning for application to PWRs. Through the study, the following conclusions are obtained: · Based on system engineering, the decommissioning work can be classified into Set, Subset, Task, Subtask and Work cost units. · The Set and Task structures are grouped as 29 Sets and 15 Tasks, respectively. · The final result shows the cost and project schedule for project control and risk management. · The present results are preliminary and should be refined and improved based on modeling and cost data reflecting available technology and current costs such as labor and waste data.

  13. Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study

    Directory of Open Access Journals (Sweden)

    Elie Bienenstock

    2008-06-01

    Full Text Available Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: the plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced: two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in
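
    Of the methods compared, the plug-in estimator is the simplest to state: form the empirical distribution of length-w words and take its entropy per symbol. A short sketch for binary series (word length chosen by hand; as the paper notes, small word lengths miss longer-range structure):

        import numpy as np
        from collections import Counter

        def plugin_entropy_rate(bits, word_len=8):
            # Empirical entropy of overlapping length-word_len words, per symbol, in bits.
            words = [tuple(bits[i:i + word_len]) for i in range(len(bits) - word_len + 1)]
            counts = np.array(list(Counter(words).values()), dtype=float)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p)) / word_len

        rng = np.random.default_rng(0)
        fair = rng.integers(0, 2, 100_000)
        biased = (rng.random(100_000) < 0.9).astype(int)
        print(plugin_entropy_rate(fair))    # near 1 bit/symbol
        print(plugin_entropy_rate(biased))  # near H(0.9) = 0.469 bits/symbol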

  14. Models for estimating photosynthesis parameters from in situ production profiles

    Science.gov (United States)

    Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

    2017-12-01

    The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of

  15. Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea

    KAUST Repository

    Sawlan, Zaid A

    2012-12-01

    Tsunami concerns have increased in the world after the 2004 Indian Ocean tsunami and the 2011 Tohoku tsunami. Consequently, tsunami models have developed rapidly in the last few years. One of the advanced tsunami models is the GeoClaw tsunami model introduced by LeVeque (2011). This model is adaptive and consistent. Because of different sources of uncertainty in the model, observations are needed to improve model prediction through a data assimilation framework. The model inputs are earthquake parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines a tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction, while the smoother estimates the earthquake parameters. This method reduces the error produced by uncertain inputs. In addition, a state-parameter EnKF is implemented to estimate the earthquake parameters. Although the number of observations is small, the estimated parameters generate a better tsunami prediction than the model alone. Methods and results of prediction experiments in the Red Sea are presented, and the prospect of developing an operational tsunami prediction system in the Red Sea is discussed.

  16. Estimating Parameters in Physical Models through Bayesian Inversion: A Complete Example

    KAUST Repository

    Allmaras, Moritz

    2013-02-07

    All mathematical models of real-world phenomena contain parameters that need to be estimated from measurements, either for realistic predictions or simply to understand the characteristics of the model. Bayesian statistics provides a framework for parameter estimation in which uncertainties about models and measurements are translated into uncertainties in estimates of parameters. This paper provides a simple, step-by-step example, starting from a physical experiment and going through all of the mathematics, to explain the use of Bayesian techniques for estimating the coefficients of gravity and air friction in the equations describing a falling body. In the experiment we dropped an object from a known height and recorded the free fall using a video camera. The video recording was analyzed frame by frame to obtain the distance the body had fallen as a function of time, including measures of uncertainty in our data that we describe as probability densities. We explain the decisions behind the various choices of probability distributions and relate them to observed phenomena. Our measured data are then combined with a mathematical model of a falling body to obtain probability densities on the space of parameters we seek to estimate. We interpret these results and discuss sources of errors in our estimation procedure. © 2013 Society for Industrial and Applied Mathematics.
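
    A compressed version of the same exercise, assuming a linear-drag fall model and hypothetical frame-by-frame distances; the posterior over (g, k) is evaluated on a grid with flat priors and a Gaussian noise model:

        import numpy as np

        def fall_distance(t, g, k):
            # Falling body with linear air friction: d(t) = (g/k)*t - (g/k**2)*(1 - exp(-k*t)).
            return (g / k) * t - (g / k**2) * (1.0 - np.exp(-k * t))

        rng = np.random.default_rng(3)
        t = np.linspace(0.1, 1.0, 25)                     # hypothetical video frame times
        d_obs = fall_distance(t, 9.81, 0.3) + rng.normal(0.0, 0.005, t.size)

        g_grid = np.linspace(9.0, 10.5, 200)
        k_grid = np.linspace(0.01, 1.0, 200)
        loglike = np.empty((g_grid.size, k_grid.size))
        for i, g in enumerate(g_grid):
            for j, k in enumerate(k_grid):
                r = d_obs - fall_distance(t, g, k)
                loglike[i, j] = -0.5 * np.sum((r / 0.005)**2)   # Gaussian measurement noise

        post = np.exp(loglike - loglike.max())            # unnormalized posterior, flat priors
        i, j = np.unravel_index(post.argmax(), post.shape)
        print(g_grid[i], k_grid[j])                       # mode near g = 9.81, k = 0.3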

  17. PWR system simulation and parameter estimation with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Akkurt, Hatice; Colak, Uener E-mail: uc@nuke.hacettepe.edu.tr

    2002-11-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving nonlinear equations to simulate steady-state and transient operational conditions. Overall system has been constructed by connecting individual components to each other. The validity of models for individual components and overall system has been verified. The system response against given transients have been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that neural networks estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within {+-}0.254% for power and between -0.146 and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability whereas the reactivity estimation capability is not significantly affected.

  18. PWR system simulation and parameter estimation with neural networks

    International Nuclear Information System (INIS)

    Akkurt, Hatice; Colak, Uener

    2002-01-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving nonlinear equations to simulate steady-state and transient operational conditions. Overall system has been constructed by connecting individual components to each other. The validity of models for individual components and overall system has been verified. The system response against given transients have been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that neural networks estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability whereas the reactivity estimation capability is not significantly affected

  19. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    Science.gov (United States)

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method that uses constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models, with the goal of improving the smoothing-based two-stage pseudo-least-squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters of the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least-squares estimator in estimation accuracy, at a small price in computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.
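
    To make the baseline concrete, here is the plain two-stage idea the paper improves on, sketched on a logistic ODE with synthetic data (Savitzky-Golay used as the local polynomial smoother; not the paper's constrained variant):

        import numpy as np
        from scipy.signal import savgol_filter
        from scipy.integrate import solve_ivp

        # Logistic ODE x' = a*x - b*x**2 with true (a, b) = (1.0, 0.5).
        sol = solve_ivp(lambda t, x: 1.0*x - 0.5*x**2, (0, 8), [0.1],
                        t_eval=np.linspace(0, 8, 201))
        t, x_true = sol.t, sol.y[0]
        rng = np.random.default_rng(1)
        x_obs = x_true + 0.02 * rng.standard_normal(t.size)

        # Stage 1: local-polynomial smoothing of the state and its derivative.
        dt = t[1] - t[0]
        x_s = savgol_filter(x_obs, window_length=21, polyorder=3)
        dx_s = savgol_filter(x_obs, window_length=21, polyorder=3, deriv=1, delta=dt)

        # Stage 2: the ODE is linear in (a, b) given x and x', so solve by least squares.
        A = np.column_stack([x_s, -x_s**2])
        a_hat, b_hat = np.linalg.lstsq(A, dx_s, rcond=None)[0]
        print(a_hat, b_hat)   # close to (1.0, 0.5)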

  20. Nonparametric estimation of location and scale parameters

    KAUST Repository

    Potgieter, C.J.; Lombard, F.

    2012-01-01

    Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal

  1. Sensor Placement for Modal Parameter Subset Estimation

    DEFF Research Database (Denmark)

    Ulriksen, Martin Dalgaard; Bernal, Dionisio; Damkilde, Lars

    2016-01-01

    The present paper proposes an approach for deciding on sensor placements in the context of modal parameter estimation from vibration measurements. The approach is based on placing sensors, of which the amount is determined a priori, such that the minimum Fisher information that the frequency resp...

  2. Research on Radar Micro-Doppler Feature Parameter Estimation of Propeller Aircraft

    Science.gov (United States)

    He, Zhihua; Tao, Feixiang; Duan, Jia; Luo, Jingsheng

    2018-01-01

    The micro-motion modulation effect of rotating propellers on the radar echo can be a steady feature for aircraft target recognition. Thus, micro-Doppler feature parameter estimation is key to accurate target recognition. In this paper, the radar echo of rotating propellers is modelled and simulated. Based on this model, the distribution characteristics of the micro-motion modulation energy in the time, frequency and time-frequency domains are analyzed. The micro-motion modulation energy produced by the scattering points of the rotating propellers is accumulated using the Inverse-Radon (I-Radon) transform, which can be used to estimate the micro-motion modulation parameters. Finally, the proposed parameter estimation method is shown to be effective with measured data. The micro-motion parameters of aircraft can be used as features for radar target recognition.

  3. Life cycle assessment in green chemistry: overview of key parameters and methodological concerns

    DEFF Research Database (Denmark)

    Tufvesson, Linda M.; Tufvesson, Pär; Woodley, John

    2013-01-01

    assessment (LCA) is a valuable methodology. However, at the planning stage, a full-scale LCA is considered to be too time consuming and complicated. Two reasons for this have been recognised: the method is too comprehensive and it is hard to find inventory data. In this review, key parameters are presented...... with the purpose of reducing the time-consuming steps in LCA. In this review, several LCAs of so-called ‘green chemicals’ are analysed and key parameters and methodological concerns are identified. Further, some conclusions on the environmental performance of the chemicals were drawn. For fossil-based platform chemicals...... chemicals was identified. The environmental performance of bulk chemicals is closely connected to the production of the raw material and thereby to different land use aspects. Here, a lot can be learnt from biofuel LCAs. In many of the reviewed articles focusing on bulk chemicals a comparison regarding fossil

  4. Review of the Palisades pressure vessel accumulated fluence estimate and of the least squares methodology employed

    International Nuclear Information System (INIS)

    Griffin, P.J.

    1998-05-01

    This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a "best estimate" of the accumulated pressure vessel neutron fluence and is consistent with state-of-the-art analysis as detailed in community consensus ASTM standards.

  5. An inventory of nitrous oxide emissions from agriculture in the UK using the IPCC methodology: emission estimate, uncertainty and sensitivity analysis

    Science.gov (United States)

    Brown, L.; Armstrong Brown, S.; Jarvis, S. C.; Syed, B.; Goulding, K. W. T.; Phillips, V. R.; Sneath, R. W.; Pain, B. F.

    Nitrous oxide emission from UK agriculture was estimated, using the IPCC default values of all emission factors and parameters, to be 87 Gg N2O-N in both 1990 and 1995. This estimate was shown, however, to have an overall uncertainty of 62%. The largest component of the emission (54%) was from the direct (soil) sector. Two of the three emission factors applied within the soil sector, EF1 (direct emission from soil) and EF3(PRP) (emission from pasture, range and paddock), were amongst the most influential on the total estimate, producing a ±31% and a +11% to -17% change in emissions, respectively, when varied through the IPCC range from the default value. The indirect sector (from leached N and deposited ammonia) contributed 29% of the total emission and had the largest uncertainty (126%). The factors determining the fraction of N leached (FracLEACH) and emissions from it (EF5) were the two most influential. These parameters are poorly specified and there is great potential to improve the emission estimate for this component. Use of mathematical models (NCYCLE and SUNDIAL) to predict FracLEACH suggested that the IPCC default value for this parameter may be too high for most situations in the UK. Comparison with other UK-derived inventories suggests that the IPCC methodology may overestimate emission. Although the IPCC approach includes additional components relative to the other inventories (most notably emission from indirect sources), estimates for the common components (i.e. fertiliser and animals), and the emission factors used, are higher than those of other inventories. Whilst it is recognised that the IPCC approach is generalised in order to allow widespread applicability, sufficient data are available to specify at least two of the most influential parameters, i.e. EF1 and FracLEACH, more accurately, and so provide an improved estimate of nitrous oxide emissions from UK agriculture.

  6. Parameter estimation and prediction of nonlinear biological systems: some examples

    NARCIS (Netherlands)

    Doeswijk, T.G.; Keesman, K.J.

    2006-01-01

    Rearranging and reparameterizing a discrete-time nonlinear model with polynomial quotient structure in input, output and parameters (x_k = f(Z, p)) leads to a model linear in its (new) parameters. As a result, the parameter estimation problem becomes a so-called errors-in-variables problem for which

  7. Estimation of CO2 emissions from China’s cement production: Methodologies and uncertainties

    International Nuclear Information System (INIS)

    Ke, Jing; McNeil, Michael; Price, Lynn; Khanna, Nina Zheng; Zhou, Nan

    2013-01-01

    In 2010, China’s cement output was 1.9 Gt, which accounted for 56% of world cement production. Total carbon dioxide (CO2) emissions from Chinese cement production could therefore exceed 1.2 Gt. The magnitude of emissions from this single industrial sector in one country underscores the need to understand the uncertainty of current estimates of cement emissions in China. This paper compares several methodologies for calculating CO2 emissions from cement production, including the three main components of emissions: direct emissions from the calcination process for clinker production, direct emissions from fossil fuel combustion and indirect emissions from electricity consumption. This paper examines in detail the differences between common methodologies for each emission component, and considers their effect on total emissions. We then evaluate the overall level of uncertainty implied by the differences among methodologies according to recommendations of the Joint Committee for Guides in Metrology. We find a relative uncertainty in China’s cement-related emissions in the range of 10 to 18%. This result highlights the importance of understanding and refining methods of estimating emissions in this important industrial sector. - Highlights: ► CO2 emission estimates are critical given China’s cement production scale. ► Methodological differences for emission components are compared. ► Results show relative uncertainty in China’s cement-related emissions of about 10%. ► IPCC Guidelines and CSI Cement CO2 and Energy Protocol are recommended.
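
    To show how the three components combine, a back-of-the-envelope sketch; all factor values below are illustrative placeholders, not the paper's data:

        # Three components of cement CO2 emissions (all factors are placeholders).
        cement_Mt = 1900                  # 2010 output, 1.9 Gt
        clinker_Mt = cement_Mt * 0.60     # assuming a 60% clinker-to-cement ratio
        ef_calcination = 0.52             # t CO2 / t clinker, process (calcination) emissions
        ef_fuel = 0.30                    # t CO2 / t clinker, fossil fuel combustion
        elec_kwh_per_t = 90               # kWh electricity per t cement
        grid_ef = 0.8e-3                  # t CO2 / kWh, grid emission factor

        process = clinker_Mt * ef_calcination
        fuel = clinker_Mt * ef_fuel
        indirect = cement_Mt * elec_kwh_per_t * grid_ef
        print(process, fuel, indirect, process + fuel + indirect)   # Mt CO2; total near 1.1 Gt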

  8. Development of Methodologies for the Estimation of Thermal Properties Associated with Aerospace Vehicles

    Science.gov (United States)

    Scott, Elaine P.

    1996-01-01

    A thermal stress analysis is an important aspect in the design of aerospace structures and vehicles such as the High Speed Civil Transport (HSCT) at the National Aeronautics and Space Administration Langley Research Center (NASA-LaRC). These structures are complex and are often composed of numerous components fabricated from a variety of different materials. The thermal loads on these structures induce temperature variations within the structure, which in turn result in the development of thermal stresses. Therefore, a thermal stress analysis requires knowledge of the temperature distributions within the structures, which consequently necessitates accurate knowledge of the thermal properties, boundary conditions and thermal interface conditions associated with the structural materials. The goal of this proposed multi-year research effort was to develop estimation methodologies for the determination of the thermal properties and interface conditions associated with aerospace vehicles. Specific objectives focused on the development and implementation of optimal experimental design strategies and methodologies for the estimation of thermal properties associated with simple composite and honeycomb structures. The strategy used in this multi-year research effort was to first develop methodologies for relatively simple systems and then systematically modify these methodologies to analyze complex structures. This can be thought of as a building-block approach. This strategy was intended to promote maximum usability of the resulting estimation procedure by NASA-LaRC researchers through the design of in-house experimentation procedures and through the use of existing general purpose finite element software.

  9. CTER-rapid estimation of CTF parameters with error assessment.

    Science.gov (United States)

    Penczek, Pawel A; Fang, Jia; Li, Xueming; Cheng, Yifan; Loerke, Justus; Spahn, Christian M T

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user it is necessary to provide an assessment of the errors of fitted parameters values. In this work we introduce CTER, a CTF parameters estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03Å without, and 3.85Å with, inclusion of astigmatism parameters. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Influence of measurement errors and estimated parameters on combustion diagnosis

    International Nuclear Information System (INIS)

    Payri, F.; Molina, S.; Martin, J.; Armas, O.

    2006-01-01

    Thermodynamic diagnosis models are valuable tools for the study of Diesel combustion. Inputs required by such models comprise measured mean and instantaneous variables, together with suitable values for the adjustable parameters used in different submodels. In the case of measured variables, one may estimate the uncertainty associated with measurement errors; however, the influence of errors in model parameter estimation may not be so easily established on an experimental basis. In this paper, a simulated pressure cycle has been used along with known input parameters, so that any uncertainty in the inputs is avoided. Then, the influence of errors in the measured variables and in the geometric and heat transmission parameters on the results of a combustion diagnosis model for direct injection diesel engines has been studied. This procedure allowed us to establish the relative importance of these parameters and to set limits on the maximal errors of the model, accounting for both the maximal expected errors in the input parameters and the sensitivity of the model to those errors.

  11. Aircraft parameter estimation - A tool for development of ...

    Indian Academy of Sciences (India)

    In addition, actuator performance and controller gains may be flight condition dependent. Moreover, this approach may result in open-loop parameter estimates with low accuracy. 6. Aerodynamic databases for high fidelity flight simulators. Estimation of a comprehensive aerodynamic model suitable for a flight simulator is an.

  12. A systematic methodology to estimate added sugar content of foods.

    Science.gov (United States)

    Louie, J C Y; Moshtaghian, H; Boylan, S; Flood, V M; Rangan, A M; Barclay, A W; Brand-Miller, J C; Gill, T P

    2015-02-01

    The effect of added sugar on health is a topical area of research. However, there is currently no analytical or other method to easily distinguish between added sugars and naturally occurring sugars in foods. This study aimed to develop a systematic methodology to estimate added sugar values on the basis of analytical data and the ingredients of foods. A 10-step protocol was developed, starting with objective measures (six steps) and followed by more subjective estimation (four steps) if insufficient objective data are available. The method was applied to an Australian food composition database (AUSNUT2007) as an example. Out of the 3874 foods available in AUSNUT2007, 2977 foods (77%) were assigned an estimated value on the basis of objective measures (steps 1-6), and 897 (23%) were assigned a subjectively estimated value (steps 7-10). Repeatability analysis showed good repeatability for the estimated values of this method. We propose that this method can be considered a standardised approach for the estimation of the added sugar content of foods, to improve cross-study comparison.

  13. Estimation of the laser cutting operating cost by support vector regression methodology

    Science.gov (United States)

    Jović, Srđan; Radović, Aleksandar; Šarkoćević, Živče; Petković, Dalibor; Alizamir, Meysam

    2016-09-01

    Laser cutting is a popular manufacturing process utilized to cut various types of materials economically. The operating cost is affected by laser power, cutting speed, assist gas pressure, nozzle diameter and focal point position, as well as by the workpiece material. In this article, the process factors investigated were laser power, cutting speed, air pressure and focal point position. The aim of this work is to relate the operating cost to the process parameters mentioned above. CO2 laser cutting of stainless steel of medical grade AISI 316L has been investigated. The main goal was to analyze the operating cost through the laser power, cutting speed, air pressure, focal point position and material thickness. Since estimating the laser operating cost is a complex, non-linear task, soft computing optimization algorithms can be used. The intelligent soft computing scheme support vector regression (SVR) was implemented. The performance of the proposed estimator was confirmed by the simulation results. The SVR results are then compared with those of an artificial neural network and genetic programming. According to the results, a greater improvement in estimation accuracy can be achieved through SVR compared to the other soft computing methodologies. The new optimization methods benefit from the soft computing capabilities of global optimization and multiobjective optimization, rather than choosing a starting point by trial and error and combining multiple criteria into a single criterion.
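
    A hedged sketch of SVR on this kind of process-parameter regression, using scikit-learn and synthetic stand-in data (the feature ranges and the cost law below are invented for illustration, not taken from the paper):

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        # Columns: [laser power (W), cutting speed (mm/min), air pressure (bar), focus (mm)].
        X = rng.uniform([500, 1000, 5, -3], [2000, 5000, 15, 3], size=(200, 4))
        cost = (0.002*X[:, 0] - 0.0004*X[:, 1] + 0.1*X[:, 2] + 0.05*X[:, 3]**2
                + 0.1*rng.standard_normal(200))       # hypothetical cost law + noise

        X_tr, X_te, y_tr, y_te = train_test_split(X, cost, random_state=1)
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
        model.fit(X_tr, y_tr)
        print(model.score(X_te, y_te))   # R^2 on held-out data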

  14. On the Nature of SEM Estimates of ARMA Parameters.

    Science.gov (United States)

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2002-01-01

    Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…

  15. Advanced Best-Estimate Methodologies for Thermal-Hydraulics Stability Analyses with TRACG code and Improvements on Operating Boiling Water Reactors

    International Nuclear Information System (INIS)

    Vedovi, J.; Trueba, M.; Ibarra, L; Espino, M.; Hoang, H.

    2016-01-01

    In recent years GE Hitachi has introduced two advanced methodologies to address thermal-hydraulic instabilities in Boiling Water Reactors (BWRs): the “Detect and Suppress Solution - Confirmation Density (DSS-CD)” and the “GEH Simplified Stability Solution (GS3).” These two methodologies are based on Best-Estimate Plus Uncertainty (BEPU) analyses and provide significant improvements in safety, plant maneuvering and fuel economics with respect to existing solutions. The DSS-CD and GS3 solutions have recently been approved by the United States Nuclear Regulatory Commission. This paper describes the main characteristics of these two stability methodologies and shares the experience of their recent implementation in operating BWRs. The BEPU approach provided a much deeper understanding of the parameters affecting instabilities in operating BWRs and allowed for better calculation of plant setpoints by improving plant maneuvering restrictions and reducing manual operator actions. The DSS-CD and GS3 methodologies are both based on safety analyses performed with the best-estimate system code TRACG. The assessment of uncertainty is performed following the Code Scaling, Applicability and Uncertainty (CSAU) methodology documented in NUREG/CR-5249. The two solutions have already been implemented in 18 BWR units combined, with 7 more units in the process of transitioning. The main results demonstrate a significant decrease (>0.1) in the stability-based Operating Limit Minimum Critical Power Ratio (OLMCPR), which can result in significant fuel savings, and an increase in allowable stability plant setpoints, which addresses instability events such as the one that occurred at the Fermi 2 plant in 2015 and can help prevent unnecessary scrams. The paper also describes the advantages of reduced plant maneuvering resulting from the transition to these solutions; in particular, the history of a BWR/6 transition to DSS-CD is discussed.

  16. Small sample GEE estimation of regression parameters for longitudinal data.

    Science.gov (United States)

    Paul, Sudhir; Zhang, Xuemao

    2014-09-28

    Longitudinal (clustered) response data arise in many biostatistical applications which, in general, cannot be assumed to be independent. The generalized estimating equation (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias-adjusted GEE estimators of the regression parameters in longitudinal data are obtained for when the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both bias-adjusted methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of the correlation structure, and impact of cluster size on bias correction. Both methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias-adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤ 50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.

  17. Parameter estimation techniques and uncertainty in ground water flow model predictions

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Davis, P.A.

    1990-01-01

    Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs

  18. Genetic Parameter Estimates for Metabolizing Two Common Pharmaceuticals in Swine

    Directory of Open Access Journals (Sweden)

    Jeremy T. Howard

    2018-02-01

    Full Text Available In livestock, the regulation of drugs used to treat livestock has received increased attention and it is currently unknown how much of the phenotypic variation in drug metabolism is due to the genetics of an animal. Therefore, the objective of the study was to determine the amount of phenotypic variation in fenbendazole and flunixin meglumine drug metabolism due to genetics. The population consisted of crossbred female and castrated male nursery pigs (n = 198) that were sired by boars represented by four breeds. The animals were spread across nine batches. Drugs were administered intravenously and blood collected a minimum of 10 times over a 48 h period. Genetic parameters for the parent drug and metabolite concentration within each drug were estimated based on pharmacokinetics (PK) parameters or concentrations across time utilizing a random regression model. The PK parameters were estimated using a non-compartmental analysis. The PK model included fixed effects of sex and breed of sire along with random sire and batch effects. The random regression model utilized Legendre polynomials and included a fixed population concentration curve, sex, and breed of sire effects along with a random sire deviation from the population curve and batch effect. The sire effect included the intercept for all models except for the fenbendazole metabolite (i.e., intercept and slope). The mean heritability across PK parameters for the fenbendazole and flunixin meglumine parent drug (metabolite) was 0.15 (0.18) and 0.31 (0.40), respectively. For the parent drug (metabolite), the mean heritability across time was 0.27 (0.60) and 0.14 (0.44) for fenbendazole and flunixin meglumine, respectively. The errors surrounding the heritability estimates for the random regression model were smaller compared to estimates obtained from PK parameters. Across both the PK and plasma drug concentration across model, a moderate heritability was estimated. The model that utilized the plasma drug

  19. Genetic Parameter Estimates for Metabolizing Two Common Pharmaceuticals in Swine

    Science.gov (United States)

    Howard, Jeremy T.; Ashwell, Melissa S.; Baynes, Ronald E.; Brooks, James D.; Yeatts, James L.; Maltecca, Christian

    2018-01-01

    In livestock, the regulation of drugs used to treat livestock has received increased attention and it is currently unknown how much of the phenotypic variation in drug metabolism is due to the genetics of an animal. Therefore, the objective of the study was to determine the amount of phenotypic variation in fenbendazole and flunixin meglumine drug metabolism due to genetics. The population consisted of crossbred female and castrated male nursery pigs (n = 198) that were sired by boars represented by four breeds. The animals were spread across nine batches. Drugs were administered intravenously and blood collected a minimum of 10 times over a 48 h period. Genetic parameters for the parent drug and metabolite concentration within each drug were estimated based on pharmacokinetics (PK) parameters or concentrations across time utilizing a random regression model. The PK parameters were estimated using a non-compartmental analysis. The PK model included fixed effects of sex and breed of sire along with random sire and batch effects. The random regression model utilized Legendre polynomials and included a fixed population concentration curve, sex, and breed of sire effects along with a random sire deviation from the population curve and batch effect. The sire effect included the intercept for all models except for the fenbendazole metabolite (i.e., intercept and slope). The mean heritability across PK parameters for the fenbendazole and flunixin meglumine parent drug (metabolite) was 0.15 (0.18) and 0.31 (0.40), respectively. For the parent drug (metabolite), the mean heritability across time was 0.27 (0.60) and 0.14 (0.44) for fenbendazole and flunixin meglumine, respectively. The errors surrounding the heritability estimates for the random regression model were smaller compared to estimates obtained from PK parameters. Across both the PK and plasma drug concentration across model, a moderate heritability was estimated. The model that utilized the plasma drug

  20. Measurement-Based Transmission Line Parameter Estimation with Adaptive Data Selection Scheme

    DEFF Research Database (Denmark)

    Li, Changgang; Zhang, Yaping; Zhang, Hengxu

    2017-01-01

    Accurate parameters of transmission lines are critical for power system operation and control decision making. Transmission line parameter estimation based on measured data is an effective way to enhance the validity of the parameters. This paper proposes a multi-point transmission line parameter...

  1. Automatic estimation of elasticity parameters in breast tissue

    Science.gov (United States)

    Skerl, Katrin; Cochran, Sandy; Evans, Andrew

    2014-03-01

    Shear wave elastography (SWE), a novel ultrasound imaging technique, can provide unique information about cancerous tissue. To estimate elasticity parameters, a region of interest (ROI) is manually positioned over the stiffest part of the shear wave image (SWI). The aim of this work is to estimate the elasticity parameters, i.e. mean elasticity, maximal elasticity and standard deviation, fully automatically. Ultrasonic SWIs of a breast elastography phantom and of breast tissue in vivo were acquired using the Aixplorer system (SuperSonic Imagine, Aix-en-Provence, France). First, the SWI within the ultrasonic B-mode image was detected using MATLAB, then the elasticity values were extracted. The ROI was automatically positioned over the stiffest part of the SWI and the elasticity parameters were calculated. Finally, all values were saved in a spreadsheet that also contains the patient's study ID. This spreadsheet is readily available to physicians and clinical staff for further evaluation, which increases efficiency. The algorithm simplifies the handling, especially for the performance and evaluation of clinical trials. The SWE processing method allows physicians easy access to the elasticity parameters of examinations from their own and other institutions. This reduces clinical time and effort and simplifies the evaluation of data in clinical trials. Furthermore, reproducibility will be improved.

  2. Systematic methodology for estimating direct capital costs for blanket tritium processing systems

    International Nuclear Information System (INIS)

    Finn, P.A.

    1985-01-01

    This paper describes the methodology developed for estimating the relative capital costs of blanket processing systems. The capital costs of the nine blanket concepts selected in the Blanket Comparison and Selection Study are presented and compared

  3. Adaptive Parameter Estimation of Person Recognition Model in a Stochastic Human Tracking Process

    Science.gov (United States)

    Nakanishi, W.; Fuse, T.; Ishikawa, T.

    2015-05-01

    This paper aims at the estimation of the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations these parameters may change with the observation conditions and the difficulty of predicting human positions. Thus in this paper we formulate an adaptive parameter estimation using a general state space model. First we explain how to formulate human tracking in a general state space model and describe its components. Then, referring to previous research, we use the Bhattacharyya coefficient to formulate the observation model of the general state space model, which corresponds to the person recognition model. The observation model in this paper is a function of the Bhattacharyya coefficient with one unknown parameter. Finally we sequentially estimate this parameter on a real dataset under several settings. Results showed that the sequential parameter estimation succeeded and was consistent with observation situations such as occlusions.
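
    The observation model hinges on the Bhattacharyya coefficient between appearance histograms. A small sketch follows; the exponential form and the parameter lam below are an assumed illustration of a one-parameter observation model, not necessarily the paper's exact function:

        import numpy as np

        def bhattacharyya(p, q):
            # Bhattacharyya coefficient between two normalized histograms (1 = identical).
            p = p / p.sum()
            q = q / q.sum()
            return np.sum(np.sqrt(p * q))

        def obs_likelihood(bc, lam):
            # One-parameter observation model, e.g. L proportional to exp(-lam*(1 - bc));
            # lam is the kind of parameter the filter would estimate online.
            return np.exp(-lam * (1.0 - bc))

        rng = np.random.default_rng(0)
        h_ref = rng.random(16)                   # reference appearance histogram
        h_cand = h_ref + 0.1 * rng.random(16)    # candidate histogram from the current frame
        bc = bhattacharyya(h_ref, h_cand)
        print(bc, obs_likelihood(bc, lam=20.0))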

  4. CosmoSIS: A System for MC Parameter Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Zuntz, Joe [Manchester U.; Paterno, Marc [Fermilab; Jennings, Elise [Chicago U., EFI; Rudd, Douglas [U. Chicago; Manzotti, Alessandro [Chicago U., Astron. Astrophys. Ctr.; Dodelson, Scott [Chicago U., Astron. Astrophys. Ctr.; Bridle, Sarah [Manchester U.; Sehrish, Saba [Fermilab; Kowalkowski, James [Fermilab

    2015-01-01

    Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including camb, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis.

  5. A Parameter Estimation Method for Nonlinear Systems Based on Improved Boundary Chicken Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Shaolong Chen

    2016-01-01

    Full Text Available Parameter estimation is an important problem in nonlinear system modeling and control. Through constructing an appropriate fitness function, parameter estimation of a system can be converted to a multidimensional parameter optimization problem. As a novel swarm intelligence algorithm, chicken swarm optimization (CSO) has attracted much attention owing to its good global convergence and robustness. In this paper, a method based on improved boundary chicken swarm optimization (IBCSO) is proposed for parameter estimation of nonlinear systems, demonstrated and tested on the Lorenz system and a coupled motor system. Furthermore, we have analyzed the influence of the time series on the estimation accuracy. Computer simulation results show that the method is feasible and has desirable performance for parameter estimation of nonlinear systems.
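
    A minimal sketch of the core idea, turning parameter estimation into an optimization over a sum-of-squared-errors fitness function; SciPy's differential_evolution stands in for the paper's improved boundary chicken swarm optimizer, and all numerical settings are illustrative:

        import numpy as np
        from scipy.integrate import odeint
        from scipy.optimize import differential_evolution

        def lorenz(state, t, sigma, rho, beta):
            x, y, z = state
            return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

        t = np.linspace(0.0, 2.0, 200)
        observed = odeint(lorenz, [1.0, 1.0, 1.0], t, args=(10.0, 28.0, 8.0 / 3.0))

        def fitness(params):
            # Sum of squared errors between observed and simulated trajectories.
            simulated = odeint(lorenz, [1.0, 1.0, 1.0], t, args=tuple(params))
            return np.sum((simulated - observed) ** 2)

        result = differential_evolution(fitness, bounds=[(5, 15), (20, 35), (1, 5)],
                                        maxiter=50, seed=1)
        print(result.x)  # should move toward (10, 28, 8/3)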

  6. Test models for improving filtering with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

  7. Variational estimates of point-kinetics parameters

    International Nuclear Information System (INIS)

    Favorite, J.A.; Stacey, W.M. Jr.

    1995-01-01

    Variational estimates of the effect of flux shifts on the integral reactivity parameter of the point-kinetics equations and on regional power fractions were calculated for a variety of localized perturbations in two light water reactor (LWR) model problems representing a small, tightly coupled core and a large, loosely coupled core. For the small core, the flux shifts resulting from even relatively large localized reactivity changes (∼600 pcm) were small, and the standard point-kinetics approximation estimates of reactivity were in error by only ∼10% or less, while the variational estimates were accurate to within ∼1%. For the larger core, significant (>50%) flux shifts occurred in response to local perturbations, leading to errors of the same magnitude in the standard point-kinetics approximation of the reactivity worth. For positive reactivity, the error in the variational estimate of reactivity was only a few percent in the larger core, and the resulting transient power prediction was 1 to 2 orders of magnitude more accurate than with the standard point-kinetics approximation. For a large, local negative reactivity insertion resulting in a large flux shift, the accuracy of the variational estimate broke down. The variational estimate of the effect of flux shifts on reactivity in point-kinetics calculations of transients in LWR cores was found to generally result in greatly improved accuracy, relative to the standard point-kinetics approximation, the exception being for large negative reactivity insertions with large flux shifts in large, loosely coupled cores

  8. Parameter estimation in tree graph metabolic networks

    Directory of Open Access Journals (Sweden)

    Laura Astola

    2016-09-01

    Full Text Available We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis–Menten kinetics. In reality, the catalytic rates, which are affected among other factors by kinetic constants and enzyme concentrations, change in time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to the usually applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.

  9. Parameter estimation in tree graph metabolic networks.

    Science.gov (United States)

    Astola, Laura; Stigter, Hans; Gomez Roldan, Maria Victoria; van Eeuwijk, Fred; Hall, Robert D; Groenenboom, Marian; Molenaar, Jaap J

    2016-01-01

    We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis-Menten kinetics. In reality, the catalytic rates, which are affected among other factors by kinetic constants and enzyme concentrations, change in time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to the usually applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.

  10. Physical parameter determinations of young Ms. Taking advantage of the Virtual Observatory to compare methodologies

    Science.gov (United States)

    Bayo, A.; Rodrigo, C.; Barrado, D.; Allard, F.

    One of the very first steps astronomers working in stellar physics perform to advance in their studies is to determine the most common/relevant physical parameters of the objects of study (effective temperature, bolometric luminosity, surface gravity, etc.). Different methodologies exist depending on the nature of the data, the intrinsic properties of the objects, etc. One common approach is to compare the observational data with theoretical models passed through some simulator that will leave in the synthetic data the same imprint as the observational data carries, and see what set of parameters reproduces the observations best. Even in this case, depending on the kind of data the astronomer has, the methodology changes slightly. After parameters are published, the community tends to quote, praise and criticize them, sometimes paying little attention to whether the possible discrepancies come from the theoretical models, the data themselves or just the methodology used in the analysis. In this work we perform the simple, yet interesting, exercise of comparing the effective temperatures obtained via SED fitting and via more detailed spectral fitting (to the same grid of models) for a sample of well known and characterized young M-type objects belonging to different star forming regions, and show how differences in temperature of up to 350 K can be expected just from the difference in methodology/data used. On the other hand, we show how these differences are smaller for colder objects, even when the complexity of the fit increases, for example by introducing differential extinction. To perform this exercise we benefit greatly from the framework offered by the Virtual Observatory.

  11. Estimation of metallurgical parameters of flotation process from froth visual features

    Directory of Open Access Journals (Sweden)

    Mohammad Massinaei

    2015-06-01

    Full Text Available The estimation of metallurgical parameters of flotation process from froth visual features is the ultimate goal of a machine vision based control system. In this study, a batch flotation system was operated under different process conditions and metallurgical parameters and froth image data were determined simultaneously. Algorithms have been developed for measuring textural and physical froth features from the captured images. The correlation between the froth features and metallurgical parameters was successfully modeled, using artificial neural networks. It has been shown that the performance parameters of flotation process can be accurately estimated from the extracted image features, which is of great importance for developing automatic control systems.
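
    A toy sketch of the final modeling step, mapping extracted froth image features to metallurgical parameters with a small neural network; the feature set, targets and synthetic data below are hypothetical stand-ins for the paper's measurements:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        # Hypothetical training set: rows = image frames, columns = extracted froth
        # features (e.g. bubble size, grey-level texture measures).
        X = rng.random((200, 5))
        # Hypothetical targets: concentrate grade (%) and recovery (%).
        y = np.column_stack([20 + 10 * X[:, 0] - 5 * X[:, 1],
                             60 + 25 * X[:, 2] - 10 * X[:, 3]])

        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                             random_state=0).fit(X, y)
        print(model.predict(X[:3]))  # estimated grade/recovery for the first frames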

  12. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.

    Science.gov (United States)

    Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf

    2010-05-25

    Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. In this work we present a set-based framework that allows one to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach allows wrong model hypotheses to be conclusively ruled out. The second study focuses on parameter estimation, and shows that the proposed method allows the evaluation of the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.

  13. A regressive methodology for estimating missing data in rainfall daily time series

    Science.gov (United States)

    Barca, E.; Passarella, G.

    2009-04-01

    The "presence" of gaps in environmental data time series represents a very common, but extremely critical problem, since it can produce biased results (Rubin, 1976). Missing data plagues almost all surveys. The problem is how to deal with missing data once it has been deemed impossible to recover the actual missing values. Apart from the amount of missing data, another issue which plays an important role in the choice of any recovery approach is the evaluation of "missingness" mechanisms. When data missing is conditioned by some other variable observed in the data set (Schafer, 1997) the mechanism is called MAR (Missing at Random). Otherwise, when the missingness mechanism depends on the actual value of the missing data, it is called NCAR (Not Missing at Random). This last is the most difficult condition to model. In the last decade interest arose in the estimation of missing data by using regression (single imputation). More recently multiple imputation has become also available, which returns a distribution of estimated values (Scheffer, 2002). In this paper an automatic methodology for estimating missing data is presented. In practice, given a gauging station affected by missing data (target station), the methodology checks the randomness of the missing data and classifies the "similarity" between the target station and the other gauging stations spread over the study area. Among different methods useful for defining the similarity degree, whose effectiveness strongly depends on the data distribution, the Spearman correlation coefficient was chosen. Once defined the similarity matrix, a suitable, nonparametric, univariate, and regressive method was applied in order to estimate missing data in the target station: the Theil method (Theil, 1950). Even though the methodology revealed to be rather reliable an improvement of the missing data estimation can be achieved by a generalization. A first possible improvement consists in extending the univariate technique to

  14. Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation

    Science.gov (United States)

    Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei

    2018-04-01

    Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.
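
    A stripped-down sketch of an ensemble parameter update of this kind: the parameter ensemble is nudged by regressing the innovation onto it. The toy linear model, scalar observation and all values are illustrative only, far simpler than the CM2.1 setting:

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical 40-member ensemble of one convection parameter and the
        # corresponding model-simulated observable (a toy linear relation).
        param = rng.normal(1.5, 0.3, size=40)
        sim_obs = 2.0 * param + rng.normal(0.0, 0.1, size=40)
        obs, obs_err = 3.6, 0.2  # measurement and its error standard deviation

        # Kalman-style update: regress the innovation onto the parameter ensemble.
        cov = np.cov(param, sim_obs)
        gain = cov[0, 1] / (cov[1, 1] + obs_err ** 2)
        param_post = param + gain * (obs - sim_obs)
        print(param.mean(), "->", param_post.mean())  # nudged toward obs / 2 = 1.8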

  15. Methodology for estimating reprocessing costs for nuclear fuels

    International Nuclear Information System (INIS)

    Carter, W.L.; Rainey, R.H.

    1980-02-01

    A technological and economic evaluation of reprocessing requirements for alternate fuel cycles requires a common assessment method and a common basis to which various cycles can be related. A methodology is described for the assessment of alternate fuel cycles utilizing a side-by-side comparison of functional flow diagrams of major areas of the reprocessing plant with corresponding diagrams of the well-developed Purex process as installed in the Barnwell Nuclear Fuel Plant (BNFP). The BNFP treats 1500 metric tons of uranium per year (MTU/yr). Complexity and capacity factors are determined for adjusting the estimated facility and equipment costs of BNFP to determine the corresponding costs for the alternate fuel cycle. Costs of capacities other than the reference 1500 MT of heavy metal per year are estimated by the use of scaling factors. Unit costs of reprocessed fuel are calculated using a discounted cash flow analysis for three economic bases to show the effect of low-risk, typical, and high-risk financing methods
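
    The capacity adjustment can be sketched as a power-law scaling rule; the generic six-tenths exponent and the numbers below are placeholders, not the study's factors:

        def scaled_capital_cost(reference_cost, reference_capacity, capacity,
                                complexity_factor=1.0, exponent=0.6):
            """Scale a reference plant cost to another capacity (generic power law)."""
            return complexity_factor * reference_cost * (capacity / reference_capacity) ** exponent

        # e.g. adjusting a 1500 MTU/yr reference plant cost to a 900 MTU/yr facility
        print(scaled_capital_cost(1.0e9, 1500.0, 900.0))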

  16. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithms (GAs) optimization procedure for the estimation of such parameters. The Genetic Algorithms' search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. One possible utilization of this information is to create and update an archive with the set of best solutions found at each generation and then to analyze the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which influence the model outputs little. In this sense, besides estimating the parameter values efficiently, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output, as illustrated by the sketch below.
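
    A toy, mutation-only GA illustrating the archive idea: tracking the per-parameter spread of the population across generations shows the sensitive parameter stabilizing first. The two-parameter objective and all settings are invented, and crossover is omitted for brevity:

        import numpy as np

        rng = np.random.default_rng(0)

        def objective(pop):
            # Toy model: the first parameter is sensitive, the second barely matters.
            return (pop[:, 0] - 2.0) ** 2 + 0.01 * (pop[:, 1] - 5.0) ** 2

        pop = rng.uniform(0.0, 10.0, size=(50, 2))
        archive_std = []
        for generation in range(60):
            best = pop[np.argsort(objective(pop))[:25]]       # parent selection
            pop = best[rng.integers(0, 25, size=50)] + rng.normal(0.0, 0.3, (50, 2))
            pop = np.clip(pop, 0.0, 10.0)                     # mutation + replacement
            archive_std.append(pop.std(axis=0))               # archive statistics

        # The sensitive parameter stabilises (small spread) long before the weak one.
        print(np.round(archive_std[10], 2), np.round(archive_std[-1], 2))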

  17. Pattern statistics on Markov chains and sensitivity to parameter estimation

    Directory of Open Access Journals (Sweden)

    Nuel Grégory

    2006-10-01

    Full Text Available Abstract Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition. Usually its parameters must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant words common to a set of sequences, ...). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta-method to give an explicit expression for σ, the standard deviation of a pattern statistic. This result is validated using simulations, and a simple pattern study is also considered. Conclusion: We establish that the use of a high order Markov model could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation.

  18. Hierarchical parameter estimation of DFIG and drive train system in a wind turbine generator

    Institute of Scientific and Technical Information of China (English)

    Xueping PAN; Ping JU; Feng WU; Yuqing JIN

    2017-01-01

    A new hierarchical parameter estimation method for doubly fed induction generator (DFIG) and drive train system in a wind turbine generator (WTG) is proposed in this paper. Firstly, the parameters of the DFIG and the drive train are estimated locally under different types of disturbances. Secondly, a coordination estimation method is further applied to identify the parameters of the DFIG and the drive train simultaneously with the purpose of attaining the globally optimal estimation results. The main benefit of the proposed scheme is the improved estimation accuracy. Estimation results confirm the applicability of the proposed estimation technique.

  19. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    Science.gov (United States)

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.

  20. Rational Choice of the Investment Project Using Interval Estimates of the Initial Parameters

    Directory of Open Access Journals (Sweden)

    Kotsyuba Oleksiy S.

    2016-11-01

    Full Text Available The article is dedicated to the development of instruments to support decision-making on the problem of choosing the best investment project in a situation when the initial quantitative parameters of the considered investment alternatives are described by interval estimates. In terms of managing the risk caused by interval uncertainty of the initial data, the study is limited to the component (aspect) of risk measured as the degree of possibility of discrepancy between the resulting economic indicator (criterion) and its normative level (the norm). An important hypothesis underlying the formalization proposed in the work is the presence, for some or all of the projects from which the choice is made, of a risk of poor rate of return in terms of net present (current) value. Based upon relevant developments within the framework of the fuzzy-set methodology and interval analysis, a model is formulated for choosing an optimal investment project from the set of alternative options under the interval formulation of the problem. In this case it is assumed that the indicators of economic attractiveness (performance) of the compared directions of real investment are described either by interval estimates or by possibility distribution functions. The proposed model is tested on a conditional numerical example, which demonstrated its practical viability.

  1. A statistical methodology for the estimation of extreme wave conditions for offshore renewable applications

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Kalogeri, Christina; Galanis, George

    2015-01-01

    ...and post-process outputs from a high resolution numerical wave modeling system for extreme wave estimation based on the significant wave height. This approach is demonstrated through the data analysis at a relatively deep water site, FINO 1, as well as a relatively shallow water area, coastal site Horns Rev, which is located in the North Sea, west of Denmark. The post-processing targets at correcting the modeled time series of the significant wave height, in order to match the statistics of the corresponding measurements, including not only the conventional parameters such as the mean and standard... as a characteristic index of extreme wave conditions. The results from the proposed methodology seem to be in a good agreement with the measurements at both the relatively deep, open water and the shallow, coastal water sites, providing a potentially useful tool for offshore renewable energy applications. © 2015...

  2. A Note On the Estimation of the Poisson Parameter

    Directory of Open Access Journals (Sweden)

    S. S. Chitgopekar

    1985-01-01

    The note considers the Poisson distribution when there are errors in observing the zeros and ones and obtains both the maximum likelihood and moments estimates of the Poisson mean and the error probabilities. It is interesting to note that neither method gives unique estimates of these parameters unless the error probabilities are functionally related. However, it is equally interesting to observe that the estimate of the Poisson mean does not depend on the functional relationship between the error probabilities.

  3. Australian methodology for the estimation of greenhouse gas emissions and sinks: Agriculture: Workbook for livestock: Workbook 6.0

    Energy Technology Data Exchange (ETDEWEB)

    Bureau of Resource Sciences, Canberra, ACT (Australia)

    1994-12-31

    This workbook details a methodology for estimating methane emissions from Australian livestock. The workbook is designed to be consistent with international guidelines and takes into account special Australian conditions. While livestock are regarded as a significant source of anthropogenic methane emissions, the document also acknowledges that they do not provide sinks for methane or any other greenhouse gas. Methane can originate both from fermentation processes in the digestive tracts of all livestock and from manure under certain management conditions. Methane emissions were estimated for beef cattle, dairy cattle, sheep, pigs, poultry, goats, horses, deer, buffalo, camels, emus and ostriches, alpacas, and donkeys and mules. Two methodologies were used to estimate emissions. One is the standard Intergovernmental Panel on Climate Change (IPCC) Tier 1 methodology, which is needed to provide inter-country comparisons of emissions. The other was developed by the Inventory Methodology Working Group and represents the best current Australian method for estimating greenhouse gas emissions from Australian livestock. (author). 6 tabs., 22 refs.
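
    A Tier 1 style inventory reduces to multiplying livestock populations by default per-head emission factors. The sketch below shows the arithmetic only; the factors and herd sizes are illustrative placeholders, not the workbook's values:

        # Tier 1 style estimate: emissions = head count x default emission factor.
        EF_KG_CH4_PER_HEAD_YR = {"beef_cattle": 60.0, "dairy_cattle": 80.0, "sheep": 8.0}
        herd = {"beef_cattle": 25_000_000, "dairy_cattle": 3_000_000, "sheep": 120_000_000}

        total_gg = sum(herd[k] * EF_KG_CH4_PER_HEAD_YR[k] for k in herd) / 1e6
        print(f"Enteric CH4: {total_gg:.0f} Gg/yr")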

  4. PAPIRUS, a parallel computing framework for sensitivity analysis, uncertainty propagation, and estimation of parameter distribution

    International Nuclear Information System (INIS)

    Heo, Jaeseok; Kim, Kyung Doo

    2015-01-01

    Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of the sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform the model calibration, uncertainty propagation, Chi-square linearity test, and sensitivity analysis for both linear and nonlinear problems. The PAPIRUS was developed by implementing multiple packages of methodologies, and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in the PAPIRUS with multiple computing resources and proper communications between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description about the PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity test, and sensitivity analysis implemented in the toolkit with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper

  5. PAPIRUS, a parallel computing framework for sensitivity analysis, uncertainty propagation, and estimation of parameter distribution

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Jaeseok, E-mail: jheo@kaeri.re.kr; Kim, Kyung Doo, E-mail: kdkim@kaeri.re.kr

    2015-10-15

    Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of the sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform the model calibration, uncertainty propagation, Chi-square linearity test, and sensitivity analysis for both linear and nonlinear problems. The PAPIRUS was developed by implementing multiple packages of methodologies, and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in the PAPIRUS with multiple computing resources and proper communications between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description about the PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity test, and sensitivity analysis implemented in the toolkit with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.

  6. PARAMETER ESTIMATION IN BREAD BAKING MODEL

    Directory of Open Access Journals (Sweden)

    Hadiyanto Hadiyanto

    2012-05-01

    Full Text Available Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure, i.e. first the heat and mass transfer related parameters, then the parameters related to product transformations, and finally the product quality parameters. There was a fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted well on the behavior under dynamic convective operation and on combined convective and microwave operation. It was expected that the suitability between model and baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels.

  7. Improved Differential Evolution Algorithm for Parameter Estimation to Improve the Production of Biochemical Pathway

    Directory of Open Access Journals (Sweden)

    Chuii Khim Chong

    2012-06-01

    Full Text Available This paper introduces an improved Differential Evolution algorithm (IDE) which aims at improving the performance of parameter estimation for metabolic pathway data, in order to simulate the glycolysis pathway of yeast. Metabolic pathway data are expected to be of significant help in the development of efficient tools in kinetic modeling and parameter estimation platforms. Many computational algorithms face obstacles due to noisy data and the difficulty of estimating a myriad of parameters, and they require longer computational time. The proposed algorithm (IDE) is a hybrid of a Differential Evolution algorithm (DE) and a Kalman Filter (KF). The outcome of IDE is shown to be superior to the Genetic Algorithm (GA) and DE. The experimental results of IDE show optimal estimated kinetic parameter values, shorter computation time, and increased accuracy of simulated results compared with other estimation algorithms.

  8. Estimation of real-time runway surface contamination using flight data recorder parameters

    Science.gov (United States)

    Curry, Donovan

    Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise the longitudinal, lateral and normal forces due to landing are calculated, along with the individual deceleration components existent when an aircraft comes to rest during ground roll. In order to validate this hypothesis a six degree of freedom aircraft model had to be created and landing tests had to be simulated on different surfaces. The simulated aircraft model includes a high fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of unknown parameters is also presented in the research effort. With all needed parameters, a comparison and validation with simulated and estimated data, under different runway conditions, is performed. Finally, this report presents results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analyses have been performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to give a reasonably accurate estimate when compared to the simulated friction coefficient. This is also true when the FDR and estimated parameters are subjected to white noise and when crosswind is introduced to the simulation. After the linear analysis the...

  9. Overview and benchmark analysis of fuel cell parameters estimation for energy management purposes

    Science.gov (United States)

    Kandidayeni, M.; Macias, A.; Amamou, A. A.; Boulon, L.; Kelouwani, S.; Chaoui, H.

    2018-03-01

    Proton exchange membrane fuel cells (PEMFCs) have become the center of attention for energy conversion in many areas, such as the automotive industry, where they confront highly dynamic behavior resulting in variation of their characteristics. In order to ensure appropriate modeling of PEMFCs, accurate parameter estimation is in demand. However, parameter estimation of PEMFC models is highly challenging due to their multivariate, nonlinear, and complex essence. This paper comprehensively reviews PEMFC model parameter estimation methods with a specific view to online identification algorithms, which are considered the basis of global energy management strategy design, to estimate the linear and nonlinear parameters of a PEMFC model in real time. In this respect, different PEMFC models with different categories and purposes are discussed first. Subsequently, a thorough investigation of PEMFC parameter estimation methods in the literature is conducted in terms of applicability. Three potential algorithms for online applications, Recursive Least Squares (RLS), the Kalman filter, and the extended Kalman filter (EKF), which have received little attention in previous works, are then utilized to identify the parameters of two well-known semi-empirical models in the literature, by Squadrito et al. and Amphlett et al. Ultimately, the achieved results and future challenges are discussed.
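
    Of the three online algorithms, recursive least squares with a forgetting factor is the simplest to sketch. The linearised cell model and all numbers below are assumptions for illustration, not one of the cited semi-empirical models:

        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.98):
            """One recursive least squares step with forgetting factor lam."""
            phi = phi.reshape(-1, 1)
            k = P @ phi / (lam + phi.T @ P @ phi)        # gain vector
            theta = theta + (k * (y - phi.T @ theta)).ravel()
            P = (P - k @ phi.T @ P) / lam                # covariance update
            return theta, P

        # Hypothetical linearised cell model V = E0 - R*i, parameters theta = [E0, R].
        theta, P = np.zeros(2), np.eye(2) * 100.0
        rng = np.random.default_rng(0)
        for _ in range(500):
            i = rng.uniform(0.0, 50.0)
            v = 1.0 - 0.02 * i + rng.normal(0.0, 0.01)   # truth: E0 = 1.0 V, R = 0.02 ohm
            theta, P = rls_update(theta, P, np.array([1.0, -i]), v)
        print(theta)  # approximately [1.0, 0.02]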

  10. Optimization of vibratory welding process parameters using response surface methodology

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Pravin Kumar; Kumar, S. Deepak; Patel, D.; Prasad, S. B. [National Institute of Technology Jamshedpur, Jharkhand (India)

    2017-05-15

    The current investigation was carried out to study the effect of a vibratory welding technique on the mechanical properties of 6 mm thick butt welded mild steel plates. A new vibratory welding setup has been designed and developed which is capable of transferring vibrations, at a resonant frequency of 300 Hz, into the molten weld pool before it solidifies during the Shielded metal arc welding (SMAW) process. The important process parameters of the vibratory welding technique, namely welding current, welding speed and frequency of the vibrations induced in the molten weld pool, were optimized using Taguchi's analysis and Response surface methodology (RSM). The effects of the process parameters on tensile strength and hardness were evaluated using these optimization techniques. Applying RSM, the effects of the vibratory welding parameters on tensile strength and hardness were obtained through two separate regression equations. Results showed that the most influential factor for the desired tensile strength and hardness is frequency at its resonance value, i.e. 300 Hz. The micro-hardness and microstructures of the vibratory welded joints were studied in detail and compared with those of conventional SMAW joints. Comparatively, a uniform and fine grain structure has been found in vibratory welded joints.

  11. A new Mw estimation parameter for use in earthquake early warning systems

    Science.gov (United States)

    Wang, Zijun; Zhao, Boming

    2018-01-01

    We propose a method that employs the squared displacement integral (ID2) to estimate earthquake magnitudes in real time for use in earthquake early warning (EEW) systems. Moreover, using τc and Pd for comparison, we establish formulas for estimating the moment magnitudes from these three parameters based on selected aftershocks (4.0 ≤ Ms ≤ 6.5) of the 2008 Wenchuan earthquake. In this comparison, the proposed ID2 method displays the highest accuracy. Furthermore, we investigate the applicability of the initial parameters to large earthquakes by estimating the magnitude of the Wenchuan Ms 8.0 mainshock using a 3-s time window. Although all three parameters display problems with saturation, the proposed ID2 parameter is relatively accurate. The evolutionary estimation of ID2 as a function of the time window shows that the estimation equation established with ID2Ref determined from the first 8 s of P wave data can be directly applied to predict magnitudes of 8.0. Therefore, the proposed ID2 parameter provides a robust estimator of earthquake moment magnitudes and can be used for EEW purposes.
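
    The ID2 parameter itself is just the time integral of squared displacement over the early P-wave window. The regression coefficients a and b below are placeholders, since the paper fits its own scaling relation to the Wenchuan records, and the waveform is synthetic:

        import numpy as np

        def id2(displacement, dt):
            """Squared displacement integral over the initial P-wave window."""
            return np.trapz(displacement ** 2, dx=dt)

        def magnitude_from_id2(displacement, dt, a=0.5, b=6.0):
            # a and b are placeholder regression coefficients, not the paper's fit.
            return a * np.log10(id2(displacement, dt)) + b

        t = np.arange(0.0, 3.0, 0.01)                  # 3-s window sampled at 100 Hz
        d = 1e-4 * t * np.sin(2 * np.pi * 1.0 * t)     # synthetic P-wave displacement, m
        print(magnitude_from_id2(d, 0.01))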

  12. Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology

    Science.gov (United States)

    Kumar, Amit; Soota, Tarun; Kumar, Jitendra

    2018-03-01

    Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with the Grey relational analysis method has been proposed and used to optimise the machining parameters of WEDM. A face centred cubic design is used for conducting experiments on high speed steel (HSS) M2 grade workpiece material. The regression model of significant factors such as pulse-on time, pulse-off time, peak current, and wire feed is considered for optimising the response variables: material removal rate (MRR), surface roughness and kerf width. The optimal condition of the machining parameters was obtained using the Grey relational grade. ANOVA is applied to determine the significance of the input parameters for optimising the Grey relational grade.
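
    A compact sketch of the grey relational grade computation used to collapse several responses into a single rankable score per run; the WEDM runs and the distinguishing coefficient zeta = 0.5 are illustrative:

        import numpy as np

        def grey_relational_grade(responses, larger_is_better, zeta=0.5):
            """Grey relational grade of each experimental run over several responses."""
            x = np.asarray(responses, dtype=float)
            lo, hi = x.min(axis=0), x.max(axis=0)
            # Normalise every response to [0, 1] with the proper orientation.
            norm = np.where(larger_is_better, (x - lo) / (hi - lo), (hi - x) / (hi - lo))
            delta = 1.0 - norm                           # distance to the ideal sequence
            coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
            return coeff.mean(axis=1)                    # average over the responses

        # Hypothetical WEDM runs: columns = MRR (max), roughness (min), kerf (min).
        runs = [[12.0, 2.1, 0.30], [15.0, 2.8, 0.33], [10.0, 1.9, 0.28]]
        print(grey_relational_grade(runs, larger_is_better=[True, False, False]))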

  13. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a lack of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, a current base of parameters for the models of generating units, containing the models of synchronous generators, is necessary. In the paper, a method is presented for parameter estimation of a synchronous generator nonlinear model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing the objective function defined as the mean square error of deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measured waveforms. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  14. Estimation of power feedback parameters of pulse reactor IBR-2M on transients

    International Nuclear Information System (INIS)

    Pepyolyshev, Yu.N.; Popov, A.K.

    2013-01-01

    Parameters of the IBR-2M reactor power feedback (PFB) are estimated on a model of the reactor dynamics by mathematical treatment of two recorded transients. Frequency characteristics and the pulse transient characteristics corresponding to these PFB parameters are calculated. PFB parameters obtained in this way can be considered a rapid tentative estimate, since the actual measurements in this case occupy no more than 30 minutes. The total PFB is negative at 1 and 2 MW. Given the obtained estimates of the PFB parameters, the stability margins of the IBR-2M reactor in the self-regulation mode can be considered satisfactory

  15. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    Science.gov (United States)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using Kalman filter and recursive least square based input force estimation technique. Kalman filter based input force estimation technique requires state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.

  16. Choice of the parameters of the CUSUM algorithms for parameter estimation in the Markov modulated Poisson process

    OpenAIRE

    Burkatovskaya, Yuliya Borisovna; Kabanova, T.; Khaustov, Pavel Aleksandrovich

    2016-01-01

    The CUSUM algorithm for controlling chain state switching in the Markov modulated Poisson process was investigated via simulation. Recommendations concerning the parameter choice were given subject to the characteristics of the process. The procedure of the process parameter estimation was described.

  17. An approach of parameter estimation for non-synchronous systems

    International Nuclear Information System (INIS)

    Xu Daolin; Lu Fangfang

    2005-01-01

    Synchronization-based parameter estimation is simple and effective but only available to synchronous systems. To overcome this limitation, we propose a technique by which the parameters of an unknown physical process (possibly a non-synchronous system) can be identified from a time series via a minimization procedure based on a synchronization control. The feasibility of this approach is illustrated in several chaotic systems

  18. Using Genetic Algorithm to Estimate Hydraulic Parameters of Unconfined Aquifers

    Directory of Open Access Journals (Sweden)

    Asghar Asghari Moghaddam

    2009-03-01

    Full Text Available Nowadays, optimization techniques such as Genetic Algorithms (GA have attracted wide attention among scientists for solving complicated engineering problems. In this article, pumping test data are used to assess the efficiency of GA in estimating unconfined aquifer parameters and a sensitivity analysis is carried out to propose an optimal arrangement of GA. For this purpose, hydraulic parameters of three sets of pumping test data are calculated by GA and they are compared with the results of graphical methods. The results indicate that the GA technique is an efficient, reliable, and powerful method for estimating the hydraulic parameters of unconfined aquifer and, further, that in cases of deficiency in pumping test data, it has a better performance than graphical methods.

  19. Probabilistic parameter estimation of activated sludge processes using Markov Chain Monte Carlo.

    Science.gov (United States)

    Sharifi, Soroosh; Murthy, Sudhir; Takács, Imre; Massoudieh, Arash

    2014-03-01

    One of the most important challenges in making activated sludge models (ASMs) applicable to design problems is identifying the values of its many stoichiometric and kinetic parameters. When wastewater characteristics data from full-scale biological treatment systems are used for parameter estimation, several sources of uncertainty, including uncertainty in measured data, external forcing (e.g. influent characteristics), and model structural errors influence the value of the estimated parameters. This paper presents a Bayesian hierarchical modeling framework for the probabilistic estimation of activated sludge process parameters. The method provides the joint probability density functions (JPDFs) of stoichiometric and kinetic parameters by updating prior information regarding the parameters obtained from expert knowledge and literature. The method also provides the posterior correlations between the parameters, as well as a measure of sensitivity of the different constituents with respect to the parameters. This information can be used to design experiments to provide higher information content regarding certain parameters. The method is illustrated using the ASM1 model to describe synthetically generated data from a hypothetical biological treatment system. The results indicate that data from full-scale systems can narrow down the ranges of some parameters substantially whereas the amount of information they provide regarding other parameters is small, due to either large correlations between some of the parameters or a lack of sensitivity with respect to the parameters. Copyright © 2013 Elsevier Ltd. All rights reserved.
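
    As a bare-bones illustration of the MCMC machinery behind such Bayesian parameter estimation, the sketch below runs a random-walk Metropolis sampler for a single parameter with a flat prior; the data and model are synthetic and far simpler than ASM1:

        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.normal(4.0, 0.5, size=30)   # synthetic observations of one rate

        def log_post(mu):
            # Flat prior on [0, 10] and a Gaussian likelihood with known sigma = 0.5.
            if not 0.0 < mu < 10.0:
                return -np.inf
            return -0.5 * np.sum((data - mu) ** 2) / 0.5 ** 2

        chain, mu = [], 1.0
        for _ in range(5000):
            proposal = mu + rng.normal(0.0, 0.2)           # random-walk proposal
            if np.log(rng.random()) < log_post(proposal) - log_post(mu):
                mu = proposal                              # accept
            chain.append(mu)
        print(np.mean(chain[1000:]), np.std(chain[1000:]))  # posterior mean and spread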

  20. On Modal Parameter Estimates from Ambient Vibration Tests

    DEFF Research Database (Denmark)

    Agneni, A.; Brincker, Rune; Coppotelli, B.

    2004-01-01

    Modal parameter estimation from ambient vibration testing is turning into the preferred technique when one is interested in systems under actual loadings and operational conditions. Moreover, with this approach, expensive devices to excite the structure are not needed, since it can be adequately...

  1. Basic Earth's Parameters as estimated from VLBI observations

    Directory of Open Access Journals (Sweden)

    Ping Zhu

    2017-11-01

    Full Text Available Global Very Long Baseline Interferometry (VLBI) observations for measuring the Earth's rotation parameters were launched around the 1970s. Since then the precision of the measurements has been continuously improving by taking into account various instrumental and environmental effects. The MHB2000 nutation model was introduced in 2002; it is constructed from a revised nutation series derived from 20 years of VLBI observations (1980–1999). In this work, we first estimated the amplitudes of all nutation terms from the IERS-EOP-C04 VLBI global solutions w.r.t. IAU1980, and then we further inferred the BEPs (Basic Earth's Parameters) by fitting the major nutation terms. Meanwhile, the BEPs were obtained from the same nutation time series using a BI (Bayesian Inversion). The corrections to the precession rate and the estimated BEPs are in agreement, independent of which method was applied.

  2. Revisiting Boltzmann learning: parameter estimation in Markov random fields

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Andersen, Lars Nonboe; Kjems, Ulrik

    1996-01-01

    This article presents a generalization of the Boltzmann machine that allows us to use the learning rule for a much wider class of maximum likelihood and maximum a posteriori problems, including both supervised and unsupervised learning. Furthermore, the approach allows us to discuss regularization...... and generalization in the context of Boltzmann machines. We provide an illustrative example concerning parameter estimation in an inhomogeneous Markov field. The regularized adaptation produces a parameter set that closely resembles the “teacher” parameters, hence, will produce segmentations that closely reproduce...

  3. Deep convolutional neural networks for estimating porous material parameters with ultrasound tomography

    Science.gov (United States)

    Lähivaara, Timo; Kärkkäinen, Leo; Huttunen, Janne M. J.; Hesthaven, Jan S.

    2018-02-01

    We study the feasibility of data based machine learning applied to ultrasound tomography to estimate water-saturated porous material parameters. In this work, the data to train the neural networks are simulated by solving wave propagation in coupled poroviscoelastic-viscoelastic-acoustic media. As the forward model, we consider a high-order discontinuous Galerkin method, while deep convolutional neural networks are used to solve the parameter estimation problem. In the numerical experiment, we estimate the material porosity and tortuosity, while the remaining parameters, which are of less interest, are successfully marginalized in the neural networks-based inversion. Computational examples confirm the feasibility and accuracy of this approach.

  4. Asymptotic analysis of the role of spatial sampling for covariance parameter estimation of Gaussian processes

    International Nuclear Information System (INIS)

    Bachoc, Francois

    2014-01-01

    Covariance parameter estimation of Gaussian processes is analyzed in an asymptotic framework. The spatial sampling is a randomly perturbed regular grid and its deviation from the perfect regular grid is controlled by a single scalar regularity parameter. Consistency and asymptotic normality are proved for the Maximum Likelihood and Cross Validation estimators of the covariance parameters. The asymptotic covariance matrices of the covariance parameter estimators are deterministic functions of the regularity parameter. By means of an exhaustive study of the asymptotic covariance matrices, it is shown that the estimation is improved when the regular grid is strongly perturbed. Hence, an asymptotic confirmation is given to the commonly admitted fact that using groups of observation points with small spacing is beneficial to covariance function estimation. Finally, the prediction error, using a consistent estimator of the covariance parameters, is analyzed in detail. (authors)

  5. Impact of relativistic effects on cosmological parameter estimation

    Science.gov (United States)

    Lorenz, Christiane S.; Alonso, David; Ferreira, Pedro G.

    2018-01-01

    Future surveys will access large volumes of space and hence very long wavelength fluctuations of the matter density and gravitational field. It has been argued that the set of secondary effects that affect the galaxy distribution, relativistic in nature, will bring new, complementary cosmological constraints. We study this claim in detail by focusing on a subset of wide-area future surveys: Stage-4 cosmic microwave background experiments and photometric redshift surveys. In particular, we look at the magnification lensing contribution to galaxy clustering and general-relativistic corrections to all observables. We quantify the amount of information encoded in these effects in terms of the tightening of the final cosmological constraints as well as the potential bias in inferred parameters associated with neglecting them. We do so for a wide range of cosmological parameters, covering neutrino masses, standard dark-energy parametrizations and scalar-tensor gravity theories. Our results show that, while the effect of lensing magnification on number counts does not contain a significant amount of information when galaxy clustering is combined with cosmic shear measurements, this contribution does play a significant role in biasing estimates on a host of parameter families if unaccounted for. Since the amplitude of the magnification term is controlled by the slope of the source number counts with apparent magnitude, s(z), we also estimate the accuracy to which this quantity must be known to avoid systematic parameter biases, finding that future surveys will need to determine s(z) to the ~5%-10% level. On the contrary, large-scale general-relativistic corrections are irrelevant both in terms of information content and parameter bias for most cosmological parameters but significant for the level of primordial non-Gaussianity.

  6. Nonlinear Adaptive Descriptor Observer for the Joint States and Parameters Estimation

    KAUST Repository

    2016-08-29

    In this note, the joint state and parameters estimation problem for nonlinear multi-input multi-output descriptor systems is considered. Asymptotic convergence of the adaptive descriptor observer is established by a sufficient set of linear matrix inequalities for the noise-free systems. The noise corrupted systems are also considered and it is shown that the state and parameters estimation errors are bounded for bounded noises. In addition, if the noises are bounded and have zero mean, then the estimation errors asymptotically converge to zero in the mean. The performance of the proposed adaptive observer is illustrated by a numerical example.

  7. Nonlinear Adaptive Descriptor Observer for the Joint States and Parameters Estimation

    KAUST Repository

    Unknown author

    2016-01-01

    In this note, the joint state and parameters estimation problem for nonlinear multi-input multi-output descriptor systems is considered. Asymptotic convergence of the adaptive descriptor observer is established by a sufficient set of linear matrix inequalities for the noise-free systems. The noise corrupted systems are also considered and it is shown that the state and parameters estimation errors are bounded for bounded noises. In addition, if the noises are bounded and have zero mean, then the estimation errors asymptotically converge to zero in the mean. The performance of the proposed adaptive observer is illustrated by a numerical example.

  8. Bayesian Parameter Estimation via Filtering and Functional Approximations

    KAUST Repository

    Matthies, Hermann G.

    2016-11-25

    The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This is a filter acting on random variables, and while its Monte Carlo variant, the Ensemble Kalman Filter (EnKF), is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.
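
    A minimal sketch of the Monte Carlo variant mentioned above: an ensemble Kalman update applied to a scalar parameter through a toy forward model. The forward model, noise levels, and prior are illustrative assumptions.

```python
# Sketch: one ensemble Kalman update of a scalar parameter q from a
# scalar observation of a toy forward model q**2. Model and noise
# levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
model = lambda q: q ** 2
sigma_y = 0.1
y_obs = model(2.0) + rng.normal(0, sigma_y)      # data from q_true = 2

Q = rng.normal(1.5, 0.5, size=1000)              # prior parameter ensemble
Y = model(Q) + rng.normal(0, sigma_y, Q.size)    # perturbed predictions

C = np.cov(Q, Y)
K = C[0, 1] / C[1, 1]                            # ensemble Kalman gain
Q_post = Q + K * (y_obs - Y)                     # update each member
print("posterior mean of q:", Q_post.mean())
```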

  9. Bayesian Parameter Estimation via Filtering and Functional Approximations

    KAUST Repository

    Matthies, Hermann G.; Litvinenko, Alexander; Rosic, Bojana V.; Zander, Elmar

    2016-01-01

    The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This is a filter acting on random variables, and while its Monte Carlo variant, the Ensemble Kalman Filter (EnKF), is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.

  10. Comparison of Classical and Robust Estimates of Threshold Auto-regression Parameters

    Directory of Open Access Journals (Sweden)

    V. B. Goryainov

    2017-01-01

    Full Text Available The study object is the first-order threshold autoregression model with a single threshold located at zero. The model describes a stochastic time series with discrete time by means of a piecewise linear equation consisting of two classical linear first-order autoregressive equations. One of these equations is used to calculate the running value of the time series; the control variable that determines the choice between the two equations is the sign of the previous value of the same series. The first-order threshold autoregressive model with a single threshold depends on two real parameters that coincide with the coefficients of the piecewise linear threshold equation. These parameters are assumed to be unknown. The paper studies the least-squares estimate, the least-modules (least absolute deviations) estimate, and the M-estimates of these parameters. The aim of the paper is a comparative study of the accuracy of these estimates for the main probability distributions of the innovation process of the threshold autoregressive equation: the normal, contaminated normal, logistic and double-exponential distributions, Student's distributions with different numbers of degrees of freedom, and the Cauchy distribution. As a measure of the accuracy of each estimate, its variance was chosen, measuring the scattering of the estimate around the estimated parameter. Of two estimates, the one with the smaller variance was considered the better. The variance was estimated by computer simulation. The least-modules estimate was computed by an iterative weighted least-squares method, and the M-estimates by the deformable polyhedron method (the Nelder-Mead method). To calculate the least-squares estimate, an explicit analytic expression was used. It turned out that the least-squares estimate is best only under a normal distribution of the innovation process. For the logistic distribution and the Student's distribution with the
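
    A minimal sketch of the least-squares estimate for this model, using the fact that the positive and negative parts of the lagged series give orthogonal regressors; the coefficients and Gaussian innovations are illustrative assumptions.

```python
# Sketch: least-squares estimation for a first-order threshold
# autoregression with threshold at zero,
#   x[t] = a*max(x[t-1], 0) + b*min(x[t-1], 0) + e[t].
# True coefficients and Gaussian innovations are assumptions.
import numpy as np

rng = np.random.default_rng(2)
a_true, b_true, n = 0.5, -0.3, 5000
x = np.zeros(n)
for t in range(1, n):
    coef = a_true if x[t - 1] > 0 else b_true
    x[t] = coef * x[t - 1] + rng.normal()

# Positive and negative parts of the lagged series are orthogonal
# regressors, so the LS estimate has an explicit analytic form.
xp, xm = np.maximum(x[:-1], 0.0), np.minimum(x[:-1], 0.0)
a_hat = np.sum(xp * x[1:]) / np.sum(xp ** 2)
b_hat = np.sum(xm * x[1:]) / np.sum(xm ** 2)
print(a_hat, b_hat)   # close to 0.5 and -0.3
```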

  11. Response-based estimation of sea state parameters - Influence of filtering

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2007-01-01

    Reliable estimation of the on-site sea state parameters is essential to decision support systems for safe navigation of ships. The wave spectrum can be estimated from procedures based on measured ship responses. The paper deals with two procedures—Bayesian Modelling and Parametric Modelling...

  12. Nonlinear Parameter Estimation in Microbiological Degradation Systems and Statistic Test for Common Estimation

    DEFF Research Database (Denmark)

    Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik

    1995-01-01

    Three identical microbiological experiments were carried out and analysed in order to examine the variability of the parameter estimates. The microbiological system consisted of a substrate (toluene) and a biomass (pure culture) mixed together in an aquifer medium. The degradation of the substrate...

  13. Experimental methodology for obtaining sound absorption coefficients

    Directory of Open Access Journals (Sweden)

    Carlos A. Macía M

    2011-07-01

    Full Text Available Objective: the authors propose a new methodology for estimating sound absorption coefficients using genetic algorithms. Methodology: sound waves are generated and conducted along a rectangular silencer. The waves are then attenuated by the absorbing material covering the silencer’s walls. The attenuated sound pressure level is used in a genetic algorithm-based search to find the parameters of the proposed attenuation expressions that include geometric factors, the wavelength and the absorption coefficient. Results: a variety of fitted mathematical models were found that make it possible to estimate the absorption coefficients based on the characteristics of a rectangular silencer used for measuring the attenuation of the noise that passes through it. Conclusions: this methodology makes it possible to obtain the absorption coefficients of new materials in a cheap and simple manner. Although these coefficients might be slightly different from those obtained through other methodologies, they provide solutions within the engineering accuracy ranges that are used for designing noise control systems.

  14. Preliminary Estimation of Kappa Parameter in Croatia

    Science.gov (United States)

    Stanko, Davor; Markušić, Snježana; Ivančić, Ines; Mario, Gazdek; Gülerce, Zeynep

    2017-12-01

    The spectral parameter kappa (κ) is used to describe the decay ("crash syndrome") of spectral amplitude at high frequencies. The purpose of this research is to estimate the spectral parameter kappa for the first time in Croatia, based on small and moderate earthquakes. Recordings from seismological stations in Croatia of local earthquakes with magnitudes higher than 3, epicentre distances less than 150 km, and focal depths less than 30 km are used. The value of kappa was estimated from the acceleration amplitude spectrum of shear waves, from the slope of the high-frequency part where the spectrum starts to decay rapidly to the noise floor. Kappa models as a function of site and distance were derived from a standard linear regression of the kappa-distance dependence. Site kappa was determined by extrapolating the regression line to zero distance. The preliminary results for site kappa across Croatia are promising. In this research, these results are compared with local site condition parameters for each station, e.g. shear wave velocity in the upper 30 m from geophysical measurements, and with existing global shear wave velocity - site kappa relations. The spatial distribution of individual kappa values is compared with the azimuthal distribution of earthquake epicentres. These results are significant for two reasons: they extend the knowledge of the attenuation of the near-surface crustal layers of the Dinarides, and they provide additional information on local earthquake parameters for updating seismic hazard maps of the studied area. Site kappa can be used in the re-creation and re-calibration of the attenuation of peak horizontal and/or vertical acceleration in the Dinarides area, since information on local site conditions was not included in previous studies.
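
    The distance regression and zero-distance extrapolation can be sketched in a few lines; the kappa-distance pairs below are synthetic stand-ins, not measurements from the Croatian network.

```python
# Sketch: linear kappa-distance model and site kappa by extrapolation
# to zero distance. The kappa-distance pairs are synthetic stand-ins.
import numpy as np

dist_km = np.array([12.0, 30.0, 55.0, 80.0, 120.0, 140.0])
kappa_s = np.array([0.031, 0.035, 0.041, 0.046, 0.055, 0.060])

# kappa(R) = kappa_0 + m*R; the intercept kappa_0 is the site kappa.
m, kappa_0 = np.polyfit(dist_km, kappa_s, 1)
print(f"site kappa ~ {kappa_0:.4f} s, distance slope {m:.2e} s/km")
```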

  15. A framework for scalable parameter estimation of gene circuit models using structural information

    KAUST Repository

    Kuwahara, Hiroyuki

    2013-06-21

    Motivation: Systematic and scalable parameter estimation is a key to construct complex gene regulatory models and to ultimately facilitate an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Results: Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of more tailored approaches to use domain-specific information may be a key to reverse engineering of complex biological systems.

  16. A framework for scalable parameter estimation of gene circuit models using structural information

    KAUST Repository

    Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin

    2013-01-01

    Motivation: Systematic and scalable parameter estimation is a key to construct complex gene regulatory models and to ultimately facilitate an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Results: Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of more tailored approaches to use domain-specific information may be a key to reverse engineering of complex biological systems.

  17. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    Science.gov (United States)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.

  18. Online state of charge and model parameter co-estimation based on a novel multi-timescale estimator for vanadium redox flow battery

    International Nuclear Information System (INIS)

    Wei, Zhongbao; Lim, Tuti Mariana; Skyllas-Kazacos, Maria; Wai, Nyunt; Tseng, King Jet

    2016-01-01

    Highlights:
    • Battery model parameters and SOC co-estimation is investigated.
    • The model parameters and OCV are decoupled and estimated independently.
    • Multiple timescales are adopted to improve precision and stability.
    • SOC is estimated online without using the open-circuit cell.
    • The method is robust to aging levels, flow rates, and battery chemistries.
    Abstract: A key function of a battery management system (BMS) is to provide accurate information on the state of charge (SOC) in real time, which depends directly on precise model parameterization. In this paper, a novel multi-timescale estimator is proposed to estimate the model parameters and SOC for a vanadium redox flow battery (VRB) in real time. The model parameters and OCV are decoupled and estimated independently, effectively avoiding the possibility of cross interference between them. The analysis of model sensitivity, stability, and precision suggests the necessity of adopting a different timescale for each estimator independently. Experiments are conducted to assess the performance of the proposed method. Results reveal that the model parameters are accurately adapted online, so that periodical calibration can be avoided. The online estimated terminal voltage and SOC are both benchmarked against reference values. The proposed multi-timescale estimator has the merits of fast convergence, high precision, and good robustness against initialization uncertainty, aging states, flow rates, and also battery chemistries.
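
    The record does not give the estimator's equations, but estimators of this family are typically built on recursive least squares; a generic RLS update with a forgetting factor, on an assumed linear regression of voltage on current, is sketched below.

```python
# Sketch: recursive least squares with a forgetting factor, the kind
# of update a multi-timescale estimator can run at different rates.
# The regression of voltage on a [1, current] regressor is a
# deliberately simplified, assumed battery model.
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One RLS update for y ~ phi @ theta with forgetting factor lam."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)           # gain vector
    theta = theta + k * (y - phi @ theta)   # parameter update
    P = (P - np.outer(k, Pphi)) / lam       # covariance update
    return theta, P

rng = np.random.default_rng(3)
true_theta = np.array([0.8, 0.05])          # e.g. [OCV-like term, resistance]
theta, P = np.zeros(2), 1e3 * np.eye(2)
for _ in range(500):
    phi = np.array([1.0, rng.uniform(-2, 2)])   # [1, current]
    y = phi @ true_theta + rng.normal(0, 0.01)  # measured voltage
    theta, P = rls_step(theta, P, phi, y)
print(theta)   # converges near [0.8, 0.05]
```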

  19. Optimisation of process parameters on thin shell part using response surface methodology (RSM)

    Science.gov (United States)

    Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.

    2017-09-01

    This study focuses on the optimisation of process parameters by simulation using Autodesk Moldflow Insight (AMI) software. The process parameters are taken as the input in order to analyse the warpage value, which is the output of this study. The significant parameters used are melt temperature, mould temperature, packing pressure, and cooling time. A plastic part made of polypropylene (PP) was selected as the study part. Optimisation of the process parameters is carried out in Design Expert software with the aim of minimising the obtained warpage value. Response Surface Methodology (RSM) is applied in this study together with Analysis of Variance (ANOVA) in order to investigate the interactions between parameters that are significant to the warpage value. The optimised warpage value can thus be obtained from the model designed using RSM, owing to its minimum error value. The study shows that the warpage value is improved by using RSM.
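
    A minimal sketch of the RSM step: fit a full quadratic response surface for warpage in two coded factors by least squares and solve for the stationary point. The factors, coefficients, and data are synthetic assumptions, not values from the study.

```python
# Sketch: fit a full quadratic response surface for warpage in two
# coded factors and solve for the stationary point. Coefficients and
# data are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(4)
x1, x2 = (g.ravel() for g in np.meshgrid(np.linspace(-1, 1, 5),
                                         np.linspace(-1, 1, 5)))
warp = (0.4 + 0.1 * x1 - 0.05 * x2 + 0.2 * x1**2 + 0.15 * x2**2
        + 0.05 * x1 * x2 + rng.normal(0, 0.005, x1.size))

X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, warp, rcond=None)

# Stationary point: set the gradient of the fitted quadratic to zero.
b = beta[1:3]
B = np.array([[2 * beta[3], beta[5]], [beta[5], 2 * beta[4]]])
x_opt = np.linalg.solve(B, -b)
print("coded factor settings minimizing warpage:", x_opt)
```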

  20. Estimation of Adjoint-Weighted Kinetics Parameters in Monte Carlo Wielandt Calculations

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Shim, Hyung Jin

    2013-01-01

    The effective delayed neutron fraction, β_eff, and the prompt neutron generation time, Λ, in the point kinetics equation are weighted by the adjoint flux to improve the accuracy of the reactivity estimate. Recently, Monte Carlo (MC) kinetics parameter estimation methods using the self-consistent adjoint flux calculated in the MC forward simulations have been developed and successfully applied to research reactor analyses. However, these adjoint estimation methods, based on the cycle-by-cycle genealogical table, require a huge memory size to store the pedigree hierarchy. In this paper, we present a new adjoint estimation method in which the pedigree of a single history is utilized by applying the MC Wielandt method. The effectiveness of the new method is demonstrated in the kinetics parameter estimations for infinite homogeneous two-group problems and the Godiva critical facility.

  1. Methodologies for estimating toxicity of shoreline cleaning agents in the field

    International Nuclear Information System (INIS)

    Clayton, J.R.Jr.; Stransky, B.C.; Schwartz, M.J.; Snyder, B.J.; Lees, D.C.; Michel, J.; Reilly, T.J.

    1996-01-01

    Four methodologies that could be used in a portable kit to provide quantitative and qualitative information regarding the toxicity of shoreline cleaning agents were evaluated. Shoreline cleaning agents (SCAs) are meant to enhance the removal of treated oil from shoreline surfaces, and should not increase adverse impacts to organisms in a treated area. Tests, therefore, should be performed with resident organisms likely to be impacted during the use of SCAs. The four methodologies were Microtox™, fertilization success in echinoderm eggs, byssal thread attachment in mussels, and righting and water-escaping ability in periwinkle snails. Site-specific variations in the physical and chemical properties of the oil and SCAs were considered. Results were provided for all combinations of oils and SCAs. The evaluation showed that all four methodologies provided sufficient information to assist a user in deciding whether or not the use of an SCA was warranted. 33 refs., 7 tabs., 11 figs.

  2. ADAPTIVE PARAMETER ESTIMATION OF PERSON RECOGNITION MODEL IN A STOCHASTIC HUMAN TRACKING PROCESS

    OpenAIRE

    W. Nakanishi; T. Fuse; T. Ishikawa

    2015-01-01

    This paper aims at the estimation of the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations, these parameters may change according to the observation conditions and the difficulty of predicting a person's position. Thus, in this paper we formulate an adaptive parameter estimation ...

  3. A quasi-sequential parameter estimation for nonlinear dynamic systems based on multiple data profiles

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Chao [FuZhou University, FuZhou (China); Vu, Quoc Dong; Li, Pu [Ilmenau University of Technology, Ilmenau (Germany)

    2013-02-15

    A three-stage computation framework for solving parameter estimation problems for dynamic systems with multiple data profiles is developed. The dynamic parameter estimation problem is transformed into a nonlinear programming (NLP) problem by using collocation on finite elements. The model parameters to be estimated are treated in the upper stage by solving an NLP problem. The middle stage consists of multiple NLP problems nested in the upper stage, representing the data reconciliation step for each data profile. We use the quasi-sequential dynamic optimization approach to solve these problems. In the lower stage, the state variables and their gradients are evaluated by integrating the model equations. Since second-order derivatives are not required in the computation framework, the proposed method is efficient for solving nonlinear dynamic parameter estimation problems. The computational results obtained on a parameter estimation problem for two CSTR models demonstrate the effectiveness of the proposed approach.

  4. A quasi-sequential parameter estimation for nonlinear dynamic systems based on multiple data profiles

    International Nuclear Information System (INIS)

    Zhao, Chao; Vu, Quoc Dong; Li, Pu

    2013-01-01

    A three-stage computation framework for solving parameter estimation problems for dynamic systems with multiple data profiles is developed. The dynamic parameter estimation problem is transformed into a nonlinear programming (NLP) problem by using collocation on finite elements. The model parameters to be estimated are treated in the upper stage by solving an NLP problem. The middle stage consists of multiple NLP problems nested in the upper stage, representing the data reconciliation step for each data profile. We use the quasi-sequential dynamic optimization approach to solve these problems. In the lower stage, the state variables and their gradients are evaluated by integrating the model equations. Since second-order derivatives are not required in the computation framework, the proposed method is efficient for solving nonlinear dynamic parameter estimation problems. The computational results obtained on a parameter estimation problem for two CSTR models demonstrate the effectiveness of the proposed approach.

  5. Statistical methods of parameter estimation for deterministically chaotic time series

    Science.gov (United States)

    Pisarenko, V. F.; Sornette, D.

    2004-03-01

    We discuss the possibility of applying standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for parameter estimation) to a deterministically chaotic low-dimensional dynamical system (the logistic map) contaminated by observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to the previously proposed “multiple shooting” method, but simpler and with smaller bias. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
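
    For contrast with the ML schemes discussed above, a naive one-step least-squares estimate of the logistic-map parameter can be written in a few lines; it is biased when the regressor is noisy, which is part of what motivates the segmentation-fitting approach. The noise level and orbit length below are assumptions.

```python
# Sketch: naive one-step least-squares estimate of r in the logistic
# map x[t+1] = r*x[t]*(1 - x[t]) from a noisy orbit. Noise level and
# orbit length are assumptions; the estimate is biased when the noise
# is large, since the regressor itself is noisy.
import numpy as np

rng = np.random.default_rng(5)
r_true, n = 3.8, 200
x = np.empty(n)
x[0] = 0.3
for t in range(n - 1):
    x[t + 1] = r_true * x[t] * (1 - x[t])
y = x + rng.normal(0, 0.01, n)        # observational noise

g = y[:-1] * (1 - y[:-1])             # regressor for one-step prediction
r_hat = np.sum(g * y[1:]) / np.sum(g ** 2)
print(r_hat)                          # near 3.8 for small noise
```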

  6. Data Based Parameter Estimation Method for Circular-scanning SAR Imaging

    Directory of Open Access Journals (Sweden)

    Chen Gong-bo

    2013-06-01

    Full Text Available Circular-scanning Synthetic Aperture Radar (SAR) is a novel working mode, and its image quality is closely related to the accuracy of the imaging parameters, especially given the inaccuracy of the platform's real motion speed. According to the characteristics of the circular-scanning mode, a new data-based method for estimating the velocity of the radar platform and the scanning angle of the radar antenna is proposed in this paper. Drawing on the basic concepts of the Doppler navigation technique, the mathematical model and formulations for the parameter estimation are first improved. The optimal parameter approximation based on the least-squares criterion is then realized by solving the equations derived from the data processing. The simulation results verify the validity of the proposed scheme.

  7. On Using Exponential Parameter Estimators with an Adaptive Controller

    Science.gov (United States)

    Patre, Parag; Joshi, Suresh M.

    2011-01-01

    Typical adaptive controllers are restricted to using a specific update law to generate parameter estimates. This paper investigates the possibility of using any exponential parameter estimator with an adaptive controller such that the system tracks a desired trajectory. The goal is to provide flexibility in choosing any update law suitable for a given application. The development relies on a previously developed concept of controller/update law modularity in the adaptive control literature, and the use of a converse Lyapunov-like theorem. Stability analysis is presented to derive gain conditions under which this is possible, and inferences are made about the tracking error performance. The development is based on a class of Euler-Lagrange systems that are used to model various engineering systems including space robots and manipulators.

  8. The estimation of parameter compaction values for pavement subgrade stabilized with lime

    Science.gov (United States)

    Lubis, A. S.; Muis, Z. A.; Simbolon, C. A.

    2018-02-01

    The type of soil material, field control, maintenance and availability of funds are several factors that must be considered in compaction of the pavement subgrade. Determining the compaction parameters in the laboratory requires considerable material, time, funds, and reliable laboratory operators. If soil classification values could be used to estimate the compaction parameters of a subgrade material, it would save time, energy, materials and cost in the execution of this work. It would also serve as a cross-check on the work done by technicians in the laboratory. The study aims to estimate the compaction parameter values, i.e. the maximum dry unit weight (γdmax) and optimum water content (Wopt), of soil subgrade stabilized with lime. The tests conducted in the soil mechanics laboratory determined the index properties (fines content and Liquid Limit/LL) and the Standard Compaction Test. Thirty soil samples with Plasticity Index (PI) > 10% were prepared with an additional 3% lime. Using the Goswami equation, the compaction parameter values can be estimated as γdmax = -0.1686 log G + 1.8434 and Wopt = 2.9178 log G + 17.086. The validation calculation showed a significant positive correlation between the laboratory compaction parameter values and the estimated values, with a 95% confidence interval indicating a strong relationship.
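
    A worked example of the quoted equations, assuming G denotes the soil grouping index used by the Goswami relation and that the chosen value is purely illustrative:

```python
# Worked example of the quoted relations (base-10 logarithm assumed);
# G = 25 is an arbitrary illustrative value of the grouping index.
import math

G = 25.0
gamma_d_max = -0.1686 * math.log10(G) + 1.8434   # estimated max dry unit weight
w_opt = 2.9178 * math.log10(G) + 17.086          # estimated optimum water content (%)
print(gamma_d_max, w_opt)                        # ~1.61 and ~21.2 for G = 25
```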

  9. Statistical Inference for Data Adaptive Target Parameters.

    Science.gov (United States)

    Hubbard, Alan E; Kherad-Pajouh, Sara; van der Laan, Mark J

    2016-05-01

    Suppose one observes n i.i.d. copies of a random variable with a probability distribution that is known to be an element of a particular statistical model. In order to define our statistical target we partition the sample into V equal-size sub-samples, and use this partitioning to define V splits into an estimation sample (one of the V sub-samples) and a corresponding complementary parameter-generating sample. For each of the V parameter-generating samples, we apply an algorithm that maps the sample to a statistical target parameter. We define our sample-split data adaptive statistical target parameter as the average of these V sample-specific target parameters. We present an estimator (and corresponding central limit theorem) of this type of data adaptive target parameter. This general methodology for generating data adaptive target parameters is demonstrated with a number of practical examples that highlight new opportunities for statistical learning from data. This new framework provides a rigorous statistical methodology for both exploratory and confirmatory analysis within the same data. Given that more research is becoming "data-driven", the theory developed within this paper provides a new impetus for a greater involvement of statistical inference into problems that are being increasingly addressed by clever, yet ad hoc, pattern-finding methods. To suggest such potential, and to verify the predictions of the theory, extensive simulation studies, along with a data analysis based on adaptively determined intervention rules, are shown and give insight into how to structure such an approach. The results show that the data adaptive target parameter approach provides a general framework and resulting methodology for data-driven science.
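
    A minimal sketch of the sample-splitting construction, with a deliberately simple, hypothetical data-adaptive step (choosing the more variable of two columns) standing in for the algorithm that defines the target:

```python
# Sketch: for each of V splits, a hypothetical data-adaptive step
# (pick the more variable of two columns) defines the target on the
# parameter-generating sample; the target is then estimated on the
# held-out estimation sample and the V estimates are averaged.
import numpy as np

rng = np.random.default_rng(6)
data = rng.normal(size=(500, 2)) * np.array([1.0, 3.0])
V = 5
folds = np.array_split(rng.permutation(len(data)), V)

estimates = []
for v in range(V):
    est_idx = folds[v]                                   # estimation sample
    gen_idx = np.concatenate([folds[u] for u in range(V) if u != v])
    col = int(np.argmax(data[gen_idx].var(axis=0)))      # data-adaptive step
    estimates.append(data[est_idx, col].mean())          # held-out estimate

print(np.mean(estimates))   # the data adaptive target parameter estimate
```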

  10. Estimation of riverbank soil erodibility parameters using genetic ...

    Indian Academy of Sciences (India)

    Tapas Karmaker

    2017-11-07

    Nov 7, 2017 ... This is a study to verify the applicability of inverse parameter estimation ... successful modelling of the riverbank erosion requires precise estimation of ... For this simulation, about 40 iterations are found to attain the convergence.

  11. Determination of power system component parameters using nonlinear dead beat estimation method

    Science.gov (United States)

    Kolluru, Lakshmi

    Power systems are considered the most complex man-made wonders in existence today. In order to effectively supply the ever increasing demands of consumers, power systems are required to remain stable at all times. Stability and monitoring of these complex systems are achieved by strategically placed computerized control centers. State and parameter estimation is an integral part of these facilities, as it deals with identifying the unknown states and/or parameters of the systems. Advancements in measurement technologies and the introduction of phasor measurement units (PMU) provide detailed and dynamic information on all measurements. The accurate availability of dynamic measurements gives engineers the opportunity to expand and explore various possibilities in power system dynamic analysis and control. This thesis discusses the development of a parameter determination algorithm for nonlinear power systems, using dynamic data obtained from local measurements. The proposed algorithm was developed by observing the dead beat estimator used in state space estimation of linear systems. The dead beat estimator is considered very effective, as it is capable of obtaining the required results in a fixed number of steps. The number of steps required is related to the order of the system and the number of parameters to be estimated. The proposed algorithm combines the idea of the dead beat estimator with nonlinear finite difference methods to create an algorithm which is user friendly and can determine the parameters fairly accurately and effectively. The proposed algorithm is based on a deterministic approach, which uses dynamic data and mathematical models of power system components to determine the unknown parameters. The effectiveness of the algorithm is tested by implementing it to identify the unknown parameters of a synchronous machine. The MATLAB environment is used to create three test cases for dynamic analysis of the system with assumed known parameters. Faults are

  12. Stable Parameter Estimation for Autoregressive Equations with Random Coefficients

    Directory of Open Access Journals (Sweden)

    V. B. Goryainov

    2014-01-01

    Full Text Available In recent years there has been a growing interest in non-linear time series models. They are more flexible than traditional linear models and allow a more adequate description of real data. Among these models, the autoregressive model with random coefficients plays an important role. It is widely used in various fields of science and technology, for example, in physics, biology, economics and finance. The model parameters are the mean values of the autoregressive coefficients. Their evaluation is the main task of model identification. The basic method of estimation is still the least squares method, which gives good results for Gaussian time series, but is quite sensitive to even small disturbances in the assumption of Gaussian observations. In this paper we propose estimates which generalize the least squares estimate in the sense that the quadratic objective function is replaced by an arbitrary convex and even function. A reasonable choice of objective function allows one to keep the benefits of the least squares estimate and eliminate its shortcomings. In particular, the estimates can be made almost as effective as the least squares estimate in the Gaussian case, while losing almost no accuracy under small deviations of the probability distribution of the observations from the Gaussian distribution. The main result is the proof of consistency and asymptotic normality of the proposed estimates in the particular case of the one-parameter model describing a stationary process with finite variance. Another important result is the finding of the asymptotic relative efficiency of the proposed estimates in relation to the least squares estimate. This allows one to compare the two estimates, depending on the probability distribution of the innovation process and of the autoregressive coefficients. The results can be used to identify an autoregressive process, especially one of a non-Gaussian nature, and/or autoregressive processes observed with gross

  13. Robust seismicity forecasting based on Bayesian parameter estimation for epidemiological spatio-temporal aftershock clustering models.

    Science.gov (United States)

    Ebrahimian, Hossein; Jalayer, Fatemeh

    2017-08-29

    In the immediate aftermath of a strong earthquake and in the presence of an ongoing aftershock sequence, scientific advisories in terms of seismicity forecasts play a crucial role in emergency decision-making and risk mitigation. Epidemic Type Aftershock Sequence (ETAS) models are frequently used for forecasting the spatio-temporal evolution of seismicity in the short term. We propose robust forecasting of seismicity based on the ETAS model, exploiting the link between Bayesian inference and Markov Chain Monte Carlo simulation. The methodology considers the uncertainty not only in the model parameters, conditioned on the available catalogue of events that occurred before the forecasting interval, but also in the sequence of events that are going to happen during the forecasting interval. We demonstrate the methodology through retrospective early forecasting of the seismicity associated with the 2016 Amatrice seismic sequence in central Italy. We provide robust spatio-temporal short-term seismicity forecasts over various time intervals in the first few days elapsed after each of the three main events within the sequence; these forecasts predict the seismicity to within plus or minus two standard deviations of the mean estimate within the few hours elapsed after each main event.

  14. Application of genetic algorithms for parameter estimation in liquid chromatography

    International Nuclear Information System (INIS)

    Hernandez Torres, Reynier; Irizar Mesa, Mirtha; Tavares Camara, Leoncio Diogenes

    2012-01-01

    In chromatography, complex inverse problems related to parameter estimation and process optimization arise. Metaheuristic methods are general-purpose approximate algorithms which seek, and hopefully find, good solutions at a reasonable computational cost. These methods are iterative procedures that perform a robust search of the solution space. Genetic algorithms are optimization techniques based on the principles of genetics and natural selection. They have demonstrated very good performance as global optimizers in many types of applications, including inverse problems. In this work, the effectiveness of genetic algorithms in estimating parameters in liquid chromatography is investigated.

  15. HIV Model Parameter Estimates from Interruption Trial Data including Drug Efficacy and Reservoir Dynamics

    Science.gov (United States)

    Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan

    2012-01-01

    Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727

  16. PARAMETER ESTIMATION OF THE HYBRID CENSORED LOMAX DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Samir Kamel Ashour

    2010-12-01

    Full Text Available Survival analysis is used in various fields for analyzing data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time-to-event analysis. One of the difficulties which arise in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be measured exactly but partial information is available. Different circumstances can produce different types of censoring. The two most common censoring schemes used in life testing experiments are the Type-I and Type-II censoring schemes. The hybrid censoring scheme is a mixture of the Type-I and Type-II censoring schemes. In this paper we consider the estimation of the parameters of the Lomax distribution based on hybrid censored data. The parameters are estimated by the maximum likelihood and Bayesian methods. The Fisher information matrix has been obtained and can be used for constructing asymptotic confidence intervals.
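
    A minimal sketch of censored maximum-likelihood estimation for the Lomax distribution; a single fixed censoring time stands in for the hybrid scheme's stopping rule, and the sample size and starting values are assumptions.

```python
# Sketch: censored maximum-likelihood estimation for the Lomax
# distribution f(x) = (a/lam)*(1 + x/lam)**-(a+1). A fixed censoring
# time stands in for the hybrid scheme; data are simulated.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
a_true, lam_true, c = 2.0, 1.5, 2.0
t = lam_true * (rng.uniform(size=200) ** (-1 / a_true) - 1)  # Lomax draws
obs = t[t <= c]                       # exact failure times
n_cens = int(np.sum(t > c))           # units censored at time c

def neg_log_lik(log_params):
    a, lam = np.exp(log_params)       # keep both parameters positive
    ll = np.sum(np.log(a / lam) - (a + 1) * np.log1p(obs / lam))
    ll += n_cens * (-a * np.log1p(c / lam))   # log-survival terms
    return -ll

fit = minimize(neg_log_lik, np.log([1.0, 1.0]), method="Nelder-Mead")
print("MLE (alpha, lambda):", np.exp(fit.x))
```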

  17. Parameter estimation and hypothesis testing in linear models

    CERN Document Server

    Koch, Karl-Rudolf

    1999-01-01

    The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have also been added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility for errors remains with the author. I also want to express my thanks...

  18. Methodology for identifying parameters for the TRNSYS model Type 210 - wood pellet stoves and boilers

    Energy Technology Data Exchange (ETDEWEB)

    Persson, Tomas; Fiedler, Frank; Nordlander, Svante

    2006-05-15

    This report describes a method for performing measurements on boilers and stoves and for identifying parameters from the measurements for the boiler/stove model TRNSYS Type 210. The model can be used for detailed annual system simulations in TRNSYS. Experience from measurements on three different pellet stoves and four boilers was used to develop this methodology. Recommendations for the measurement set-up are given, together with the combustion theory required for data evaluation and preparation. The data evaluation showed that the uncertainties are quite large for the measured flue gas flow rate; for boilers and stoves with a high fraction of energy going to the water jacket, the calculated heat rate to the room may also have large uncertainties. A methodology for the parameter identification process is described, and identified parameters for two different stoves and three boilers are given. Finally, the identified models are compared with measured data, showing that the model generally agrees well with measurements under both stationary and dynamic conditions.

  19. Correcting the bias of empirical frequency parameter estimators in codon models.

    Directory of Open Access Journals (Sweden)

    Sergei Kosakovsky Pond

    2010-07-01

    Full Text Available Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators is biased, and that this bias has an adverse effect on goodness of fit and on estimates of substitution rates. We propose a "corrected" empirical estimator that begins with observed nucleotide counts, but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the standard approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the standard empirical estimators.

  20. Bridging the gaps between non-invasive genetic sampling and population parameter estimation

    Science.gov (United States)

    Francesca Marucco; Luigi Boitani; Daniel H. Pletscher; Michael K. Schwartz

    2011-01-01

    Reliable estimates of population parameters are necessary for effective management and conservation actions. The use of genetic data for capture-recapture (CR) analyses has become an important tool to estimate population parameters for elusive species. Strong emphasis has been placed on the genetic analysis of non-invasive samples, or on the CR analysis; however,...

  1. Retrospective forecast of ETAS model with daily parameters estimate

    Science.gov (United States)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

    We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on the daily updating of free parameters during the background, learning and test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that actually occurred. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecast number of events, due to the model parameters being kept fixed during the test. Moreover, the absence from the learning catalog of an event of magnitude comparable to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learned parameters unsuitable for describing the real seismicity. As an example of this methodological development, we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model with parameters kept fixed during the test period.

  2. METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS

    Directory of Open Access Journals (Sweden)

    V. Panteleev Andrei

    2017-01-01

    Full Text Available The article considers the use of metaheuristic methods of constrained global optimization - “Big Bang - Big Crunch”, “Fireworks Algorithm”, “Grenade Explosion Method” - for estimating the parameters of dynamic systems described by algebraic-differential equations. Parameter estimation is based on observations of the mathematical model's behavior. The values are obtained by minimizing a criterion that describes the total squared deviation of the state vector coordinates from the precise values observed at different moments of time. Parallelepiped-type restrictions are imposed on the parameter values. The metaheuristic methods of constrained global optimization used here do not guarantee the optimum, but allow a solution of rather good quality to be obtained in acceptable time. The algorithm for applying the metaheuristic methods is given. Alongside explicit methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two examples of the parameter estimation problem, differing in their mathematical models, are given. In the first example, a linear mathematical model describes the change of chemical reaction parameters, and in the second, a nonlinear mathematical model describes predator-prey dynamics, characterizing the changes in both populations. For each of the examples, calculation results from all three optimization methods are given, together with recommendations on how to choose the methods' parameters. The obtained numerical results demonstrate the efficiency of the proposed approach. The estimated parameter values differ slightly from the best known solutions, which were obtained by other means. To refine the results one should apply hybrid schemes that combine classical methods of optimization of zero, first and second orders and

  3. An improved method to estimate reflectance parameters for high dynamic range imaging

    Science.gov (United States)

    Li, Shiying; Deguchi, Koichiro; Li, Renfa; Manabe, Yoshitsugu; Chihara, Kunihiro

    2008-01-01

    Two methods are described to accurately estimate diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness, over the dynamic range of the camera used to capture input images. Neither method needs to segment color areas on an image, or to reconstruct a high dynamic range (HDR) image. The second method improves on the first, bypassing the requirement for specific separation of diffuse and specular reflection components. For the latter method, diffuse and specular reflectance parameters are estimated separately, using the least squares method. Reflection values are initially assumed to be diffuse-only reflection components, and are subjected to the least squares method to estimate diffuse reflectance parameters. Specular reflection components, obtained by subtracting the computed diffuse reflection components from reflection values, are then subjected to a logarithmically transformed equation of the Torrance-Sparrow reflection model, and specular reflectance parameters for gloss intensity and surface roughness are finally estimated using the least squares method. Experiments were carried out using both methods, with simulation data at different saturation levels, generated according to the Lambert and Torrance-Sparrow reflection models, and the second method, with spectral images captured by an imaging spectrograph and a moving light source. Our results show that the second method can estimate the diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness more accurately and faster than the first one, so that colors and gloss can be reproduced more efficiently for HDR imaging.
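
    The log-transform step can be sketched on the specular lobe alone, dropping the geometric factors of the full Torrance-Sparrow model for brevity; the data below are synthetic assumptions.

```python
# Sketch: the specular lobe I = k*exp(-th**2/(2*s**2)) becomes linear
# in th**2 after taking logs, so gloss intensity k and roughness s
# follow from a straight-line least-squares fit. Data are synthetic.
import numpy as np

rng = np.random.default_rng(8)
k_true, s_true = 5.0, 0.15
th = np.linspace(0.0, 0.4, 30)    # half-angle between normal and halfway vector
I = k_true * np.exp(-th**2 / (2 * s_true**2)) \
    * np.exp(rng.normal(0, 0.02, th.size))

slope, intercept = np.polyfit(th**2, np.log(I), 1)
s_hat = np.sqrt(-1.0 / (2.0 * slope))   # surface roughness
k_hat = np.exp(intercept)               # gloss intensity
print(k_hat, s_hat)
```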

  4. NONLINEAR PLANT PIECEWISE-CONTINUOUS MODEL MATRIX PARAMETERS ESTIMATION

    Directory of Open Access Journals (Sweden)

    Roman L. Leibov

    2017-09-01

    Full Text Available This paper presents a technique for estimating the matrix parameters of a nonlinear plant piecewise-continuous model, using nonlinear model time responses and a random search method. One application area of piecewise-continuous models is identified. The results of applying the proposed approach to the formation of a piecewise-continuous model of an aircraft turbofan engine are presented.

  5. The role of interior watershed processes in improving parameter estimation and performance of watershed models.

    Science.gov (United States)

    Yen, Haw; Bailey, Ryan T; Arabi, Mazdak; Ahmadi, Mehdi; White, Michael J; Arnold, Jeffrey G

    2014-09-01

    Watershed models typically are evaluated solely through comparison of in-stream water and nutrient fluxes with measured data using established performance criteria, whereas processes and responses within the interior of the watershed that govern these global fluxes often are neglected. Due to the large number of parameters at the disposal of these models, circumstances may arise in which excellent global results are achieved using inaccurate magnitudes of these "intra-watershed" responses. When used for scenario analysis, a given model hence may inaccurately predict the global, in-stream effect of implementing land-use practices at the interior of the watershed. In this study, data regarding internal watershed behavior are used to constrain parameter estimation to maintain realistic intra-watershed responses while also matching available in-stream monitoring data. The methodology is demonstrated for the Eagle Creek Watershed in central Indiana. Streamflow and nitrate (NO3) loading are used as global in-stream comparisons, with two process responses, the annual mass of denitrification and the ratio of NO3 losses from subsurface and surface flow, used to constrain parameter estimation. Results show that imposing these constraints not only yields realistic internal watershed behavior but also provides good in-stream comparisons. Results further demonstrate that in the absence of intra-watershed constraints, evaluation of nutrient abatement strategies could be misleading, even though typical performance criteria are satisfied. Incorporating intra-watershed responses yields a watershed model that more accurately represents the observed behavior of the system and hence a tool that can be used with confidence in scenario evaluation.

  6. Uterotonic use immediately following birth: using a novel methodology to estimate population coverage in four countries.

    Science.gov (United States)

    Ricca, Jim; Dwivedi, Vikas; Varallo, John; Singh, Gajendra; Pallipamula, Suranjeen Prasad; Amade, Nazir; de Luz Vaz, Maria; Bishanga, Dustan; Plotkin, Marya; Al-Makaleh, Bushra; Suhowatsky, Stephanie; Smith, Jeffrey Michael

    2015-01-22

    Postpartum hemorrhage (PPH) is the leading cause of maternal mortality in developing countries. While the incidence of PPH can be dramatically reduced by uterotonic use immediately following birth (UUIFB) in both community and facility settings, national coverage estimates are rare. Most national health systems have no indicator to track this, and community-based measurements are even scarcer. To fill this information gap, a methodology for estimating national coverage for UUIFB was developed and piloted in four settings. The rapid estimation methodology consisted of convening a group of national technical experts and using the Delphi method to come to consensus on key data elements that were applied to a simple algorithm, generating an imprecise national estimate of coverage of UUIFB. Data elements needed for the calculation were the distribution of births by location and estimates of UUIFB in each of those settings, adjusted to take account of stockout rates and potency of uterotonics. This exercise was conducted in 2013 in Mozambique, Tanzania, the state of Jharkhand in India, and Yemen. Available data showed that deliveries in public health facilities account for approximately half of births in Mozambique and Tanzania, 16% in Jharkhand and 24% of births in Yemen. Significant proportions of births occur in private facilities in Jharkhand and faith-based facilities in Tanzania. Estimated uterotonic use for facility births ranged from 70 to 100%. Uterotonics are not used routinely for PPH prevention at home births in any of the settings. National UUIFB coverage estimates for all births were 43% in Mozambique, 40% in Tanzania, 44% in Jharkhand, and 14% in Yemen. This methodology for estimating coverage of UUIFB was found to be feasible and acceptable. While the exercise produces imprecise estimates whose validity cannot be assessed objectively in the absence of a gold standard estimate, stakeholders felt they were accurate enough to be actionable. The exercise

  7. A termination criterion for parameter estimation in stochastic models in systems biology.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven

    2015-11-01

    Parameter estimation procedures are a central aspect of modeling approaches in systems biology. They are often computationally expensive, especially when the models take stochasticity into account. Typically parameter estimation involves the iterative optimization of an objective function that describes how well the model fits some measured data with a certain set of parameter values. In order to limit the computational expenses it is therefore important to apply an adequate stopping criterion for the optimization process, so that the optimization continues at least until a reasonable fit is obtained, but not much longer. In the case of stochastic modeling, at least some parameter estimation schemes involve an objective function that is itself a random variable. This means that plain convergence tests are not a priori suitable as stopping criteria. This article suggests a termination criterion suited to optimization problems in parameter estimation arising from stochastic models in systems biology. The termination criterion is developed for optimization algorithms that involve populations of parameter sets, such as particle swarm or evolutionary algorithms. It is based on comparing the variance of the objective function over the whole population of parameter sets with the variance of repeated evaluations of the objective function at the best parameter set. The performance is demonstrated for several different algorithms. To test the termination criterion we choose polynomial test functions as well as systems biology models such as an Immigration-Death model and a bistable genetic toggle switch. The genetic toggle switch is an especially challenging test case as it shows a stochastic switching between two steady states which is qualitatively different from the model behavior in a deterministic model. Copyright © 2015. Published by Elsevier Ireland Ltd.
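    A minimal numpy sketch of such a criterion (function and parameter names are hypothetical, not the authors' implementation): optimization stops once the spread of objective values across the population is no larger than the noise floor obtained by re-evaluating the stochastic objective repeatedly at the current best parameter set.

```python
import numpy as np

def should_terminate(objective, population, n_repeats=20, factor=1.0):
    """Termination test for population-based optimizers with a noisy objective.

    objective  : callable mapping a parameter vector to a noisy scalar cost
    population : (n_members, n_params) array of current parameter sets
    """
    scores = np.array([objective(p) for p in population])
    best = population[np.argmin(scores)]
    # Repeated evaluations at the best point estimate the objective's noise floor
    noise_var = np.var([objective(best) for _ in range(n_repeats)], ddof=1)
    # Spread over the population measures how far the optimizer has converged
    pop_var = np.var(scores, ddof=1)
    return pop_var <= factor * noise_var
```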

  8. A framework for scalable parameter estimation of gene circuit models using structural information.

    Science.gov (United States)

    Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin

    2013-07-01

    Systematic and scalable parameter estimation is key to constructing complex gene regulatory models and ultimately to facilitating an integrative systems biology approach to quantitatively understanding the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic data sets and one time-series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher-quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied to modeling of gene circuits, our results suggest that more tailored approaches exploiting domain-specific information may be a key to reverse engineering complex biological systems. http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.

  9. Modified polarimetric bidirectional reflectance distribution function with diffuse scattering: surface parameter estimation

    Science.gov (United States)

    Zhan, Hanyu; Voelz, David G.

    2016-12-01

    The polarimetric bidirectional reflectance distribution function (pBRDF) describes the relationships between incident and scattered Stokes parameters, but the familiar surface-only microfacet pBRDF cannot capture diffuse scattering contributions and depolarization phenomena. We propose a modified pBRDF model with a diffuse scattering component developed from the Kubelka-Munk and Le Hors et al. theories, and apply it in the development of a method to jointly estimate refractive index, slope variance, and diffuse scattering parameters from a series of Stokes parameter measurements of a surface. An application of the model and estimation approach to experimental data published by Priest and Meier shows improved correspondence with measurements of normalized Mueller matrix elements. By converting the Stokes/Mueller calculus formulation of the model to a degree of polarization (DOP) description, the estimation results of the parameters from measured DOP values are found to be consistent with a previous DOP model and results.

  10. Evapotranspiration estimation using a parameter-parsimonious energy partition model over Amazon basin

    Science.gov (United States)

    Xu, D.; Agee, E.; Wang, J.; Ivanov, V. Y.

    2017-12-01

    The increased frequency and severity of droughts in the Amazon region have emphasized the potential vulnerability of the rainforests to heat- and drought-induced stresses, highlighting the need to reduce the uncertainty in estimates of regional evapotranspiration (ET) and to quantify the resilience of the forest. Ground-based observations for estimating ET are resource intensive, making methods based on remotely sensed observations an attractive alternative. Several methodologies have been developed to estimate ET from satellite data, but challenges remain in model parameterization and limited satellite coverage, reducing their utility for monitoring biodiverse regions. In this work, we apply a novel surface energy partition method (Maximum Entropy Production; MEP) based on Bayesian probability theory and nonequilibrium thermodynamics to derive ET time series from satellite data for the Amazon basin. For a large, sparsely monitored region such as the Amazon, this approach has the advantage of requiring only single-level measurements of net radiation, temperature, and specific humidity. Furthermore, it is not sensitive to the uncertainty of the input data and model parameters. In this first application of MEP theory to a tropical forest biome, we assess its performance at various spatiotemporal scales against diverse field data sets. Specifically, the objective of this work is to test this method using eddy flux data for several locations across Amazonia at sub-daily, monthly, and annual scales and to compare the new estimates with those from traditional methods. Analyses of the derived ET time series will contribute to reducing the current knowledge gap surrounding the much debated response of the Amazon basin to droughts and offer a template for monitoring long-term changes in the global hydrologic cycle due to anthropogenic and natural causes.

  11. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on first-order and second-order statistics, different information metrics, including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE), are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on the various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problem.
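    For background, the EnKF analysis step that such a design loop wraps around can be sketched in a few lines of numpy (a perturbed-observation variant with hypothetical names; the SEOD information metrics themselves are not implemented here):

```python
import numpy as np

def enkf_parameter_update(theta_ens, forward, y_obs, obs_std, rng=None):
    """One perturbed-observation EnKF analysis step for parameter estimation.

    theta_ens : (n_ens, n_param) prior parameter ensemble
    forward   : model mapping a parameter vector to predicted observations
    y_obs     : (n_obs,) measurement vector with noise std obs_std
    """
    rng = np.random.default_rng(rng)
    n_ens, n_obs = theta_ens.shape[0], len(y_obs)
    pred = np.array([forward(t) for t in theta_ens])      # (n_ens, n_obs)
    dtheta = theta_ens - theta_ens.mean(axis=0)
    dpred = pred - pred.mean(axis=0)
    c_tp = dtheta.T @ dpred / (n_ens - 1)                 # parameter-output covariance
    c_pp = dpred.T @ dpred / (n_ens - 1)                  # output covariance
    gain = c_tp @ np.linalg.inv(c_pp + obs_std**2 * np.eye(n_obs))
    # Perturbed observations keep the analysis ensemble spread statistically consistent
    y_pert = y_obs + obs_std * rng.standard_normal((n_ens, n_obs))
    return theta_ens + (y_pert - pred) @ gain.T
```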

  12. Comparison of the COMRADEX-IV and AIRDOS-EPA methodologies for estimating the radiation dose to man from radionuclide releases to the atmosphere

    International Nuclear Information System (INIS)

    Miller, C.W.; Hoffman, F.O.; Dunning, D.E. Jr.

    1981-01-01

    This report presents a comparison between two computerized methodologies for estimating the radiation dose to man from radionuclide releases to the atmosphere. The COMRADEX-IV code was designed to provide a means of assessing potential radiological consequences from postulated power reactor accidents. The AIRDOS-EPA code was developed primarily to assess routine radionuclide releases from nuclear facilities. Although a number of different calculations are performed by these codes, three calculations are common to both: atmospheric dispersion, estimation of internal dose from inhalation, and estimation of external dose from immersion in air containing gamma-emitting radionuclides. The models used in these calculations were examined and found, in general, to be the same. Most differences in the doses calculated by the two codes are due to differences in the values chosen for input parameters and not due to model differences. A sample problem is presented for illustration.

  13. Estimating Propensity Parameters Using Google PageRank and Genetic Algorithms.

    Science.gov (United States)

    Murrugarra, David; Miller, Jacob; Mueller, Alex N

    2016-01-01

    Stochastic Boolean networks, or more generally, stochastic discrete networks, are an important class of computational models for molecular interaction networks. The stochasticity stems from the updating schedule. Standard updating schedules include the synchronous update, where all the nodes are updated at the same time, and the asynchronous update, where a random node is updated at each time step. The former produces a deterministic dynamics while the latter a stochastic dynamics. A more general stochastic setting considers propensity parameters for updating each node. Stochastic Discrete Dynamical Systems (SDDS) are a modeling framework that considers two propensity parameters for updating each node and uses one when the update has a positive impact on the variable, that is, when the update causes the variable to increase its value, and uses the other when the update has a negative impact, that is, when the update causes it to decrease its value. This framework offers additional features for simulations but also adds complexity to the parameter estimation of the propensities. This paper presents a method for estimating the propensity parameters for SDDS. The method is based on adding noise to the system using the Google PageRank approach to make the system ergodic and thus guaranteeing the existence of a stationary distribution. Then with the use of a genetic algorithm, the propensity parameters are estimated. Approximation techniques that make the search algorithms efficient are also presented, and Matlab/Octave code to test the algorithms is available at http://www.ms.uky.edu/~dmu228/GeneticAlg/Code.html.
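    The ergodicity device is easy to illustrate (a numpy sketch with an illustrative damping value; not the paper's code): mixing the chain's transition matrix with a uniform teleportation term, exactly as in Google PageRank, guarantees a unique stationary distribution, which a genetic algorithm can then compare against observed state frequencies when scoring candidate propensities.

```python
import numpy as np

def ergodic_stationary(P, damping=0.85):
    """Stationary distribution of a PageRank-style regularized Markov chain.

    P : (n, n) row-stochastic transition matrix of the discrete dynamics
    """
    n = P.shape[0]
    # Teleportation noise makes every state reachable, hence the chain ergodic
    G = damping * P + (1.0 - damping) * np.ones((n, n)) / n
    vals, vecs = np.linalg.eig(G.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])  # eigenvalue-1 eigenvector
    return pi / pi.sum()

P = np.array([[0.0, 1.0],
              [0.5, 0.5]])                                # toy 2-state dynamics
print(ergodic_stationary(P))
```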

  14. Estimation of apparent kinetic parameters of polymer pyrolysis with complex thermal degradation behavior

    International Nuclear Information System (INIS)

    Srimachai, Taranee; Anantawaraskul, Siripon

    2010-01-01

    Full text: Thermal degradation behavior during polymer pyrolysis can typically be described using three apparent kinetic parameters (i.e., pre-exponential factor, activation energy, and reaction order). Several efficient techniques have been developed to estimate these apparent kinetic parameters for simple thermal degradation behavior (i.e., a single apparent pyrolysis reaction). Unfortunately, these techniques cannot be directly extended to the case of polymer pyrolysis with complex thermal degradation behavior (i.e., multiple concurrent reactions forming single or multiple DTG peaks). In this work, we propose a deconvolution method to determine the number of apparent reactions and to estimate the three apparent kinetic parameters and the contribution of each reaction for polymer pyrolysis with complex thermal degradation behavior. The proposed technique was validated with model and experimental pyrolysis data of several polymer blends with known compositions. The results showed that (1) the number of reactions and (2) the three apparent kinetic parameters and the contribution of each reaction can be estimated reasonably. The simulated DTG curves with estimated parameters also agree well with the experimental DTG curves. (author)

  15. Efficiency Optimization Control of IPM Synchronous Motor Drives with Online Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Sadegh Vaez-Zadeh

    2011-04-01

    Full Text Available This paper describes an efficiency optimization control method for high performance interior permanent magnet synchronous motor drives with online estimation of motor parameters. The control system is based on an input-output feedback linearization method which provides high performance control and simultaneously ensures the minimization of the motor losses. The controllable electrical loss can be minimized by the optimal control of the armature current vector. It is shown that parameter variations, except near the nominal conditions, have an undesirable effect on the controller performance. Therefore, a parameter estimation method based on the second method of Lyapunov is presented which guarantees the stability and convergence of the estimation. The extensive simulation results show the feasibility of the proposed controller and observer and their desirable performance.

  16. Determination of Optimal Parameters for Diffusion Bonding of Semi-Solid Casting Aluminium Alloy by Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Kaewploy Somsak

    2015-01-01

    Full Text Available Available liquid-state welding techniques are prone to gas porosity problems; to avoid these, solid-state bonding is usually the preferred alternative. Among solid-state bonding techniques, diffusion bonding is often employed in welding aluminium alloy automotive parts in order to enhance their mechanical properties. However, there has been no standard procedure, nor any definitive criterion, for judicious setting of welding parameters. It is thus important to find the set of optimal parameters for effective diffusion bonding. This work proposes the use of response surface methodology in determining such a set of optimal parameters. Response surface methodology is more efficient in dealing with complex processes than other available techniques. There are two variations of response surface methodology; the one adopted in this work is the central composite design approach, because even when the initial upper and lower bounds of the desired parameters are exceeded, the central composite design approach is still capable of yielding optimal values of parameters that lie outside the initially preset range. Results from the experiments show that the pressing pressure and the holding time affect the tensile strength of the joint. The data obtained from the experiment fit well to a quadratic equation with a high coefficient of determination (R2 = 94.21%). It is found that the optimal parameters in the process of joining semi-solid casting aluminium alloy by diffusion bonding are a pressing pressure of 2.06 MPa and a holding time of 214 minutes, achieving the highest tensile strength of 142.65 MPa.
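    The core response-surface computation is compact (a numpy sketch; the design points and strength values below are invented for illustration and are not the study's measurements):

```python
import numpy as np

# Illustrative central-composite-style design: pressure (MPa), time (min) -> strength (MPa)
X = np.array([[1.0, 120], [1.0, 240], [3.0, 120], [3.0, 240],
              [0.6, 180], [3.4, 180], [2.0, 95], [2.0, 265], [2.0, 180]])
y = np.array([98.0, 115.0, 120.0, 108.0, 90.0, 112.0, 101.0, 118.0, 140.0])

p, t = X[:, 0], X[:, 1]
# Full second-order response surface: 1, p, t, p^2, t^2, p*t
A = np.column_stack([np.ones_like(p), p, t, p**2, t**2, p * t])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Stationary point of the fitted quadratic: solve the gradient-equals-zero system
H = np.array([[2 * beta[3], beta[5]],
              [beta[5], 2 * beta[4]]])
p_opt, t_opt = np.linalg.solve(H, -np.array([beta[1], beta[2]]))
print(f"estimated optimum: pressure = {p_opt:.2f} MPa, holding time = {t_opt:.0f} min")
```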

  17. From LCAs to simplified models: a generic methodology applied to wind power electricity.

    Science.gov (United States)

    Padey, Pierryves; Girard, Robin; le Boulch, Denis; Blanc, Isabelle

    2013-02-05

    This study presents a generic methodology to produce simplified models able to provide a comprehensive life cycle impact assessment of energy pathways. The methodology relies on the application of global sensitivity analysis to identify key parameters explaining the impact variability of systems over their life cycle. Simplified models are built upon the identification of such key parameters. The methodology is applied to one energy pathway: onshore wind turbines of medium size, considering a large sample of possible configurations representative of European conditions. Among several technological, geographical, and methodological parameters, we identified the turbine load factor and the wind turbine lifetime as the most influential parameters. Greenhouse gas (GHG) performances have been plotted as a function of these identified key parameters. Using these curves, the GHG performance of a specific wind turbine can be estimated, thus avoiding the undertaking of an extensive Life Cycle Assessment (LCA). This methodology should be useful for decision makers, providing them with a robust yet simple support tool for assessing the environmental performance of energy systems.
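    A crude variance-based first-order sensitivity index can be estimated by binning, which conveys the idea behind such key-parameter screening (a numpy sketch with hypothetical names; published LCA studies typically use dedicated Sobol estimators):

```python
import numpy as np

def first_order_sensitivity(sample_inputs, model, n=20000, bins=40, rng=0):
    """Estimate Var(E[Y|x_j]) / Var(Y) for each input by quantile binning.

    sample_inputs : callable(n, rng) -> (n, k) array of parameter samples
    model         : callable mapping one parameter vector to a scalar impact
    """
    rng = np.random.default_rng(rng)
    X = sample_inputs(n, rng)
    Y = np.array([model(x) for x in X])
    indices = []
    for j in range(X.shape[1]):
        edges = np.quantile(X[:, j], np.linspace(0, 1, bins + 1))
        which = np.digitize(X[:, j], edges[1:-1])         # bin index of each sample
        cond_mean = np.array([Y[which == b].mean() for b in range(bins)])
        weights = np.bincount(which, minlength=bins) / n
        indices.append(np.sum(weights * (cond_mean - Y.mean())**2) / Y.var())
    return np.array(indices)

# Toy screening: the second input should dominate the output variance
S = first_order_sensitivity(lambda n, r: r.uniform(0, 1, (n, 3)),
                            lambda x: x[0] + 3 * x[1] + 0.1 * x[2])
print(S)
```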

  18. Estimating unknown parameters in haemophilia using expert judgement elicitation.

    Science.gov (United States)

    Fischer, K; Lewandowski, D; Janssen, M P

    2013-09-01

    The increasing attention to healthcare costs and treatment efficiency has led to an increasing demand for quantitative data concerning patient and treatment characteristics in haemophilia. However, most of these data are difficult to obtain. The aim of this study was to use expert judgement elicitation (EJE) to estimate currently unavailable key parameters for treatment models in severe haemophilia A. Using a formal expert elicitation procedure, 19 international experts provided information on (i) natural bleeding frequency according to age and onset of bleeding, (ii) treatment of bleeds, (iii) time needed to control bleeding after starting secondary prophylaxis, (iv) dose requirements for secondary prophylaxis according to onset of bleeding, and (v) life expectancy. For each parameter, experts provided their quantitative estimates (median, P10, P90), which were combined using a graphical method. In addition, information was obtained concerning key decision parameters of haemophilia treatment. There was most agreement between experts regarding bleeding frequencies for patients treated on demand with an average onset of joint bleeding (1.7 years): median 12 joint bleeds per year (95% confidence interval 0.9-36) for patients ≤ 18, and 11 (0.8-61) for adult patients. Less agreement was observed concerning the estimated effective dose for secondary prophylaxis in adults: median 2000 IU every other day. The majority (63%) of experts expected that a single minor joint bleed could cause irreversible damage, and would accept up to three minor joint bleeds or one trauma-related joint bleed annually on prophylaxis. Expert judgement elicitation allowed structured capturing of quantitative expert estimates. It generated novel data to be used in computer modelling, clinical care, and trial design. © 2013 John Wiley & Sons Ltd.

  19. Methodology to estimate the cost of the severe accidents risk / maximum benefit

    International Nuclear Information System (INIS)

    Mendoza, G.; Flores, R. M.; Vega, E.

    2016-09-01

    For programs and activities to manage aging effects, any changes to plant operations, inspections, maintenance activities, systems and administrative control procedures during the renewal period that could impact the environment should be characterized and designed to manage the effects of aging, as required by 10 CFR Part 54. Environmental impacts significantly different from those described in the final environmental statement for the current operating license should be described in detail. When complying with the requirements of a license renewal application, the Severe Accident Mitigation Alternatives (SAMA) analysis is contained in a supplement to the environmental report of the plant that meets the requirements of 10 CFR Part 51. In this paper, the methodology for estimating the cost of severe accident risk is established and discussed. It is then used to identify and select alternatives for severe accident mitigation, which are analyzed to estimate the maximum benefit an alternative could achieve if it eliminated all risk. The cost of severe accident risk is estimated using the regulatory analysis techniques of the US Nuclear Regulatory Commission (NRC). The ultimate goal of implementing the methodology is to identify candidates for SAMA that have the potential to reduce the severe accident risk and to determine whether the implementation of each candidate is cost-effective. (Author)

  20. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2013-01-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a blackbox. The proposed method employs directional derivatives within a Gauss–Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage based covariance estimators within ISEM. The proposed method is successfully applied on several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of utilized ensembles and in terms of error convergence rates.

  1. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a blackbox. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage based covariance estimators within ISEM. The proposed method is successfully applied on several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of utilized ensembles and in terms of error convergence rates. © 2013 Elsevier Inc.
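    A simplified single iteration of this scheme can be sketched as follows (an illustrative variant, not the authors' exact ISEM: here the Jacobian is estimated from random directional derivatives and the Gauss-Newton step is Tikhonov-regularized):

```python
import numpy as np

def isem_step(theta, simulator, y_obs, n_dirs=20, sigma=0.05, reg=1e-3, rng=None):
    """One ensemble-based, regularized Gauss-Newton update for a black-box simulator.

    Assumes n_dirs >= len(theta) so the Jacobian least-squares fit is determined.
    """
    rng = np.random.default_rng(rng)
    r0 = simulator(theta) - y_obs                         # current residual
    U = rng.standard_normal((n_dirs, theta.size))         # probe directions
    # Directional derivatives of the residual along each probe direction
    D = np.array([(simulator(theta + sigma * u) - y_obs - r0) / sigma for u in U])
    J = np.linalg.lstsq(U, D, rcond=None)[0].T            # (n_obs, n_param) Jacobian estimate
    # Tikhonov-regularized Gauss-Newton step (no adjoint code required)
    H = J.T @ J + reg * np.eye(theta.size)
    return theta - np.linalg.solve(H, J.T @ r0)
```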

  2. Parameter and State Estimation of Large-Scale Complex Systems Using Python Tools

    Directory of Open Access Journals (Sweden)

    M. Anushka S. Perera

    2015-07-01

    Full Text Available This paper discusses topics related to automating parameter, disturbance and state estimation analysis of large-scale complex nonlinear dynamic systems using free programming tools. For large-scale complex systems, before implementing any state estimator, the system should be analyzed for structural observability; the structural observability analysis can be automated using Modelica and Python. As a result of structural observability analysis, the system may be decomposed into subsystems, some of which may be observable (with respect to parameters, disturbances, and states) while others may not. The state estimation process is carried out for the observable subsystems, and the optimum number of additional measurements is prescribed for unobservable subsystems to make them observable. In this paper, an industrial case study is considered: the copper production process at Glencore Nikkelverk, Kristiansand, Norway. The copper production process is a large-scale complex system. It is shown how to implement various state estimators, in Python, to estimate parameters and disturbances, in addition to states, based on available measurements.
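    A first-pass check of this kind can be sketched with networkx (a hypothetical toy model; note that reachability of a measurement is only a necessary condition for structural observability, not a sufficient one):

```python
import networkx as nx

# Toy structural model: edge x_j -> x_i when x_j appears in the equation of x_i;
# measured quantities feed the sensor node 'y'
G = nx.DiGraph()
G.add_edges_from([("x1", "x2"), ("x2", "x3"), ("x3", "x1"),
                  ("x4", "x4"),             # x4 influences nothing measured
                  ("x3", "y")])             # only x3 is measured

measured = {"y"}
# A state can only be observable if it can reach some measurement node
reaches_sensor = set().union(*(nx.ancestors(G, m) for m in measured))
candidates = {n for n in G if n.startswith("x")} - reaches_sensor
print("states needing additional measurements:", sorted(candidates))   # ['x4']
```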

  3. Development of methodologies for the estimation of thermal properties associated with aerospace vehicles

    Science.gov (United States)

    Scott, Elaine P.

    1994-01-01

    Thermal stress analyses are an important aspect in the development of aerospace vehicles at NASA-LaRC. These analyses require knowledge of the temperature distributions within the vehicle structures, which consequently necessitates accurate thermal property data. The overall goal of this ongoing research effort is to develop methodologies for the estimation of the thermal property data needed to describe the temperature responses of these complex structures. The research strategy undertaken utilizes a building block approach: the idea is to first focus on the development of property estimation methodologies for relatively simple conditions, such as isotropic materials at constant temperatures, and then systematically modify the technique for the analysis of more and more complex systems, such as anisotropic multi-component systems. The estimation methodology utilized is a statistically based method which incorporates experimental data and a mathematical model of the system. Several aspects of this overall research effort were investigated during the ASEE summer program. One important aspect involved the calibration of the estimation procedure for the estimation of the thermal properties through the thickness of a standard material. Transient experiments were conducted using a Pyrex standard at various temperatures, and the thermal properties (thermal conductivity and volumetric heat capacity) were estimated at each temperature. Confidence regions for the estimated values were also determined. These results were then compared to documented values. Another set of experimental tests was conducted on carbon composite samples at different temperatures. Again, the thermal properties were estimated for each temperature, and the results were compared with values obtained using another technique. In both sets of experiments, a 10-15 percent offset between the estimated values and the previously determined values was found.

  4. Reporting of various methodological and statistical parameters in negative studies published in prominent Indian Medical Journals: a systematic review.

    Science.gov (United States)

    Charan, J; Saxena, D

    2014-01-01

    Biased negative studies not only reflect poor research effort but also have an impact on 'patient care', as they prevent further research with similar objectives, leading to potential research areas remaining unexplored. Hence, published 'negative studies' should be methodologically strong. All parameters that may help a reader to judge the validity of results and conclusions should be reported in published negative studies. There is a paucity of data on the reporting of statistical and methodological parameters in negative studies published in Indian medical journals. The present systematic review was designed with the aim of critically evaluating negative studies published in prominent Indian medical journals for the reporting of statistical and methodological parameters. Systematic review. All negative studies published in 15 Science Citation Indexed (SCI) medical journals published from India were included in the present study. Investigators involved in the study evaluated all negative studies for the reporting of various parameters. Primary endpoints were the reporting of "power" and "confidence interval." Power was reported in 11.8% of studies. Confidence intervals were reported in 15.7% of studies. Most parameters, such as sample size calculation (13.2%), type of sampling method (50.8%), name of statistical tests (49.1%), adjustment of multiple endpoints (1%) and post hoc power calculation (2.1%), were poorly reported. The frequency of reporting was higher in clinical trials than in other study designs, and in journals with an impact factor greater than 1 than in journals with an impact factor less than 1. Negative studies published in prominent Indian medical journals do not report statistical and methodological parameters adequately, and this may create problems in the critical appraisal of findings reported in these journals by their readers.

  5. Methodological Challenges in Estimating Trends and Burden of Cardiovascular Disease in Sub-Saharan Africa

    Directory of Open Access Journals (Sweden)

    Jacob K. Kariuki

    2015-01-01

    Full Text Available Background. Although 80% of the burden of cardiovascular disease (CVD) is in developing countries, the 2010 global burden of disease (GBD) estimates have been cited to support a premise that sub-Saharan Africa (SSA) is exempt from the CVD epidemic sweeping across developing countries. The widely publicized perspective influences research priorities and resource allocation at a time when secular trends indicate a rapid increase in the prevalence of CVD in SSA by 2030. Purpose. To explore methodological challenges in estimating trends and burden of CVD in SSA via appraisal of the current CVD statistics and literature. Methods. This review was guided by the Critical review methodology described by Grant and Booth. The review traces the origins and evolution of GBD metrics and then explores the methodological limitations inherent in the current GBD statistics. Articles were included based on their conceptual contribution to the existing body of knowledge on the burden of CVD in SSA. Results/Conclusion. Cognizant of the methodological challenges discussed, we caution against extrapolation of the global burden of CVD statistics in a way that underrates the actual but uncertain impact of CVD in SSA. We conclude by making a case for optimal but cost-effective surveillance and prevention of CVD in SSA.

  6. On Drift Parameter Estimation in Models with Fractional Brownian Motion by Discrete Observations

    Directory of Open Access Journals (Sweden)

    Yuliya Mishura

    2014-06-01

    Full Text Available We study the problem of unknown drift parameter estimation in a stochastic differential equation driven by fractional Brownian motion. We represent the likelihood ratio as a function of the observable process. The form of this representation is in general rather complicated. However, in the simplest case it can be simplified, and we can discretize it to establish the a.s. convergence of the discretized version of the maximum likelihood estimator to the true value of the parameter. We also investigate a non-standard estimator of the drift parameter, showing further its strong consistency.

  7. A study of calculation methodology and experimental measurements of the kinetic parameters for source driven subcritical systems

    International Nuclear Information System (INIS)

    Lee, Seung Min

    2009-01-01

    This work presents a theoretical study of reactor kinetics focusing on the methodology of calculation and the experimental measurements of the so-called kinetic parameters. A comparison between the methodology based on Dulla's formalism and the classical method is made. The objective is to exhibit the dependence of the parameters on the subcriticality level and the perturbation. Two different slab-type systems were considered: a thermal one and a fast one, both with homogeneous media. A one-group diffusion model was used for the fast system and a two-group diffusion model for the thermal system, considering in both cases only one family of precursors. The solutions were obtained using the expansion method. Descriptions of the main experimental methods for measuring the kinetic parameters are also presented, in order to question the compatibility of these methods in the subcritical region. (author)

  8. Analytic Investigation Into Effect of Population Heterogeneity on Parameter Ratio Estimates

    International Nuclear Information System (INIS)

    Schinkel, Colleen; Carlone, Marco; Warkentin, Brad; Fallone, B. Gino

    2007-01-01

    Purpose: A homogeneous tumor control probability (TCP) model has previously been used to estimate the α/β ratio for prostate cancer from clinical dose-response data. For the ratio to be meaningful, it must be assumed that parameter ratios are not sensitive to the type of tumor control model used. We investigated the validity of this assumption by deriving analytic relationships between the α/β estimates from a homogeneous TCP model, ignoring interpatient heterogeneity, and those of the corresponding heterogeneous (population-averaged) model that incorporated heterogeneity. Methods and Materials: The homogeneous and heterogeneous TCP models can both be written in terms of the geometric parameters D50 and γ50. We show that the functional forms of these models are similar. This similarity was used to develop an expression relating the homogeneous and heterogeneous estimates for the α/β ratio. The expression was verified numerically by generating pseudo-data from a TCP curve with known parameters and then using the homogeneous and heterogeneous TCP models to estimate the α/β ratio for the pseudo-data. Results: When the dominant form of interpatient heterogeneity is that of radiosensitivity, the homogeneous and heterogeneous α/β estimates differ. This indicates that the presence of this heterogeneity affects the value of the α/β ratio derived from analysis of TCP curves. Conclusions: The α/β ratio estimated from clinical dose-response data is model dependent: a heterogeneous TCP model that accounts for heterogeneity in radiosensitivity will produce a greater α/β estimate than that resulting from a homogeneous TCP model.
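    For reference, one common logistic parameterization of a homogeneous TCP model in terms of D50 and γ50, together with its population-averaged (heterogeneous) counterpart, reads as follows (an illustrative form; the paper's exact expressions are not reproduced here):

```latex
\[
  \mathrm{TCP}(D) = \frac{1}{1 + \left( D_{50}/D \right)^{4\gamma_{50}}},
  \qquad
  \overline{\mathrm{TCP}}(D) = \int \mathrm{TCP}(D;\alpha)\, g(\alpha)\, \mathrm{d}\alpha ,
\]
```

    where g(α) is the interpatient distribution of radiosensitivity; the exponent 4γ50 follows from requiring the normalized slope of the curve at D50 to equal γ50.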

  9. Applicability of genetic algorithms to parameter estimation of economic models

    Directory of Open Access Journals (Sweden)

    Marcel Ševela

    2004-01-01

    Full Text Available The paper concentrates on the capability of genetic algorithms for parameter estimation of non-linear economic models. In the paper we test the ability of genetic algorithms to estimate the parameters of a demand function for durable goods and simultaneously search for the parameters of the genetic algorithm that lead to maximum effectiveness of the computation. Genetic algorithms combine deterministic iterative computation methods with stochastic methods. In the genetic algorithm approach each possible solution is represented by one individual, whose life, and the lives of all generations of individuals, runs under a few parameters of the genetic algorithm. Our simulations resulted in an optimal mutation rate of 15% of all bits in chromosomes and an optimal elitism rate of 20%. We could not determine an optimal generation size, because it shows a positive correlation with the effectiveness of the genetic algorithm over the whole range under investigation, although its impact is decreasing. The genetic algorithm used was most sensitive to the mutation rate, and less so to the generation size. The sensitivity to the elitism rate was not as strong.
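    A self-contained sketch in that spirit (the demand model, data, and most GA settings are invented for illustration; the 15% mutation and 20% elitism rates echo the abstract's findings):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical demand model for a durable good: q = a * price**(-b)
prices = np.linspace(1.0, 10.0, 30)
observed = 50.0 * prices**(-1.3) + rng.normal(0, 0.5, prices.size)

def fitness(pop):
    preds = pop[:, :1] * prices ** (-pop[:, 1:2])         # one row per individual
    return -((preds - observed) ** 2).sum(axis=1)         # higher is better

pop = rng.uniform([1.0, 0.1], [100.0, 3.0], size=(60, 2))
for _ in range(200):
    f = fitness(pop)
    elite = pop[np.argsort(f)[-12:]]                      # 20% elitism
    parents = elite[rng.integers(0, len(elite), (48, 2))]
    children = parents.mean(axis=1)                       # blend crossover
    mutate = rng.random(children.shape) < 0.15            # 15% mutation rate
    children[mutate] += rng.normal(0, 0.2, children.shape)[mutate]
    pop = np.vstack([elite, children])

print("estimated (a, b):", pop[np.argmax(fitness(pop))])
```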

  10. Parameter Estimation as a Problem in Statistical Thermodynamics.

    Science.gov (United States)

    Earle, Keith A; Schneider, David J

    2011-03-14

    In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one-particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.

  11. Non-Cooperative Target Imaging and Parameter Estimation with Narrowband Radar Echoes

    Directory of Open Access Journals (Sweden)

    Chun-mao Yeh

    2016-01-01

    Full Text Available This study focuses on rotating target imaging and parameter estimation with narrowband radar echoes, which is essential for radar target recognition. First, a two-dimensional (2D) imaging model with narrowband echoes is established in this paper, and two images of the target are formed on the velocity-acceleration plane at two neighboring coherent processing intervals (CPIs). Then, the rotating velocity (RV) is proposed to be estimated by utilizing the relationship between the positions of the scattering centers among the two images. Finally, the target image is rescaled to the range-cross-range plane with the estimated rotational parameter. The validity of the proposed approach is confirmed using numerical simulations.

  12. Parameter Estimation of a Closed Loop Coupled Tank Time Varying System using Recursive Methods

    International Nuclear Information System (INIS)

    Basir, Siti Nora; Yussof, Hanafiah; Shamsuddin, Syamimi; Selamat, Hazlina; Zahari, Nur Ismarrubie

    2013-01-01

    This project investigates the direct identification of a closed-loop plant using a discrete-time approach. The use of Recursive Least Squares (RLS), Recursive Instrumental Variable (RIV) and Recursive Instrumental Variable with Centre-Of-Triangle (RIV + COT) in the parameter estimation of a closed-loop time-varying system has been considered. The algorithms were applied to a coupled tank system that employs a covariance resetting technique, where the times at which parameter changes occur are unknown. The performances of all the parameter estimation methods, RLS, RIV and RIV + COT, were compared. The estimation of the system whose output was corrupted with white and coloured noise was investigated. The covariance resetting technique executed successfully when the parameters changed. RIV + COT gives better estimates than RLS and RIV in terms of convergence and maximum overshoot.
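    A minimal sketch of RLS with covariance resetting (hypothetical class and jump-detection heuristic; the RIV and centre-of-triangle variants are not shown):

```python
import numpy as np

class RLSWithResetting:
    """Recursive least squares whose covariance is reset on suspected parameter jumps."""

    def __init__(self, n, lam=0.99, p0=1e3, threshold=5.0):
        self.theta = np.zeros(n)       # parameter estimate
        self.P = p0 * np.eye(n)        # estimate covariance
        self.lam, self.p0, self.threshold = lam, p0, threshold

    def update(self, phi, y):
        """phi: regressor vector, y: measured output."""
        err = y - phi @ self.theta
        denom = self.lam + phi @ self.P @ phi
        if abs(err) / np.sqrt(denom) > self.threshold:
            # Large normalized innovation: reset the covariance so the
            # estimator re-adapts quickly to the changed parameters
            self.P = self.p0 * np.eye(len(self.theta))
            denom = self.lam + phi @ self.P @ phi
        K = self.P @ phi / denom
        self.theta = self.theta + K * err
        self.P = (self.P - np.outer(K, phi @ self.P)) / self.lam
        return self.theta
```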

  13. When celibacy matters: incorporating non-breeders improves demographic parameter estimates.

    Science.gov (United States)

    Pardo, Deborah; Weimerskirch, Henri; Barbraud, Christophe

    2013-01-01

    In long-lived species only a fraction of a population breeds at a given time. Non-breeders can represent more than half of adult individuals, calling into doubt the relevance of estimating demographic parameters from breeders alone. Here we demonstrate the importance of considering observable non-breeders to estimate reliable demographic traits: survival, return, breeding, hatching and fledging probabilities. We study the long-lived, quasi-biennially breeding wandering albatross (Diomedea exulans). In this species, the breeding cycle lasts almost a year, and birds that succeed in a given year tend to skip the next breeding occasion while birds that fail tend to breed again the following year. Most non-breeders remain unobservable at sea, but a substantial number of observable non-breeders (ONB) was still identified on breeding sites. Using multi-state capture-mark-recapture analyses, we used several measures to compare the performance of demographic estimates between models incorporating or ignoring ONB: bias (difference in mean), precision (difference in standard deviation) and accuracy (differences in both mean and standard deviation). Our results highlight that ignoring ONB leads to bias and loss of accuracy in breeding probability and survival estimates. These effects are even stronger when studied in an age-dependent framework. Biases in breeding probabilities and survival increased with age, leading to overestimation of survival at old age and thus actuarial senescence, and underestimation of reproductive senescence. We believe our study sheds new light on the difficulties of estimating demographic parameters in species/taxa where a significant part of the population does not breed every year. Taking into account ONB appeared important to improve demographic parameter estimates, models of population dynamics and evolutionary conclusions regarding senescence within and across taxa.

  14. Methodology and Applications in Non-linear Model-based Geostatistics

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund

    that are approximately Gaussian. Parameter estimation and prediction for the transformed Gaussian model is studied. In some cases a transformation cannot possibly render the data Gaussian. A methodology for analysing such data was introduced by Diggle, Tawn and Moyeed (1998): The generalised linear spatial model...... priors for Bayesian inference is discussed. Procedures for parameter estimation and prediction are studied. Theoretical properties of Markov chain Monte Carlo algorithms are investigated, and different algorithms are compared. In addition, the thesis contains a manual for an R-package, geoRglmm, which...

  15. Estimation of genetic parameters for body weights of Kurdish sheep ...

    African Journals Online (AJOL)

    Genetic parameters and (co)variance components were estimated by the restricted maximum likelihood (REML) procedure, using animal models 1 to 6, for body weight at birth and at three, six, nine and 12 months of age in a Kurdish sheep flock. Direct and maternal breeding values were estimated using the best ...

  16. SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.

    Science.gov (United States)

    Zi, Zhike

    2011-04-01

    Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.

  17. Sinusoidal Parameter Estimation Using Quadratic Interpolation around Power-Scaled Magnitude Spectrum Peaks

    Directory of Open Access Journals (Sweden)

    Kurt James Werner

    2016-10-01

    Full Text Available The magnitude of the Discrete Fourier Transform (DFT) of a discrete-time signal has a limited frequency definition. Quadratic interpolation over the three DFT samples surrounding magnitude peaks improves the estimation of parameters (frequency and amplitude) of resolved sinusoids beyond that limit. Interpolating on a rescaled magnitude spectrum using a logarithmic scale has been shown to improve those estimates. In this article, we show how to heuristically tune a power scaling parameter to outperform linear and logarithmic scaling at an equivalent computational cost. Although this power scaling factor is computed heuristically rather than analytically, it is shown to depend in a structured way on window parameters. Invariance properties of this family of estimators are studied and the existence of a bias due to noise is shown. Comparing to two state-of-the-art estimators, we show that an optimized power scaling has a lower systematic bias and lower mean-squared error in noisy conditions for ten out of twelve common windowing functions.
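    The estimator itself fits in a few lines of numpy (a sketch; the exponent p below is illustrative, not the paper's tuned, window-dependent value, and the returned amplitude is the interpolated windowed-spectrum peak rather than a calibrated sinusoid amplitude):

```python
import numpy as np

def power_scaled_qifft(x, fs, p=0.23):
    """Frequency estimate via quadratic interpolation of a power-scaled spectrum."""
    w = np.hanning(len(x))
    X = np.abs(np.fft.rfft(x * w))
    k = np.argmax(X[1:-1]) + 1                    # interior peak bin
    a, b, c = X[k - 1]**p, X[k]**p, X[k + 1]**p   # power-scaled neighbours
    delta = 0.5 * (a - c) / (a - 2 * b + c)       # parabola vertex offset, in bins
    freq = (k + delta) * fs / len(x)
    peak = b - 0.25 * (a - c) * delta             # interpolated scaled peak height
    return freq, peak ** (1.0 / p)

fs = 8000.0
t = np.arange(1024) / fs
sig = 0.8 * np.sin(2 * np.pi * 1234.5 * t)
print(power_scaled_qifft(sig, fs))                # frequency close to 1234.5 Hz
```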

  18. REML estimates of genetic parameters of sexual dimorphism for ...

    Indian Academy of Sciences (India)

    Administrator

    Full and half sibs were distinguished, in contrast to usual isofemale studies in which animals ... studies. Thus, the aim of this study was to estimate genetic parameters of sexual dimorphism in isofemale lines using ..... Muscovy ducks. Genet.

  19. A parameter estimation for DC servo motor by using optimization process

    International Nuclear Information System (INIS)

    Arjoni Amir

    2010-01-01

    Modeling and simulation of DC servo motor parameters have been carried out using Matlab Simulink software. The objective of the DC servo motor parameter estimation is to obtain parameter values (B, La, Ra, Km, J) that are significant and can be used in the actuation process of control systems. In control system analysis, the DC servo motor is expressed by a transfer function equation so that it can be analyzed more quickly as an actuator component. To obtain the model parameters and initial conditions of the DC servo motor, modeling and simulation are then carried out in which the DC servo motor is combined with other components. Preliminary values of the DC servo motor parameters, used as starting estimates, are obtained from the motor's factory data. These initial parameter values are applied in an optimization process using a nonlinear least squares algorithm that minimizes the cost function value, so that significant DC servo motor parameter values are obtained. The optimization process yields the DC servo motor parameter values B = 0.039881, J = 1.2608e-007, Km = 0.069648, La = 2.3242e-006 and Ra = 1.8837. (author)
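    The optimization step can be sketched with scipy (the motor model is the standard armature-circuit/mechanical pair; the 'true' and initial values below are illustrative, chosen to keep the ODE non-stiff, and do not reproduce the paper's data):

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def motor(x, t, B, La, Ra, Km, J, V=12.0):
    """DC servo motor states: armature current i and shaft speed w."""
    i, w = x
    return [(V - Ra * i - Km * w) / La, (Km * i - B * w) / J]

t = np.linspace(0.0, 0.5, 200)
true = (0.04, 2.3e-3, 1.88, 0.07, 1.3e-4)         # illustrative (B, La, Ra, Km, J)
meas = odeint(motor, [0.0, 0.0], t, args=true)[:, 1]
meas += np.random.default_rng(0).normal(0, 0.2, meas.size)

def residual(params):
    return odeint(motor, [0.0, 0.0], t, args=tuple(params))[:, 1] - meas

guess = (0.03, 2e-3, 1.5, 0.05, 1e-4)             # factory-data-style initial estimates
fit = least_squares(residual, guess, bounds=(0.0, np.inf))
# Note: a single step response may not uniquely identify all five parameters,
# so the fitted values can differ from 'true' while still matching the response
print("estimated (B, La, Ra, Km, J):", fit.x)
```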

  20. Estimating parameters for probabilistic linkage of privacy-preserved datasets.

    Science.gov (United States)

    Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H

    2017-07-10

    Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20% error. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher

  1. Forest parameter estimation using polarimetric SAR interferometry techniques at low frequencies

    International Nuclear Information System (INIS)

    Lee, Seung-Kuk

    2013-01-01

    Polarimetric Synthetic Aperture Radar Interferometry (Pol-InSAR) is an active radar remote sensing technique based on the coherent combination of both polarimetric and interferometric observables. The Pol-InSAR technique provided a step forward in quantitative forest parameter estimation. In the last decade, airborne SAR experiments evaluated the potential of Pol-InSAR techniques to estimate forest parameters (e.g., the forest height and biomass) with high accuracy over various local forest test sites. This dissertation addresses the actual status, potentials and limitations of Pol-InSAR inversion techniques for 3-D forest parameter estimations on a global scale using lower frequencies such as L- and P-band. The multi-baseline Pol-InSAR inversion technique is applied to optimize the performance with respect to the actual level of the vertical wave number and to mitigate the impact of temporal decorrelation on the Pol-InSAR forest parameter inversion. Temporal decorrelation is a critical issue for successful Pol-InSAR inversion in the case of repeat-pass Pol-InSAR data, as provided by conventional satellites or airborne SAR systems. Despite the limiting impact of temporal decorrelation in Pol-InSAR inversion, it remains a poorly understood factor in forest height inversion. Therefore, the main goal of this dissertation is to provide a quantitative estimation of the temporal decorrelation effects by using multi-baseline Pol-InSAR data. A new approach to quantify the different temporal decorrelation components is proposed and discussed. Temporal decorrelation coefficients are estimated for temporal baselines ranging from 10 minutes to 54 days and are converted to height inversion errors. In addition, the potential of Pol-InSAR forest parameter estimation techniques is addressed and projected onto future spaceborne system configurations and mission scenarios (Tandem-L and BIOMASS satellite missions at L- and P-band). The impact of the system parameters (e.g., bandwidth

  2. A new method to estimate parameters of linear compartmental models using artificial neural networks

    International Nuclear Information System (INIS)

    Gambhir, Sanjiv S.; Keppenne, Christian L.; Phelps, Michael E.; Banerjee, Pranab K.

    1998-01-01

    At present, the preferred tool for parameter estimation in compartmental analysis is an iterative procedure: weighted nonlinear regression. For a large number of applications, observed data can be fitted to sums of exponentials whose parameters are directly related to the rate constants/coefficients of the compartmental models. Since weighted nonlinear regression often has to be repeated for many different data sets, the process of fitting data from compartmental systems can be very time consuming. Furthermore the minimization routine often converges to a local (as opposed to global) minimum. In this paper, we examine the possibility of using artificial neural networks instead of weighted nonlinear regression in order to estimate model parameters. We train simple feed-forward neural networks to produce as outputs the parameter values of a given model when kinetic data are fed to the networks' input layer. The artificial neural networks produce unbiased estimates and are orders of magnitude faster than regression algorithms. At noise levels typical of many real applications, the neural networks are found to produce lower variance estimates than weighted nonlinear regression in the estimation of parameters from mono- and biexponential models. These results are primarily due to the inability of weighted nonlinear regression to converge. These results establish that artificial neural networks are powerful tools for estimating parameters for simple compartmental models. (author)
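    The approach can be sketched with scikit-learn on a synthetic biexponential model (all settings are illustrative; the paper's network architecture and noise model are not reproduced). Once trained on simulated curves, the network returns parameter estimates in a single forward pass, with no per-dataset iteration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)

def curve(params):
    """Biexponential kinetics: A1*exp(-k1*t) + A2*exp(-k2*t)."""
    A1, k1, A2, k2 = params
    return A1 * np.exp(-k1 * t) + A2 * np.exp(-k2 * t)

# Disjoint rate ranges (k1 fast, k2 slow) keep the parameters identifiable
params = rng.uniform([0.5, 0.5, 0.5, 0.01], [2.0, 2.0, 2.0, 0.4], (5000, 4))
X = np.array([curve(p) for p in params])
X += rng.normal(0, 0.01, X.shape)                  # measurement noise

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
net.fit(X[:4500], params[:4500])                   # learn curves -> parameters

pred = net.predict(X[4500:])
print("mean abs error per parameter:", np.abs(pred - params[4500:]).mean(axis=0))
```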

  3. Mammalian Cell Culture Process for Monoclonal Antibody Production: Nonlinear Modelling and Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Dan Selişteanu

    2015-01-01

    Full Text Available Monoclonal antibodies (mAbs) are at present one of the fastest growing products of the pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAb production processes is predominantly based on empirical knowledge, the improvements being achieved by using trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. By using a dynamical model of this kind of process, an optimization-based technique for the estimation of kinetic parameters in the model of a mammalian cell culture process is developed. The estimation is achieved by minimizing an error function with a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed in this work using a particular model of mammalian cell culture as a case study, but it is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies.
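    A minimal PSO of the kind used to minimize such an error function looks as follows (a generic sketch with illustrative inertia and acceleration coefficients; the cost function below is a stand-in, not the cell culture model):

```python
import numpy as np

def pso_minimize(cost, lo, hi, n_particles=40, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Particle swarm minimization of a scalar cost over box bounds [lo, hi]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))   # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pcost)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        gbest = pbest[np.argmin(pcost)]
    return gbest, pcost.min()

# Stand-in error function with known minimum at (0.3, 1.2)
best, err = pso_minimize(lambda p: ((p - np.array([0.3, 1.2]))**2).sum(), [0, 0], [2, 2])
print(best, err)
```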

  4. Parameter estimation techniques for LTP system identification

    Science.gov (United States)

    Nofrarias Serra, Miquel

    LISA Pathfinder (LPF) is the precursor mission of LISA (Laser Interferometer Space Antenna) and the first step towards gravitational wave detection in space. The main instrument onboard the mission is the LTP (LISA Technology Package), whose scientific goal is to test LISA's drag-free control loop by reaching a differential acceleration noise level between two masses in geodesic motion of 3 × 10⁻¹⁴ m s⁻²/√Hz in the milliHertz band. The mission is not only challenging in terms of technology readiness but also in terms of data analysis. As with any gravitational wave detector, attaining the instrument performance goals will require an extensive noise hunting campaign to measure all contributions with high accuracy. But, opposite to on-ground experiments, LTP characterisation will only be possible by setting parameters via telecommands and getting a selected amount of information through the available telemetry downlink. These two conditions, high accuracy and high reliability, are the main restrictions that the LTP data analysis must overcome. A dedicated object-oriented Matlab Toolbox (LTPDA) has been set up by the LTP analysis team for this purpose. Among the different toolbox methods, an essential part for the mission are the parameter estimation tools that will be used for system identification during operations: Linear Least Squares, Non-linear Least Squares and Markov Chain Monte Carlo methods have been implemented as LTPDA methods. The data analysis team has been testing those methods with a series of mock data exercises with the following objectives: to cross-check parameter estimation methods and compare the achievable accuracy for each of them, and to develop the best strategies to describe the physics underlying a complex controlled experiment such as the LTP. In this contribution we describe how these methods were tested with simulated LTP-like data to recover the parameters of the model and we report on the latest results of these mock data exercises.

  5. Application of Novel Lateral Tire Force Sensors to Vehicle Parameter Estimation of Electric Vehicles.

    Science.gov (United States)

    Nam, Kanghyun

    2015-11-11

    This article presents methods for estimating lateral vehicle velocity and tire cornering stiffness, which are key parameters in vehicle dynamics control, using lateral tire force measurements. Lateral tire forces acting on each tire are directly measured by load-sensing hub bearings that were invented and further developed by NSK Ltd. For estimating the lateral vehicle velocity, tire force models considering lateral load transfer effects are used, and a recursive least squares algorithm is adapted to identify the lateral vehicle velocity as an unknown parameter. Using the estimated lateral vehicle velocity, tire cornering stiffness, which is an important tire parameter dominating the vehicle's cornering responses, is estimated. For the practical implementation, a cornering stiffness estimation algorithm based on a simple bicycle model is developed and discussed. Finally, the proposed estimation algorithms were evaluated using experimental test data.
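
    The recursive least-squares step referred to above has a standard form: a gain, a prediction-error correction, and a covariance update with a forgetting factor. The following is a generic sketch of that update applied to a scalar stand-in parameter; the article's actual tire-force regressors and measurements are not reproduced, so the model y = phi * theta + noise and all values are assumed.

        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.98):
            """One recursive least-squares step with forgetting factor lam."""
            phi = phi.reshape(-1, 1)
            K = P @ phi / (lam + phi.T @ P @ phi)     # gain vector
            theta = theta + (K * (y - phi.T @ theta)).ravel()
            P = (P - K @ phi.T @ P) / lam             # covariance update
            return theta, P

        rng = np.random.default_rng(1)
        true_theta = np.array([3.0])                  # assumed unknown parameter
        theta, P = np.zeros(1), np.eye(1) * 100.0     # uninformative start
        for _ in range(200):
            phi = rng.standard_normal(1)              # stand-in regressor
            y = float(phi @ true_theta) + 0.05 * rng.standard_normal()
            theta, P = rls_update(theta, P, phi, y)
        print(theta)                                  # converges near 3.0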

  6. Parameter estimation in a simple stochastic differential equation for phytoplankton modelling

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Madsen, Henrik; Carstensen, Jacob

    2011-01-01

    The use of stochastic differential equations (SDEs) for simulation of aquatic ecosystems has attracted increasing attention in recent years. The SDE setting also provides the opportunity for statistical estimation of ecosystem parameters. We present an estimation procedure, based on Kalman...

  7. Estimating Propensity Parameters using Google PageRank and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    David Murrugarra

    2016-11-01

    Full Text Available Stochastic Boolean networks, or more generally, stochastic discrete networks, are an important class of computational models for molecular interaction networks. The stochasticity stems from the updating schedule. Standard updating schedules include the synchronous update, where all the nodes are updated at the same time, and the asynchronous update, where a random node is updated at each time step. The former produces a deterministic dynamics while the latter a stochastic dynamics. A more general stochastic setting considers propensity parameters for updating each node. Stochastic Discrete Dynamical Systems (SDDS) are a modeling framework that considers two propensity parameters for updating each node: one is used when the update has a positive impact on the variable, that is, when the update causes the variable to increase its value, and the other is used when the update has a negative impact, that is, when the update causes it to decrease its value. This framework offers additional features for simulations but also adds complexity to the estimation of the propensity parameters. This paper presents a method for estimating the propensity parameters for SDDS. The method is based on adding noise to the system using the Google PageRank approach to make the system ergodic and thus guarantee the existence of a stationary distribution. Then, with the use of a genetic algorithm, the propensity parameters are estimated. Approximation techniques that make the search algorithms efficient are also presented, and Matlab/Octave code to test the algorithms is available at http://www.ms.uky.edu/~dmu228/GeneticAlg/Code.html.
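
    The ergodicity device borrowed from PageRank amounts to mixing the network's transition matrix with a uniform jump, which guarantees a unique stationary distribution reachable by power iteration. A minimal sketch on an assumed toy 3-state chain (not the paper's SDDS model; the damping value 0.85 is PageRank's customary choice):

        import numpy as np

        def make_ergodic(T, alpha=0.85):
            """Mix a row-stochastic matrix T with a uniform jump."""
            n = T.shape[0]
            return alpha * T + (1.0 - alpha) * np.full((n, n), 1.0 / n)

        def stationary(G, tol=1e-12):
            """Power iteration for the stationary distribution of G."""
            pi = np.full(G.shape[0], 1.0 / G.shape[0])
            while True:
                new = pi @ G                          # row-stochastic convention
                if np.abs(new - pi).sum() < tol:
                    return new
                pi = new

        # Toy chain: states 0 and 1 are periodic, state 2 is absorbing,
        # so the raw chain has no unique limiting behaviour.
        T = np.array([[0.0, 1.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 0.0, 1.0]])
        print(stationary(make_ergodic(T)))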

  8. Estimation of solid earth tidal parameters and FCN with VLBI

    International Nuclear Information System (INIS)

    Krásná, H.

    2012-01-01

    Measurements of the space-geodetic technique VLBI (Very Long Baseline Interferometry) are influenced by a variety of processes which have to be modelled and entered as a priori information into the analysis of the space-geodetic data. The increasing accuracy of the VLBI measurements allows access to these parameters and provides possibilities to validate them directly from the measured data. The gravitational attraction of the Moon and the Sun causes deformation of the Earth's surface which can reach several decimetres in the radial direction during a day. The displacement is a function of the so-called Love and Shida numbers. Due to the present accuracy of the VLBI measurements, the parameters have to be specified as complex numbers, where the imaginary parts describe the anelasticity of the Earth's mantle. Moreover, it is necessary to distinguish between the single tides within the various frequency bands. In this thesis, complex Love and Shida numbers of twelve diurnal and five long-period tides included in the solid Earth tidal displacement modelling are estimated directly from 27 years of VLBI measurements (1984.0 - 2011.0). In this work, the period of the Free Core Nutation (FCN) is also estimated; it shows up in the frequency-dependent solid Earth tidal displacement as well as in the nutation model describing the motion of the Earth's axis in space. The FCN period in both models is treated as a single parameter and is estimated in a rigorous global adjustment of the VLBI data. The obtained value of -431.18 ± 0.10 sidereal days differs slightly from the conventional value of -431.39 sidereal days given in the IERS Conventions 2010. An empirical FCN model based on variable amplitude and phase is determined, whose parameters are estimated in yearly steps directly within VLBI global solutions. (author)

  9. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development.

    Science.gov (United States)

    Tøndel, Kristin; Niederer, Steven A; Land, Sander; Smith, Nicolas P

    2014-05-20

    Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input-output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model, are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of on

  10. ESTIMATION OF DISTANCES TO STARS WITH STELLAR PARAMETERS FROM LAMOST

    Energy Technology Data Exchange (ETDEWEB)

    Carlin, Jeffrey L.; Newberg, Heidi Jo [Department of Physics, Applied Physics and Astronomy, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States); Liu, Chao; Deng, Licai; Li, Guangwei; Luo, A-Li; Wu, Yue; Yang, Ming; Zhang, Haotong [Key Lab of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China); Beers, Timothy C. [Department of Physics and JINA: Joint Institute for Nuclear Astrophysics, University of Notre Dame, 225 Nieuwland Science Hall, Notre Dame, IN 46556 (United States); Chen, Li; Hou, Jinliang; Smith, Martin C. [Shanghai Astronomical Observatory, 80 Nandan Road, Shanghai 200030 (China); Guhathakurta, Puragra [UCO/Lick Observatory, Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Hou, Yonghui [Nanjing Institute of Astronomical Optics and Technology, National Astronomical Observatories, Chinese Academy of Sciences, Nanjing 210042 (China); Lépine, Sébastien [Department of Physics and Astronomy, Georgia State University, 25 Park Place, Suite 605, Atlanta, GA 30303 (United States); Yanny, Brian [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Zheng, Zheng, E-mail: jeffreylcarlin@gmail.com [Department of Physics and Astronomy, University of Utah, UT 84112 (United States)

    2015-07-15

    We present a method to estimate distances to stars with spectroscopically derived stellar parameters. The technique is a Bayesian approach with likelihood estimated via comparison of measured parameters to a grid of stellar isochrones, and returns a posterior probability density function for each star’s absolute magnitude. This technique is tailored specifically to data from the Large Sky Area Multi-object Fiber Spectroscopic Telescope (LAMOST) survey. Because LAMOST obtains roughly 3000 stellar spectra simultaneously within each ∼5° diameter “plate” that is observed, we can use the stellar parameters of the observed stars to account for the stellar luminosity function and target selection effects. This removes biasing assumptions about the underlying populations, both due to predictions of the luminosity function from stellar evolution modeling, and from Galactic models of stellar populations along each line of sight. Using calibration data of stars with known distances and stellar parameters, we show that our method recovers distances for most stars within ∼20%, but with some systematic overestimation of distances to halo giants. We apply our code to the LAMOST database, and show that the current precision of LAMOST stellar parameters permits measurements of distances with ∼40% error bars. This precision should improve as the LAMOST data pipelines continue to be refined.

  11. Patient-specific parameter estimation in single-ventricle lumped circulation models under uncertainty

    Science.gov (United States)

    Schiavazzi, Daniele E.; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L.

    2017-01-01

    Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient-specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single-ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify the uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and then apply the method to a cohort of four single-ventricle patients with Norwood physiology. PMID:27155892

  12. Parameter estimation of compact binaries using the inspiral and ringdown waveforms

    International Nuclear Information System (INIS)

    Luna, Manuel; Sintes, Alicia M

    2006-01-01

    We analyse the problem of parameter estimation for compact binary systems that could be detected by ground-based gravitational wave detectors. So far, this problem has only been dealt with for the inspiral and the ringdown phases separately. In this paper, we combine the information from both signals, and we study the improvement in parameter estimation, at a fixed signal-to-noise ratio, by including the ringdown signal without making any assumption on the merger phase. The study is performed for both initial and advanced LIGO and VIRGO detectors

  13. Low Complexity Parameter Estimation For Off-the-Grid Targets

    KAUST Repository

    Jardak, Seifallah

    2015-10-05

    In multiple-input multiple-output radar, to estimate the reflection coefficient, spatial location, and Doppler shift of a target, a derived cost function is usually evaluated and optimized over a grid of points. The performance of such algorithms is directly affected by the size of the grid: increasing the number of points will enhance the resolution of the algorithm but exponentially increase its complexity. In this work, to estimate the parameters of a target, a reduced complexity super resolution algorithm is proposed. For off-the-grid targets, it uses a low order two dimensional fast Fourier transform to determine a suboptimal solution and then an iterative algorithm to jointly estimate the spatial location and Doppler shift. Simulation results show that the mean square estimation error of the proposed estimators achieves the Cramér-Rao lower bound. © 2015 IEEE.

  14. Effects of censoring on parameter estimates and power in genetic modeling

    NARCIS (Netherlands)

    Derks, Eske M.; Dolan, Conor V.; Boomsma, Dorret I.

    2004-01-01

    Genetic and environmental influences on variance in phenotypic traits may be estimated with normal theory Maximum Likelihood (ML). However, when the assumption of multivariate normality is not met, this method may result in biased parameter estimates and incorrect likelihood ratio tests. We

  15. Effects of censoring on parameter estimates and power in genetic modeling.

    NARCIS (Netherlands)

    Derks, E.M.; Dolan, C.V.; Boomsma, D.I.

    2004-01-01

    Genetic and environmental influences on variance in phenotypic traits may be estimated with normal theory Maximum Likelihood (ML). However, when the assumption of multivariate normality is not met, this method may result in biased parameter estimates and incorrect likelihood ratio tests. We

  16. Improved best estimate plus uncertainty methodology, including advanced validation concepts, to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, C.; Williams, B.; Hemez, F.; Atamturktur, S.H.; McClure, P.

    2011-01-01

    Research highlights: → The best estimate plus uncertainty (BEPU) methodology is one option in the licensing of nuclear reactors. → The challenges in extending the BEPU method to fuel qualification for an advanced reactor fuel are primarily driven by schedule, the need for data, and the sufficiency of the data. → In this paper we develop an extended BEPU methodology that can potentially be used to address these new challenges in the design and licensing of advanced nuclear reactors. → The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. → The methodology includes a formalism to quantify an adequate level of validation (predictive maturity) with respect to existing data, so that required new testing can be minimized, saving cost by demonstrating that further testing will not enhance the quality of the predictive tools. - Abstract: Many evolving nuclear energy technologies use advanced predictive multiscale, multiphysics modeling and simulation (M&S) capabilities to reduce the cost and schedule of design and licensing. Historically, experiments served as the primary tool for the design and understanding of nuclear system behavior, while M&S played the subordinate role of supporting experiments. In the new era of multiscale, multiphysics computational-based technology development, this role has been reversed. Experiments will still be needed, but they will be performed at different scales to calibrate and validate the models, leading to predictive simulations for design and licensing. Minimizing the required number of validation experiments produces cost and time savings. The use of multiscale, multiphysics models introduces challenges in validating these predictive tools - traditional methodologies will have to be modified to address these challenges. This paper gives the basic aspects of a methodology that can potentially be used to address these new challenges in

  17. Improved Shape Parameter Estimation in Pareto Distributed Clutter with Neural Networks

    Directory of Open Access Journals (Sweden)

    José Raúl Machado-Fernández

    2016-12-01

    Full Text Available The main problem faced by naval radars is the elimination of clutter, a distorting signal that appears mixed with target reflections. Recently, the Pareto distribution has been related to sea clutter measurements, suggesting that it may provide a better fit than other traditional distributions. The authors propose a new method for estimating the Pareto shape parameter based on artificial neural networks. The solution achieves a precise estimation of the parameter at a low computational cost, outperforming the classic method based on Maximum Likelihood Estimates (MLE). The presented scheme contributes to the development of the NATE detector for Pareto clutter, which uses the knowledge of clutter statistics to improve the stability of the detection, among other applications.
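
    The classic MLE baseline referred to above has a closed form for Pareto (Type I) data when the scale x_m is known: alpha_hat = n / sum_i ln(x_i / x_m). A minimal sketch with assumed scale and shape values (sea-clutter work often uses generalized Pareto variants, so this is only the textbook case):

        import numpy as np

        rng = np.random.default_rng(2)
        x_m, alpha = 1.0, 3.0                        # assumed scale, true shape
        u = 1.0 - rng.random(10_000)                 # uniform on (0, 1]
        x = x_m * u ** (-1.0 / alpha)                # inverse-CDF Pareto samples
        alpha_hat = x.size / np.log(x / x_m).sum()   # closed-form MLE
        print(alpha_hat)                             # close to 3.0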

  18. Fast Estimation Method of Space-Time Two-Dimensional Positioning Parameters Based on Hadamard Product

    Directory of Open Access Journals (Sweden)

    Haiwen Li

    2018-01-01

    Full Text Available The estimation speed of positioning parameters determines the effectiveness of a positioning system. The time of arrival (TOA) and direction of arrival (DOA) parameters can be estimated by the space-time two-dimensional multiple signal classification (2D-MUSIC) algorithm for an array antenna. However, this algorithm needs much time to complete the two-dimensional pseudo-spectral peak search, which makes it difficult to apply in practice. To solve this problem, a fast estimation method of space-time two-dimensional positioning parameters based on the Hadamard product is proposed for an orthogonal frequency division multiplexing (OFDM) system, and the Cramér-Rao bound (CRB) is also presented. Firstly, according to the channel frequency domain response vector of each array, the channel frequency domain estimation vector is constructed using the Hadamard product form containing location information. Then, the autocorrelation matrix of the channel response vector for the extended array element in the frequency domain and the noise subspace are calculated successively. Finally, by combining the closed-form solution and parameter pairing, the fast joint estimation of time delay and arrival direction is accomplished. The theoretical analysis and simulation results show that the proposed algorithm can significantly reduce the computational complexity and guarantee an estimation accuracy that is not only better than the estimating signal parameters via rotational invariance techniques (ESPRIT) algorithm and the 2D matrix pencil (MP) algorithm but also close to the 2D-MUSIC algorithm. Moreover, the proposed algorithm also has a certain adaptability to multipath environments and effectively improves the ability of fast acquisition of location parameters.

  19. Blind Compressed Sensing Parameter Estimation of Non-cooperative Frequency Hopping Signal

    Directory of Open Access Journals (Sweden)

    Chen Ying

    2016-10-01

    Full Text Available To overcome the disadvantages of a non-cooperative frequency hopping communication system, such as a high sampling rate and inadequate prior information, parameter estimation based on Blind Compressed Sensing (BCS) is proposed. The signal is precisely reconstructed by the alternating iteration of sparse coding and basis updating, and the hopping frequencies are directly estimated based on the results. Compared with conventional compressive sensing, blind compressed sensing does not require prior information about the frequency hopping signals; hence, it offers an effective solution to the inadequate prior information problem. In the proposed method, the signal is first modeled and then reconstructed by Orthonormal Block Diagonal Blind Compressed Sensing (OBD-BCS), and the hopping frequencies and hop period are finally estimated. The simulation results suggest that the proposed method can reconstruct and estimate the parameters of non-cooperative frequency hopping signals at a low signal-to-noise ratio.

  20. Parameter Estimation in Probit Model for Multivariate Multinomial Response Using SMLE

    Directory of Open Access Journals (Sweden)

    Jaka Nugraha

    2012-02-01

    Full Text Available In the research fields of transportation, market research and politics, multinomial multivariate responses are often involved. In this paper, we discuss the modeling of multivariate multinomial responses using the probit model. The parameters are estimated using Maximum Likelihood Estimation (MLE) based on the GHK simulation, a method known as Simulated Maximum Likelihood Estimation (SMLE). The likelihood function of the probit model contains probability values that must be resolved by simulation. Using the GHK simulation algorithm, the estimator equations are obtained for the parameters in the probit model. Keywords: probit model, Newton-Raphson iteration, GHK simulator, MLE, simulated log-likelihood

  1. Chloramine demand estimation using surrogate chemical and microbiological parameters.

    Science.gov (United States)

    Moradi, Sina; Liu, Sanly; Chow, Christopher W K; van Leeuwen, John; Cook, David; Drikas, Mary; Amal, Rose

    2017-07-01

    A model is developed to enable estimation of chloramine demand in full-scale drinking water supplies based on chemical and microbiological factors that affect the chloramine decay rate, via a nonlinear regression analysis method. The model is based on the organic character (specific ultraviolet absorbance (SUVA)) of the water samples and a laboratory measure of the microbiological (Fm) decay of chloramine. The applicability of the model for estimation of chloramine residual (and hence chloramine demand) was tested on several waters from different water treatment plants in Australia through statistical tests comparing the experimental and predicted data. Results showed that the model was able to simulate and estimate chloramine demand at various times in real drinking water systems. To elucidate the loss of chloramine over the wide variation of water quality used in this study, the model incorporates both the fast and slow chloramine decay pathways. The significance of the estimated fast and slow decay rate constants as the kinetic parameters of the model for three water sources in Australia is discussed. It was found that for the same water source, the kinetic parameters remain the same. This modelling approach has the potential to be used by water treatment operators as a decision support tool to manage chloramine disinfection.
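
    The fast-plus-slow pathway structure mentioned above is commonly written as a sum of two first-order decays. A minimal sketch of fitting such a form by nonlinear regression, with an assumed two-exponential residual expression and synthetic data (the paper's full model also brings in SUVA and the Fm measure, which are not reproduced here):

        import numpy as np
        from scipy.optimize import curve_fit

        def residual(t, C0, f, k_fast, k_slow):
            """Chloramine residual with fast and slow first-order pathways."""
            return C0 * (f * np.exp(-k_fast * t) + (1 - f) * np.exp(-k_slow * t))

        t = np.linspace(0, 120, 25)                  # hours (illustrative)
        rng = np.random.default_rng(3)
        obs = residual(t, 2.0, 0.3, 0.15, 0.005) + 0.02 * rng.standard_normal(t.size)

        popt, _ = curve_fit(residual, t, obs, p0=[2.0, 0.5, 0.1, 0.01],
                            bounds=([0, 0, 0, 0], [5, 1, 1, 0.1]))
        print(dict(zip(["C0", "f", "k_fast", "k_slow"], popt)))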

  2. Penalized Nonlinear Least Squares Estimation of Time-Varying Parameters in Ordinary Differential Equations

    KAUST Repository

    Cao, Jiguo; Huang, Jianhua Z.; Wu, Hulin

    2012-01-01

    Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of nonlinear least squares in which penalized splines are used to model the functional parameters and the ODE solutions are also approximated using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function, which is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate an HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of functional parameters than the existing two-step local polynomial method, which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online.

  3. Application of Novel Lateral Tire Force Sensors to Vehicle Parameter Estimation of Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Kanghyun Nam

    2015-11-01

    Full Text Available This article presents methods for estimating lateral vehicle velocity and tire cornering stiffness, which are key parameters in vehicle dynamics control, using lateral tire force measurements. Lateral tire forces acting on each tire are directly measured by load-sensing hub bearings that were invented and further developed by NSK Ltd. For estimating the lateral vehicle velocity, tire force models considering lateral load transfer effects are used, and a recursive least squares algorithm is adapted to identify the lateral vehicle velocity as an unknown parameter. Using the estimated lateral vehicle velocity, tire cornering stiffness, which is an important tire parameter dominating the vehicle’s cornering responses, is estimated. For the practical implementation, a cornering stiffness estimation algorithm based on a simple bicycle model is developed and discussed. Finally, the proposed estimation algorithms were evaluated using experimental test data.

  4. Developments in Sensitivity Methodologies and the Validation of Reactor Physics Calculations

    Directory of Open Access Journals (Sweden)

    Giuseppe Palmiotti

    2012-01-01

    Full Text Available Sensitivity methodologies have been a remarkable success story when adopted in the reactor physics field. Sensitivity coefficients can be used for different objectives such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluation of the representativity of an experiment with respect to a reference design configuration. A review of the methods used is provided, and several examples illustrate the success of the methodology in reactor physics. A new application, the improvement of basic nuclear parameters using integral experiments, is also described.

  5. Development of probabilistic assessment methodology for geologic disposal of radioactive wastes

    International Nuclear Information System (INIS)

    Kimura, H.; Takahashi, T.

    1998-01-01

    A probabilistic assessment methodology is essential to evaluate the uncertainties of long-term radiological consequences associated with the geologic disposal of radioactive wastes. We have developed a probabilistic assessment methodology to estimate the influence of parameter uncertainties/variabilities. The exposure scenario considered here is based on groundwater migration. The computer code system GSRW-PSA thus developed is based on a non-site-specific model and consists of a set of sub-modules for sampling model parameters, calculating the release of radionuclides from engineered barriers, calculating the transport of radionuclides through the geosphere, calculating radiation exposures of the public, and calculating the statistical values relating to the uncertainties and sensitivities. The results of uncertainty analyses for α-nuclides quantitatively indicate that the natural uranium (238U) concentration is suitable as an alternative safety indicator for long-lived radioactive waste disposal, because the estimated range of individual dose equivalent due to the 238U decay chain is narrower than that due to the other decay chain considered (the 237Np decay chain). Detailed international discussion is needed on the PDFs of model parameters and on the PSA methodology to evaluate the uncertainties due to conceptual models and scenarios. (author)

  6. Utility Estimation for Pediatric Vesicoureteral Reflux: Methodological Considerations Using an Online Survey Platform.

    Science.gov (United States)

    Tejwani, Rohit; Wang, Hsin-Hsiao S; Lloyd, Jessica C; Kokorowski, Paul J; Nelson, Caleb P; Routh, Jonathan C

    2017-03-01

    The advent of online task distribution has opened a new avenue for efficiently gathering the community perspectives needed for utility estimation. Methodological consensus for estimating pediatric utilities is lacking, with disagreement over whom to sample, what perspective to use (patient vs parent) and whether instrument-induced anchoring bias is significant. We evaluated which methodological factors potentially impact utility estimates for vesicoureteral reflux. Cross-sectional surveys using a time trade-off instrument were conducted via the Amazon Mechanical Turk® (https://www.mturk.com) online interface. Respondents were randomized to answer questions from child, parent or dyad perspectives on the utility of a vesicoureteral reflux health state and 1 of 3 "warm-up" scenarios (paralysis, common cold, none) before a vesicoureteral reflux scenario. Utility estimates and potential predictors were fitted to a generalized linear model to determine which factors most impacted utilities. A total of 1,627 responses were obtained. Mean respondent age was 34.9 years. Of the respondents 48% were female, 38% were married and 44% had children. Utility values were uninfluenced by child/personal vesicoureteral reflux/urinary tract infection history, income or race. Utilities were affected by perspective and were higher in the child group (34% lower in parent vs child, p …) … pediatric conditions.

  7. Facial motion parameter estimation and error criteria in model-based image coding

    Science.gov (United States)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

    Model-based image coding has been given extensive attention due to its high subjective image quality and low bit rates. But the estimation of object motion parameters is still a difficult problem, and there is no proper error criterion for quality assessment that is consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature-point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness and end-nodes outside the blocks of the eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes in the feature point positions of the jaw. The areas of non-rigid expression motion can be rebuilt using a block-pasting method. An approach for estimating motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and an error function of the contour transition-turn rate are used as quality criteria. These criteria properly reflect the geometric image distortion caused by errors in the estimated motion parameters.

  8. Bayesian Estimation of the Scale Parameter of Inverse Weibull Distribution under the Asymmetric Loss Functions

    Directory of Open Access Journals (Sweden)

    Farhad Yahgmaei

    2013-01-01

    Full Text Available This paper proposes different methods of estimating the scale parameter of the inverse Weibull distribution (IWD). Specifically, the maximum likelihood estimator of the scale parameter of the IWD is introduced. We then derive the Bayes estimators for the scale parameter of the IWD by considering quasi, gamma, and uniform prior distributions under the squared error, entropy, and precautionary loss functions. Finally, the different proposed estimators are compared through extensive simulation studies in terms of their mean squared errors and the evolution of their risk functions.
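
    One concrete instance of such a Bayes estimator is easy to sketch: writing the inverse Weibull density as f(x) = a b x^-(a+1) exp(-b x^-a) with the shape a known, a gamma prior on the scale b is conjugate, and under squared-error loss the Bayes estimate is the posterior mean. The hyperparameters and sample below are assumed for illustration.

        import numpy as np

        rng = np.random.default_rng(7)
        a, b = 2.0, 1.5                        # assumed shape, true scale
        u = rng.random(500)
        x = (-np.log(u) / b) ** (-1.0 / a)     # inverse-CDF sampling, F(x) = exp(-b x**-a)

        p, q = 1.0, 1.0                        # Gamma(p, q) prior hyperparameters
        # Posterior is Gamma(p + n, q + sum(x**-a)); its mean is the
        # squared-error-loss Bayes estimate of the scale b.
        b_bayes = (p + x.size) / (q + np.sum(x ** -a))
        print(b_bayes)                         # close to 1.5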

  9. Estimates Of Genetic Parameters Of Body Weights Of Different ...

    African Journals Online (AJOL)

    Forty-four (44) farrowings were used to estimate the genetic parameters (heritability and repeatability) of body weight of pigs. Results obtained from the study showed that the heritability (h2) of birth and weaning weights was moderate (0.33±0.16 ...

  10. Visco-piezo-elastic parameter estimation in laminated plate structures

    DEFF Research Database (Denmark)

    Araujo, A. L.; Mota Soares, C. M.; Herskovits, J.

    2009-01-01

    A parameter estimation technique is presented in this article, for identification of elastic, piezoelectric and viscoelastic properties of active laminated composite plates with surface-bonded piezoelectric patches. The inverse method presented uses experimental data in the form of a set of measu...

  11. Re-estimating temperature-dependent consumption parameters in bioenergetics models for juvenile Chinook salmon

    Science.gov (United States)

    Plumb, John M.; Moffitt, Christine M.

    2015-01-01

    Researchers have cautioned against the borrowing of consumption and growth parameters from other species and life stages in bioenergetics growth models. In particular, the function that dictates temperature dependence in maximum consumption (Cmax) within the Wisconsin bioenergetics model for Chinook Salmon Oncorhynchus tshawytscha produces estimates that are lower than those measured in published laboratory feeding trials. We used published and unpublished data from laboratory feeding trials with subyearling Chinook Salmon from three stocks (Snake, Nechako, and Big Qualicum rivers) to estimate and adjust the model parameters for temperature dependence in Cmax. The data included growth measures in fish ranging from 1.5 to 7.2 g that were held at temperatures from 14°C to 26°C. Parameters for temperature dependence in Cmax were estimated based on relative differences in food consumption, and bootstrapping techniques were then used to estimate the error about the parameters. We found that at temperatures between 17°C and 25°C, the current parameter values did not match the observed data, indicating that Cmax should be shifted by about 4°C relative to the current implementation under the bioenergetics model. We conclude that the adjusted parameters for Cmax should produce more accurate predictions from the bioenergetics model for subyearling Chinook Salmon.

  12. An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.

    Directory of Open Access Journals (Sweden)

    Afnizanfaizal Abdullah

    Full Text Available The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.

  13. An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.

    Science.gov (United States)

    Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V

    2013-01-01

    The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.

  14. A Bayesian framework for parameter estimation in dynamical models.

    Directory of Open Access Journals (Sweden)

    Flávio Codeço Coelho

    Full Text Available Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
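
    As a toy illustration of this kind of workflow, the sketch below fits an SIR model's (beta, gamma) by a brute-force grid posterior with a Gaussian likelihood on synthetic incidence-like data. The framework in the paper is far more general (and uses real surveillance data); every value here is illustrative.

        import numpy as np
        from scipy.integrate import odeint

        def sir(y, t, beta, gamma):
            S, I, R = y
            return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

        t = np.linspace(0.0, 50.0, 51)
        y0 = [0.99, 0.01, 0.0]
        truth = odeint(sir, y0, t, args=(0.5, 0.2))[:, 1]   # infectious fraction
        rng = np.random.default_rng(4)
        obs = truth + 0.01 * rng.standard_normal(t.size)    # synthetic "data"

        betas = np.linspace(0.2, 0.8, 40)
        gammas = np.linspace(0.05, 0.4, 40)
        logpost = np.empty((betas.size, gammas.size))
        for i, bb in enumerate(betas):                      # flat prior assumed
            for j, gg in enumerate(gammas):
                I = odeint(sir, y0, t, args=(bb, gg))[:, 1]
                logpost[i, j] = -0.5 * np.sum((I - obs) ** 2) / 0.01 ** 2
        i, j = np.unravel_index(logpost.argmax(), logpost.shape)
        print(betas[i], gammas[j])                          # near (0.5, 0.2)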

  15. Improved Accuracy of Nonlinear Parameter Estimation with LAV and Interval Arithmetic Methods

    Directory of Open Access Journals (Sweden)

    Humberto Muñoz

    2009-06-01

    Full Text Available The reliable solution of nonlinear parameter estimation problems is an important computational problem in many areas of science and engineering, including such applications as real-time optimization. Its goal is to estimate accurate model parameters that provide the best fit to measured data, despite small-scale noise in the data or occasional large-scale measurement errors (outliers). In general, the estimation techniques are based on some kind of least squares or maximum likelihood criterion, and these require the solution of a nonlinear and non-convex optimization problem. Classical solution methods for these problems are local methods, and may not be reliable for finding the global optimum, with no guarantee the best model parameters have been found. Interval arithmetic can be used to compute completely and reliably the global optimum for the nonlinear parameter estimation problem. Finally, experimental results will compare the least squares, l2, and the least absolute value, l1, estimates using interval arithmetic in a chemical engineering application.
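
    The motivation for the least absolute value criterion is its robustness to outliers, which is easy to demonstrate without the interval-arithmetic machinery (the paper's actual contribution): inject one gross measurement error into a straight-line data set and compare the l2 and l1 fits. All data below are synthetic.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)
        x = np.linspace(0.0, 10.0, 30)
        y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(x.size)
        y[5] += 25.0                                           # one gross outlier

        A = np.column_stack([x, np.ones_like(x)])
        beta_l2 = np.linalg.lstsq(A, y, rcond=None)[0]         # least squares
        beta_l1 = minimize(lambda b: np.abs(A @ b - y).sum(),  # least absolute value
                           beta_l2, method="Nelder-Mead").x
        print("l2:", beta_l2)   # pulled away from (2, 1) by the outlier
        print("l1:", beta_l1)   # stays close to (2, 1)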

  16. Methodological Framework for World Health Organization Estimates of the Global Burden of Foodborne Disease.

    Directory of Open Access Journals (Sweden)

    Brecht Devleesschauwer

    Full Text Available The Foodborne Disease Burden Epidemiology Reference Group (FERG) was established in 2007 by the World Health Organization to estimate the global burden of foodborne diseases (FBDs). This paper describes the methodological framework developed by FERG's Computational Task Force to transform epidemiological information into FBD burden estimates. The global and regional burden of 31 FBDs was quantified, along with limited estimates for 5 other FBDs, using Disability-Adjusted Life Years in a hazard- and incidence-based approach. To accomplish this task, the following workflow was defined: outline of disease models and collection of epidemiological data; design and completion of a database template; development of an imputation model; identification of disability weights; probabilistic burden assessment; and estimation of the proportion of the disease burden of each hazard that is attributable to exposure by food (i.e., source attribution). All computations were performed in R and the different functions were compiled in the R package 'FERG'. Traceability and transparency were ensured by sharing results and methods in an interactive way with all FERG members throughout the process. We developed a comprehensive framework for estimating the global burden of FBDs, in which methodological simplicity and transparency were key elements. All the tools developed have been made available and can be translated into a user-friendly national toolkit for studying and monitoring food safety at the local level.

  17. Rapid estimation of high-parameter auditory-filter shapes

    Science.gov (United States)

    Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.

    2014-01-01

    A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials. PMID:25324086

  18. A Particle Smoother with Sequential Importance Resampling for soil hydraulic parameter estimation: A lysimeter experiment

    Science.gov (United States)

    Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry

    2013-04-01

    An adequate description of soil hydraulic properties is essential for good performance of hydrological forecasts. So far, several studies have shown that data assimilation can reduce parameter uncertainty by considering soil moisture observations. However, these observations, and also the model forcings, are recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with an SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single-time-step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real-world application, the experiment is conducted in a lysimeter environment.
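
    For orientation, a single sequential-importance-resampling step is sketched below: particles are weighted by the likelihood of one observation and then resampled. The smoother described above instead accumulates such weights over a window of time steps before resampling states and parameters. The Gaussian likelihood and all values here are assumed.

        import numpy as np

        rng = np.random.default_rng(6)
        particles = rng.normal(0.3, 0.1, 1000)   # e.g. candidate parameter values
        obs, sigma = 0.35, 0.05                  # assumed observation and its error

        w = np.exp(-0.5 * ((particles - obs) / sigma) ** 2)    # Gaussian likelihood
        w /= w.sum()                                           # normalized weights
        idx = rng.choice(particles.size, particles.size, p=w)  # multinomial resampling
        particles = particles[idx]
        print(particles.mean())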

  19. A PROPOSED METHODOLOGY FOR ESTIMATING ECOREGIONAL VALUES FOR OUTDOOR RECREATION IN THE UNITED STATES

    OpenAIRE

    Bhat, Gajanan; Bergstrom, John C.; Bowker, James Michael; Cordell, H. Ken

    1996-01-01

    This paper provides a methodology for the estimation of recreational demand functions and values using an ecoregional approach. Ten ecoregions in the continental US were defined based on similarly functioning ecosystem characteristics. The individual travel cost method was employed to estimate the recreational demand functions for activities such as motorboating and waterskiing, developed and primitive camping, coldwater fishing, sightseeing and pleasure driving, and big game hunting for each ecor...

  20. Power capability evaluation for lithium iron phosphate batteries based on multi-parameter constraints estimation

    Science.gov (United States)

    Wang, Yujie; Pan, Rui; Liu, Chang; Chen, Zonghai; Ling, Qiang

    2018-01-01

    The battery power capability is intimately correlated with the climbing, braking and accelerating performance of electric vehicles. Accurate power capability prediction can not only guarantee safety but also regulate driving behavior and optimize battery energy usage. However, the nonlinearity of the battery model is very complex, especially for lithium iron phosphate batteries. Besides, the hysteresis loop in the open-circuit voltage curve can easily cause large errors in model prediction. In this work, a multi-parameter constraints dynamic estimation method is proposed to predict the battery continuous period power capability. A high-fidelity battery model which considers the battery polarization and hysteresis phenomena is presented to approximate the high nonlinearity of the lithium iron phosphate battery. Explicit analyses of power capability with multiple constraints are elaborated; specifically, the state-of-energy is considered in the power capability assessment. Furthermore, to solve the problem of nonlinear system state estimation and suppress noise interference, a UKF-based state observer is employed for power capability prediction. The performance of the proposed methodology is demonstrated by experiments under different dynamic characterization schedules. The charge and discharge power capabilities of the lithium iron phosphate batteries are quantitatively assessed under different time scales and temperatures.

  1. A Multi-Year Study on Rice Morphological Parameter Estimation with X-Band Polsar Data

    Directory of Open Access Journals (Sweden)

    Onur Yuzugullu

    2017-06-01

    Full Text Available Rice fields have been monitored with spaceborne Synthetic Aperture Radar (SAR) systems for decades. SAR is an essential source of data and allows for the estimation of plant properties such as canopy height, leaf area index, phenological phase, and yield. However, information on detailed plant morphology at meter-scale resolution is necessary for the development of better management practices. This letter presents the results of a procedure that estimates the stalk height, leaf length and leaf width of rice fields from copolar X-band TerraSAR-X time-series data based on the a priori phenological phase. The methodology includes a computationally efficient stochastic inversion algorithm of a metamodel that mimics a radiative-transfer-theory-driven electromagnetic scattering (EM) model. The EM model and its metamodel are employed to simulate the backscattering intensities from flooded rice fields based on their simplified physical structures. The results of the inversion procedure are found to be accurate for cultivation seasons from 2013 to 2015, with root mean square errors of less than 13.5 cm for stalk height, 7 cm for leaf length, and 4 mm for leaf width parameters. The results of this research provide new perspectives on the use of EM models and computationally efficient metamodels for agricultural management practices.

  2. A Sparse Bayesian Learning Algorithm With Dictionary Parameter Estimation

    DEFF Research Database (Denmark)

    Hansen, Thomas Lundgaard; Badiu, Mihai Alin; Fleury, Bernard Henri

    2014-01-01

    This paper concerns sparse decomposition of a noisy signal into atoms which are specified by unknown continuous-valued parameters. An example could be estimation of the model order, frequencies and amplitudes of a superposition of complex sinusoids. The common approach is to reduce the continuous...

  3. MPEG2 video parameter and no reference PSNR estimation

    DEFF Research Database (Denmark)

    Li, Huiying; Forchhammer, Søren

    2009-01-01

    MPEG coded video may be processed for quality assessment or postprocessed to reduce coding artifacts or transcoded. Utilizing information about the MPEG stream may be useful for these tasks. This paper deals with estimating MPEG parameter information from the decoded video stream without access t...

  4. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis

    Directory of Open Access Journals (Sweden)

    Tashkova Katerina

    2011-10-01

    Full Text Available Background: We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results: We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., the differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717), to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions: Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of

  5. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis.

    Science.gov (United States)

    Tashkova, Katerina; Korošec, Peter; Silc, Jurij; Todorovski, Ljupčo; Džeroski, Sašo

    2011-10-11

    We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. These results hold for both real and

  6. Estimation of population pharmacokinetic parameters of saquinavir in HIV patients with the MONOLIX software.

    Science.gov (United States)

    Lavielle, Marc; Mentré, France

    2007-04-01

    In nonlinear mixed-effects models, estimation methods based on a linearization of the likelihood are widely used although they have several methodological drawbacks. Kuhn and Lavielle (Comput. Statist. Data Anal. 49:1020-1038 (2005)) developed an estimation method which combines the SAEM (Stochastic Approximation EM) algorithm with an MCMC (Markov Chain Monte Carlo) procedure for maximum likelihood estimation in nonlinear mixed-effects models without linearization. This method is implemented in the Matlab software MONOLIX, which is available at http://www.math.u-psud.fr/~lavielle/monolix/logiciels. In this paper we apply MONOLIX to the analysis of the pharmacokinetics of saquinavir, a protease inhibitor, from concentrations measured after single dose administration in 100 HIV patients, some with advanced disease. We also illustrate how to use MONOLIX to build the covariate model using the Bayesian Information Criterion. Saquinavir oral clearance (CL/F) was estimated to be 1.26 L/h and to increase with body mass index, the inter-patient variability for CL/F being 120%. Several methodological developments are ongoing to extend SAEM, which is a very promising estimation method for population pharmacokinetic/pharmacodynamic analyses.

  7. Data Handling and Parameter Estimation

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist

    2016-01-01

    Modelling is one of the key tools at the disposal of modern wastewater treatment professionals, researchers and engineers. It enables them to study and understand complex phenomena underlying the physical, chemical and biological performance of wastewater treatment plants at different temporal… For the models selected to interpret the experimental data, this chapter uses available models from the literature that are mostly based on the Activated Sludge Model (ASM) framework and their appropriate extensions (Henze et al., 2000). The chapter presents an overview of the most commonly used methods in the estimation of parameters from experimental batch data, namely: (i) data handling and validation, (ii) … The chapter is aimed at wastewater treatment researchers, engineers, and professionals, and is also expected to be useful both for graduate teaching and as a stepping stone for academic researchers who wish to expand their theoretical interest in the subject.

  8. A Capacitance-Based Methodology for the Estimation of Piezoelectric Coefficients of Poled Piezoelectric Materials

    KAUST Repository

    Al Ahmad, Mahmoud; Alshareef, Husam N.

    2010-01-01

    A methodology is proposed to estimate the piezoelectric coefficients of bulk piezoelectric materials using simple capacitance measurements. The extracted values of d33 and d31 from the capacitance measurements were 506 pC/N and 247 pC/N, respectively.

  9. The influence of different PAST-based subspace trackers on DaPT parameter estimation

    Science.gov (United States)

    Lechtenberg, M.; Götze, J.

    2012-09-01

    In the context of parameter estimation, subspace-based methods like ESPRIT have become common. They require a subspace separation, e.g., based on eigenvalue/-vector decomposition. In time-varying environments, this can be done by subspace trackers. One class of these is based on the PAST algorithm. Our non-linear parameter estimation algorithm DaPT builds on top of the ESPRIT algorithm. Evaluation of the different variants of the PAST algorithm shows which of them is worthwhile in the context of frequency estimation.
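
    To make the subspace-tracking step concrete, the sketch below implements the basic PAST recursion (projection approximation subspace tracking) on a synthetic single-tone array signal; the deflation and orthonormal variants compared in the record refine this core update, and the array size, frequency, and noise level here are illustrative.

    ```python
    # Sketch of the basic PAST recursion for tracking a rank-1 signal subspace.
    import numpy as np

    rng = np.random.default_rng(0)
    n, r, beta = 8, 1, 0.97                       # sensors, rank, forgetting factor
    W = np.linalg.qr(rng.normal(size=(n, r)))[0]  # initial orthonormal basis
    P = np.eye(r)

    f = 0.12                                       # normalized tone frequency
    steer = np.exp(2j * np.pi * f * np.arange(n))  # true subspace direction

    for t in range(500):
        x = steer * np.exp(2j * np.pi * f * t)
        x = x + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))
        y = W.conj().T @ x                         # projection approximation
        h = P @ y
        g = h / (beta + np.vdot(y, h).real)
        P = (P - np.outer(g, h.conj())) / beta     # RLS-style inverse update
        W = W + np.outer(x - W @ y, g.conj())      # subspace basis update

    match = abs(np.vdot(W[:, 0], steer)) / (
        np.linalg.norm(W[:, 0]) * np.linalg.norm(steer))
    print(match)   # close to 1 when the subspace is tracked correctly
    ```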

  10. Prediction of dosage-based parameters from the puff dispersion of airborne materials in urban environments using the CFD-RANS methodology

    Science.gov (United States)

    Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.

    2018-02-01

    One of the key issues of recent research on the dispersion inside complex urban environments is the ability to predict dosage-based parameters from the puff release of an airborne material from a point source in the atmospheric boundary layer inside the built-up area. The present work addresses the question of whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict ensemble-average dosage-based parameters that are related to the puff dispersion. RANS simulations with the ADREA-HF code were, therefore, performed, where a single puff was released in each case. The present method is validated against the data sets from two wind-tunnel experiments. In each experiment, more than 200 puffs were released from which ensemble-averaged dosage-based parameters were calculated and compared to the model's predictions. The performance of the model was evaluated using scatter plots and three validation metrics: fractional bias, normalized mean square error, and factor of two. The model presented a better performance for the temporal parameters (i.e., ensemble-average times of puff arrival, peak, leaving, duration, ascent, and descent) than for the ensemble-average dosage and peak concentration. The majority of the obtained values of validation metrics were inside established acceptance limits. Based on the obtained model performance indices, the CFD-RANS methodology as implemented in the code ADREA-HF is able to predict the ensemble-average temporal quantities related to transient emissions of airborne material in urban areas within the range of the model performance acceptance criteria established in the literature. The CFD-RANS methodology as implemented in the code ADREA-HF is also able to predict the ensemble-average dosage, but the dosage results should be treated with some caution, as in one case the observed ensemble-average dosage was under-estimated by slightly more than the acceptance criteria allow.
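
    The three validation metrics named in the record have standard definitions in the model-evaluation literature, sketched below for paired observed/predicted values; the sample arrays are placeholders.

    ```python
    # Standard definitions of fractional bias (FB), normalized mean square
    # error (NMSE), and factor of two (FAC2) for paired obs/pred arrays.
    import numpy as np

    def fractional_bias(obs, pred):
        return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

    def nmse(obs, pred):
        return np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())

    def fac2(obs, pred):
        ratio = pred / obs
        return np.mean((ratio >= 0.5) & (ratio <= 2.0))

    obs = np.array([1.2, 0.8, 2.5, 1.9, 0.4])     # placeholder data
    pred = np.array([1.0, 1.1, 2.0, 2.4, 0.3])
    print(fractional_bias(obs, pred), nmse(obs, pred), fac2(obs, pred))
    ```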

  11. Estimation of Medium Voltage Cable Parameters for PD Detection

    DEFF Research Database (Denmark)

    Villefrance, Rasmus; Holbøll, Joachim T.; Henriksen, Mogens

    1998-01-01

    Medium voltage cable characteristics have been determined with respect to the parameters having influence on the evaluation of results from PD-measurements on paper/oil and XLPE-cables. In particular, parameters essential for discharge quantification and location were measured. In order to relate… and phase constants. A method to estimate this propagation constant, based on high frequency measurements, will be presented and will be applied to different cable types under different conditions. The influence of temperature and test voltage was investigated. The relevance of the results for cable…

  12. Efficient fuzzy Bayesian inference algorithms for incorporating expert knowledge in parameter estimation

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad

    2016-05-01

    Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision essentially embedded in expert-provided information. In order to solve this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', which is the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features that make it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows both uncertainty and imprecision to be modeled distinctly, and (3) it presents a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. However, an important obstacle in employing fuzzy Bayesian inference in groundwater numerical modeling applications is the computational burden, as the required number of numerical model simulations often becomes extremely large and computationally infeasible. In this paper, a novel approach to accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. Then the proposed approach is applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf.
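
    The screening idea can be sketched independently of the fuzzy machinery: a cheap surrogate of the log-likelihood, built from a few pilot runs, filters candidate parameters so the expensive model is evaluated only where the approximate posterior is non-negligible. The one-parameter "model", surrogate form, and screening threshold below are all hypothetical.

    ```python
    # Sketch of surrogate-based screening for expensive Bayesian inference.
    import numpy as np

    def expensive_model(theta):            # placeholder for a groundwater run
        return np.sin(3 * theta) + theta ** 2

    y_obs = expensive_model(0.4) + 0.05    # synthetic observation

    def log_like(theta):                   # Gaussian log-likelihood
        return -0.5 * ((expensive_model(theta) - y_obs) / 0.1) ** 2

    grid = np.linspace(-1, 1, 15)                          # pilot runs only
    coeff = np.polyfit(grid, [log_like(t) for t in grid], deg=6)

    candidates = np.random.default_rng(0).uniform(-1, 1, 5000)
    score = np.polyval(coeff, candidates)                  # surrogate, no model runs
    keep = candidates[score > score.max() - 6.0]           # screening step
    post = [log_like(t) for t in keep]                     # full runs on survivors
    print(len(keep), keep[int(np.argmax(post))])           # far fewer than 5000 runs
    ```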

  13. Development of the methodology for estimation of dose from a source

    International Nuclear Information System (INIS)

    Golebaone, E.M.

    2012-04-01

    The geometry of a source plays an important role in determining which method to apply in order to accurately estimate the dose from that source. If the wrong source geometry is used, the received dose may be under- or overestimated, which may lead to wrong decisions in dealing with the exposure situation. In this project a moisture density gauge was used to represent a point source, in order to demonstrate the key parameters to be used when estimating the dose from a point source. The parameters to be considered are the activity of the source, the ambient dose rate, the gamma constant of the radionuclide, and the transport index on the package of the source. The distance from the source and the time spent in the radiation field must be known in order to calculate the dose. (author)
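
    For a point source, the listed parameters combine through the inverse-square law; a minimal sketch follows, with an assumed (roughly Cs-137-like) gamma constant.

    ```python
    # Sketch of the point-source dose calculation: dose rate from the gamma
    # constant and activity via the inverse-square law, times exposure time.
    GAMMA = 0.092   # mSv*m^2/(GBq*h), assumed value, roughly Cs-137-like

    def point_source_dose(activity_gbq, distance_m, time_h):
        """Dose in mSv received at distance_m over time_h from a point source."""
        dose_rate = GAMMA * activity_gbq / distance_m ** 2   # mSv/h
        return dose_rate * time_h

    print(point_source_dose(activity_gbq=1.85, distance_m=2.0, time_h=0.5))
    ```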

  14. Parameter estimation for stiff deterministic dynamical systems via ensemble Kalman filter

    International Nuclear Information System (INIS)

    Arnold, Andrea; Calvetti, Daniela; Somersalo, Erkki

    2014-01-01

    A commonly encountered problem in numerous areas of applications is to estimate the unknown coefficients of a dynamical system from direct or indirect observations at discrete times of some of the components of the state vector. A related problem is to estimate unobserved components of the state. An egregious example of such a problem is provided by metabolic models, in which the numerous model parameters and the concentrations of the metabolites in tissue are to be estimated from concentration data in the blood. A popular method for addressing similar questions in stochastic and turbulent dynamics is the ensemble Kalman filter (EnKF), a particle-based filtering method that generalizes classical Kalman filtering. In this work, we adapt the EnKF algorithm for deterministic systems in which the numerical approximation error is interpreted as a stochastic drift with variance based on classical error estimates of numerical integrators. This approach, which is particularly suitable for stiff systems where the stiffness may depend on the parameters, allows us to effectively exploit the parallel nature of particle methods. Moreover, we demonstrate how spatial prior information about the state vector, which helps the stability of the computed solution, can be incorporated into the filter. The viability of the approach is shown by computed examples, including a metabolic system modeling an ischemic episode in skeletal muscle, with a high number of unknown parameters. (paper)
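
    A minimal joint state-parameter EnKF illustrates the idea: the unknown coefficient is appended to the state, each ensemble member is propagated through the integrator, and a small stochastic jitter stands in for the record's numerical-error-based drift model. The scalar decay system and tuning values are illustrative.

    ```python
    # Sketch: joint state-parameter EnKF on a scalar decay ODE (illustrative).
    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(0)
    k_true, dt, steps, N = 0.5, 0.5, 40, 50

    def step(x0, k):   # propagate one assimilation window
        return solve_ivp(lambda t, x: [-k * x[0]], (0, dt), x0).y[:, -1]

    x_t = np.array([5.0])
    ens = np.column_stack([rng.uniform(3, 7, N), rng.uniform(0.1, 1.5, N)])  # [x, k]

    for _ in range(steps):
        x_t = step(x_t, k_true)
        y_obs = x_t[0] + rng.normal(0, 0.05)               # noisy observation of x
        for i in range(N):                                  # forecast each member
            ens[i, 0] = step(ens[i, :1], ens[i, 1])[0]
        ens += rng.normal(0, 1e-3, ens.shape)               # drift / jitter term
        C = np.cov(ens.T)                                   # 2x2 sample covariance
        K = C[:, 0] / (C[0, 0] + 0.05 ** 2)                 # Kalman gain (scalar obs)
        ens += np.outer(y_obs + rng.normal(0, 0.05, N) - ens[:, 0], K)

    print(ens[:, 1].mean())   # ensemble mean of k should drift toward 0.5
    ```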

  15. Methodological framework for World Health Organization estimates of the global burden of foodborne disease

    NARCIS (Netherlands)

    B. Devleesschauwer (Brecht); J.A. Haagsma (Juanita); F.J. Angulo (Frederick); D.C. Bellinger (David); D. Cole (Dana); D. Döpfer (Dörte); A. Fazil (Aamir); E.M. Fèvre (Eric); H.J. Gibb (Herman); T. Hald (Tine); M.D. Kirk (Martyn); R.J. Lake (Robin); C. Maertens De Noordhout (Charline); C. Mathers (Colin); S.A. McDonald (Scott); S.M. Pires (Sara); N. Speybroeck (Niko); M.K. Thomas (Kate); D. Torgerson; F. Wu (Felicia); A.H. Havelaar (Arie); N. Praet (Nicolas)

    2015-01-01

    Background: The Foodborne Disease Burden Epidemiology Reference Group (FERG) was established in 2007 by the World Health Organization to estimate the global burden of foodborne diseases (FBDs). This paper describes the methodological framework developed by FERG's Computational Task Force.

  16. A Bayesian consistent dual ensemble Kalman filter for state-parameter estimation in subsurface hydrology

    KAUST Repository

    Ait-El-Fquih, Boujemaa; El Gharamti, Mohamad; Hoteit, Ibrahim

    2016-01-01

    Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface ground-water models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem from which we derive a new dual-type EnKF, the dual EnKF(OSA). Compared with the standard dual EnKF, it imposes a new update step to the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKFOSA, and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25% more accurate state and parameter estimations than the joint and dual approaches.

  17. Regression to fuzziness method for estimation of remaining useful life in power plant components

    Science.gov (United States)

    Alamaniotis, Miltiadis; Grelle, Austin; Tsoukalas, Lefteri H.

    2014-10-01

    Mitigation of severe accidents in power plants requires the reliable operation of all systems and the on-time replacement of mechanical components. Therefore, the continuous surveillance of power systems is a crucial concern for the overall safety, cost control, and on-time maintenance of a power plant. In this paper a methodology called regression to fuzziness is presented that estimates the remaining useful life (RUL) of power plant components. The RUL is defined as the difference between the time that a measurement was taken and the estimated failure time of that component. The methodology aims to compensate for a potential lack of historical data by modeling an expert's operational experience and expertise applied to the system. It initially identifies critical degradation parameters and their associated value ranges. Once these are identified, the operator's experience is modeled through fuzzy sets which span the entire parameter range. This model is then used synergistically with linear regression and a component's failure point to estimate the RUL. The proposed methodology is tested on estimating the RUL of a turbine (the basic electrical generating component of a power plant) in three different cases. Results demonstrate the benefits of the methodology for components for which operational data are not readily available and emphasize the significance of the selection of fuzzy sets and the effect of knowledge representation on the predicted output. To verify the effectiveness of the methodology, it was benchmarked against a data-based simple linear regression model used for predictions, which was shown to perform equally well or worse than the presented methodology. Furthermore, the methodology comparison highlighted the improvement in estimation offered by the adoption of appropriate fuzzy sets for parameter representation.
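
    The core of the approach, expert knowledge encoded as fuzzy sets plus linear regression extrapolated to a failure point, can be sketched compactly; the membership ranges, degradation data, and failure threshold below are hypothetical.

    ```python
    # Sketch: fuzzy-weighted regression extrapolated to a failure point -> RUL.
    import numpy as np

    def triangular(x, a, b, c):
        """Triangular fuzzy membership over [a, c], peaking at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    t = np.arange(0.0, 10.0)                      # operating time (arbitrary units)
    wear = 0.2 + 0.05 * t + np.random.default_rng(1).normal(0, 0.01, t.size)

    # expert-defined "moderate degradation" set; weight samples by membership
    w = triangular(wear, 0.2, 0.45, 0.7)
    slope, intercept = np.polyfit(t, wear, 1, w=w)

    FAILURE = 0.9                                  # expert-supplied failure point
    rul = (FAILURE - wear[-1]) / slope
    print(f"estimated RUL: {rul:.1f} (same units as t)")
    ```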

  18. PARAMETER ESTIMATION OF VALVE STICTION USING ANT COLONY OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    S. Kalaivani

    2012-07-01

    In this paper, a procedure for quantifying valve stiction in control loops based on ant colony optimization is proposed. Pneumatic control valves are widely used in the process industry. The control valve contains non-linearities such as stiction, backlash, and deadband that in turn cause oscillations in the process output. Stiction is one of the long-standing problems and the most severe problem in control valves. The measurement data from an oscillating control loop can thus be used as a possible diagnostic signal to provide an estimate of the stiction magnitude. Quantification of control valve stiction is still a challenging issue. Prior to stiction detection and quantification, it is necessary to choose a suitable model structure to describe control-valve stiction; to represent the stiction phenomenon, the Stenman model is used. Ant Colony Optimization (ACO), an intelligent swarm algorithm inspired by the natural trail-following behaviour of ants, has proven effective in various fields. The parameters of the Stenman model are estimated using ant colony optimization from the input-output data, by minimizing the error between the actual stiction model output and the simulated stiction model output. In this way, the Stenman model, with known nonlinear structure and unknown parameters, can be estimated.
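
    The Stenman model itself is a one-parameter description, so the estimation step is easy to sketch; a plain grid search stands in below for the ant colony optimizer, and the input signal and true stiction band are illustrative.

    ```python
    # Sketch: fitting the one-parameter Stenman stiction model. A grid search
    # replaces the ACO of the record; signal and true band d are illustrative.
    import numpy as np

    def stenman(u, d):
        """Stenman model: the stem moves only when the demand jump exceeds d."""
        x = np.empty_like(u)
        x[0] = u[0]
        for k in range(1, len(u)):
            x[k] = u[k] if abs(u[k] - x[k - 1]) > d else x[k - 1]
        return x

    rng = np.random.default_rng(0)
    u = np.cumsum(rng.normal(0, 0.3, 400))            # controller output (OP)
    y = stenman(u, d=0.5) + rng.normal(0, 0.02, 400)  # measured valve position

    grid = np.linspace(0.0, 2.0, 401)
    errors = [np.sum((stenman(u, d) - y) ** 2) for d in grid]
    print(grid[int(np.argmin(errors))])               # should be near 0.5
    ```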

  19. Temporal Parameters Estimation for Wheelchair Propulsion Using Wearable Sensors

    Directory of Open Access Journals (Sweden)

    Manoela Ojeda

    2014-01-01

    Due to lower limb paralysis, individuals with spinal cord injury (SCI) rely on their upper limbs for mobility. The prevalence of upper extremity pain and injury is high among this population. We evaluated the performance of three triaxial accelerometers placed on the upper arm, wrist, and under the wheelchair, to estimate temporal parameters of wheelchair propulsion. Twenty-six participants with SCI were asked to push their wheelchair equipped with a SMARTWheel. The estimated stroke number was compared with the criterion from video observations, and the estimated push frequency was compared with the criterion from the SMARTWheel. Mean absolute errors (MAE) and mean absolute percentages of error (MAPE) were calculated. Intraclass correlation coefficients (ICC) and Bland-Altman plots were used to assess the agreement. Results showed reasonable accuracies, especially using the accelerometer placed on the upper arm, where the MAPE was 8.0% for stroke number and 12.9% for push frequency. The ICC was 0.994 for stroke number and 0.916 for push frequency. The wrist and seat accelerometers showed lower accuracy, with MAPEs for the stroke number of 10.8% and 13.4% and ICCs of 0.990 and 0.984, respectively. The results suggest that accelerometers could be an option for monitoring temporal parameters of wheelchair propulsion.
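
    The stroke-counting step can be sketched with simple peak detection on an arm-accelerometer signal; the synthetic signal and the peak-detection settings below are placeholders for tuned, device-specific values.

    ```python
    # Sketch: stroke count and push frequency from an accelerometer channel
    # via peak detection; signal and thresholds are illustrative.
    import numpy as np
    from scipy.signal import find_peaks

    fs = 50.0                                          # sampling rate, Hz
    t = np.arange(0, 30, 1 / fs)
    push_freq = 1.1                                    # pushes per second (truth)
    acc = np.sin(2 * np.pi * push_freq * t) ** 3 + \
          0.2 * np.random.default_rng(0).normal(size=t.size)

    peaks, _ = find_peaks(acc, height=0.5, distance=0.5 * fs)
    strokes = len(peaks)
    print(strokes, strokes / 30.0)                     # count and estimated Hz
    ```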

  1. SIMULTANEOUS ESTIMATION OF PHOTOMETRIC REDSHIFTS AND SED PARAMETERS: IMPROVED TECHNIQUES AND A REALISTIC ERROR BUDGET

    International Nuclear Information System (INIS)

    Acquaviva, Viviana; Raichoor, Anand; Gawiser, Eric

    2015-01-01

    We seek to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we demonstrate that if the uncertainties in the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters tend to be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the multi-dimensional probability distribution function in SED fitting + z parameter space, including all correlations. While the performance of joint SED fitting and photo-z estimation might be hindered by template incompleteness, we demonstrate that the latter is “flagged” by a large fraction of outliers in redshift, and that significant improvements can be achieved by using flexible stellar populations synthesis models and more realistic star formation histories. In all cases, we find that the median stellar age is better recovered than the time elapsed from the onset of star formation. Finally, we show that using a photometric redshift code such as EAZY to obtain redshift probability distributions that are then used as priors for SED fitting codes leads to only a modest bias in the SED fitting parameters and is thus a viable alternative to the simultaneous estimation of SED parameters and photometric redshifts

  2. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    This paper is concerned with modifications of the maximum likelihood, moments and percentile estimators of the two-parameter power function distribution. Sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
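
    For reference, the unmodified estimators being improved upon are straightforward; the sketch below computes the classical maximum likelihood and first-moment estimators for the two-parameter power function distribution, with illustrative parameter values.

    ```python
    # Classical estimators for the power function distribution
    # f(x) = g * x**(g-1) / b**g on (0, b); parameter values illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    g_true, b_true, n = 2.5, 4.0, 500
    x = b_true * rng.uniform(size=n) ** (1.0 / g_true)   # inverse-CDF sampling

    b_mle = x.max()                                      # MLE of the scale b
    g_mle = n / np.sum(np.log(b_mle / x))                # MLE of the shape g

    m = x.mean()                                         # E[X] = g*b/(g+1)
    g_mom = m / (b_mle - m)                              # first-moment estimator

    print(b_mle, g_mle, g_mom)    # compare against (4.0, 2.5)
    ```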

  3. Methodology for determining the value of complexity parameter for emergency situation during driving of the train

    Directory of Open Access Journals (Sweden)

    O. M. Horobchenko

    2015-12-01

    Purpose. During the development of intelligent control systems for locomotives there is a need to evaluate the current train situation in terms of traffic safety. In order to estimate the probability that various emergency situations develop into traffic accidents, it is necessary to determine their complexity. The purpose of this paper is to develop a methodology for determining the complexity of emergency situations during locomotive operation. Methodology. To achieve this purpose, statistical material on traffic safety violations was accumulated. The causes of violations are divided into groups: technical factors, human factors and external influences. Using the theory of hybrid networks, a model was obtained whose output is the complexity parameter of the emergency situation. Network type: multilayer perceptron with hybrid neurons in the first layer and a sigmoid activation function. Methods of probability theory were used for the analysis of the results. Findings. An approach to the formalization of operational situations that can only be described linguistically was developed, which allowed them to be used as input data to the model of the emergency situation. It was established and proved that the complexity exponent of an emergency situation during train driving is a random quantity that obeys the normal distribution law. A graph of the cumulative distribution function was obtained, which identifies the areas of safe operation and of increased accident risk. Originality. A theoretical basis for determining the complexity of emergency situations in train operation was proposed, and the maximum complexity value of an emergency situation admissible under operating conditions was obtained. Practical value. Constant monitoring of this value allows not only responding to the threat of danger, but also obtaining it in numerical form and using it as one of the input parameters for the intelligent control system.

  4. Model calibration and parameter estimation for environmental and water resource systems

    CERN Document Server

    Sun, Ne-Zheng

    2015-01-01

    This three-part book provides a comprehensive and systematic introduction to the development of useful models for complex systems. Part 1 covers the classical inverse problem for parameter estimation in both deterministic and statistical frameworks, Part 2 is dedicated to system identification, hyperparameter estimation, and model dimension reduction, and Part 3 considers how to collect data and construct reliable models for prediction and decision-making. For the first time, topics such as multiscale inversion, stochastic field parameterization, level set method, machine learning, global sensitivity analysis, data assimilation, model uncertainty quantification, robust design, and goal-oriented modeling are systematically described and summarized in a single book from the perspective of model inversion, and elucidated with numerical examples from environmental and water resources modeling. Readers of this book will not only learn basic concepts and methods for simple parameter estimation, but also get familiar…

  5. Revised models and genetic parameter estimates for production and ...

    African Journals Online (AJOL)

    Genetic parameters for production and reproduction traits in the Elsenburg Dormer sheep stud were estimated using records of 11743 lambs born between 1943 and 2002. An animal model with direct and maternal additive, maternal permanent and temporary environmental effects was fitted for the traits considered…

  6. Kinetic parameter estimation model for anaerobic co-digestion of waste activated sludge and microalgae.

    Science.gov (United States)

    Lee, Eunyoung; Cumberbatch, Jewel; Wang, Meng; Zhang, Qiong

    2017-03-01

    Anaerobic co-digestion has a potential to improve biogas production, but limited kinetic information is available for co-digestion. This study introduced regression-based models to estimate the kinetic parameters for the co-digestion of microalgae and Waste Activated Sludge (WAS). The models were developed using the ratios of co-substrates and the kinetic parameters for the single substrate as indicators. The models were applied to the modified first-order kinetics and Monod model to determine the rate of hydrolysis and methanogenesis for the co-digestion. The results showed that the model using a hyperbola function was better for the estimation of the first-order kinetic coefficients, while the model using an inverse tangent function closely estimated the Monod kinetic parameters. The models can be used for estimating kinetic parameters for not only microalgae-WAS co-digestion but also other substrates' co-digestion such as microalgae-swine manure and WAS-aquatic plants.

  7. Robust and efficient parameter estimation in dynamic models of biological systems.

    Science.gov (United States)

    Gábor, Attila; Banga, Julio R

    2015-10-29

    Dynamic modelling provides a systematic framework to understand function in biological systems. Parameter estimation in nonlinear dynamic models remains a very challenging inverse problem due to its nonconvexity and ill-conditioning. Associated issues like overfitting and local solutions are usually not properly addressed in the systems biology literature despite their importance. Here we present a method for robust and efficient parameter estimation which uses two main strategies to surmount the aforementioned difficulties: (i) efficient global optimization to deal with nonconvexity, and (ii) proper regularization methods to handle ill-conditioning. In the case of regularization, we present a detailed critical comparison of methods and guidelines for properly tuning them. Further, we show how regularized estimations ensure the best trade-offs between bias and variance, reducing overfitting, and allowing the incorporation of prior knowledge in a systematic way. We illustrate the performance of the presented method with seven case studies of different nature and increasing complexity, considering several scenarios of data availability, measurement noise and prior knowledge. We show how our method ensures improved estimations with faster and more stable convergence. We also show how the calibrated models are more generalizable. Finally, we give a set of simple guidelines to apply this strategy to a wide variety of calibration problems. Here we provide a parameter estimation strategy which combines efficient global optimization with a regularization scheme. This method is able to calibrate dynamic models in an efficient and robust way, effectively fighting overfitting and allowing the incorporation of prior information.

  8. A Novel Methodology to Estimate Single-Tree Biophysical Parameters from 3D Digital Imagery Compared to Aerial Laser Scanner Data

    Directory of Open Access Journals (Sweden)

    Rocío Hernández-Clemente

    2014-11-01

    Airborne laser scanner (ALS) data provide an enhanced capability to remotely map two key variables in forestry: leaf area index (LAI) and tree height (H). Nevertheless, the cost, complexity and accessibility of this technology are not yet suited for meeting the broad demands required for estimating and frequently updating forest data. Here we demonstrate the capability of alternative solutions based on the use of low-cost color infrared (CIR) cameras to estimate tree-level parameters, providing a cost-effective solution for forest inventories. ALS data were acquired with a Leica ALS60 laser scanner and digital aerial imagery (DAI) was acquired with a consumer-grade camera modified for color infrared detection and synchronized with a GPS unit. In this paper we evaluate the generation of a DAI-based canopy height model (CHM) from imagery obtained with low-cost CIR cameras using structure from motion (SfM) and spatial interpolation methods in the context of a complex canopy, as in forestry. Metrics were calculated from the DAI-based CHM and the DAI-based Normalized Difference Vegetation Index (NDVI) for the estimation of tree height and LAI, respectively. Results were compared with the models estimated from ALS point cloud metrics. Field measurements of tree height and effective leaf area index (LAIe) were acquired from a total of 200 and 26 trees, respectively. Comparable accuracies were obtained in the tree height and LAI estimations using ALS and DAI data independently. Tree height estimated from DAI-based metrics (Percentile 90 (P90) and minimum height (MinH)) yielded a coefficient of determination (R2) of 0.71 and a root mean square error (RMSE) of 0.71 m, while models derived from ALS-based metrics (P90) yielded an R2 of 0.80 and an RMSE of 0.55 m. The estimation of LAI from DAI-based NDVI using Percentile 99 (P99) yielded an R2 of 0.62 and an RMSE of 0.17 m2/m2.

  9. Time-course window estimator for ordinary differential equations linear in the parameters

    NARCIS (Netherlands)

    Vujacic, Ivan; Dattner, Itai; Gonzalez, Javier; Wit, Ernst

    In many applications, obtaining ordinary differential equation descriptions of dynamic processes is scientifically important. In both Bayesian and likelihood approaches to estimating parameters of ordinary differential equations, the speed and the convergence of the estimation procedure may…

  10. Parameter estimations in predictive microbiology: Statistically sound modelling of the microbial growth rate.

    Science.gov (United States)

    Akkermans, Simen; Logist, Filip; Van Impe, Jan F

    2018-04-01

    When building models to describe the effect of environmental conditions on the microbial growth rate, parameter estimations can be performed either with a one-step method, i.e., directly on the cell density measurements, or with a two-step method, i.e., via the estimated growth rates. The two-step method is often preferred due to its simplicity. The current research demonstrates that the two-step method is, however, only valid if the correct data transformation is applied and a strict experimental protocol is followed for all experiments. Based on a simulation study and a mathematical derivation, it was demonstrated that the logarithm of the growth rate should be used as a variance-stabilizing transformation. Moreover, the one-step method leads to a more accurate estimation of the model parameters and a better approximation of the confidence intervals on the estimated parameters. Therefore, the one-step method is preferred and the two-step method should be avoided.
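
    The recommended transformation is easy to demonstrate: with multiplicative error on the estimated growth rates, the secondary-model fit is done on ln(mu). The square-root-type secondary model and all numbers below are illustrative.

    ```python
    # Sketch: two-step secondary-model fit on the log of the growth rate.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    T = np.linspace(5.0, 35.0, 13)                     # temperature, deg C
    mu_true = (0.03 * (T - 2.0)) ** 2                  # sqrt(mu) = b * (T - Tmin)
    mu_est = mu_true * np.exp(rng.normal(0, 0.15, T.size))  # multiplicative noise

    model = lambda T, b, Tmin: 2.0 * np.log(b * (T - Tmin))  # ln(mu) model
    popt, _ = curve_fit(model, T, np.log(mu_est), p0=[0.05, 0.0],
                        bounds=([1e-3, -10.0], [1.0, 4.9]))
    print(popt)   # compare with the generating values (0.03, 2.0)
    ```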

  11. Maximum likelihood estimation of biophysical parameters of synaptic receptors from macroscopic currents

    Directory of Open Access Journals (Sweden)

    Andrey Stepanyuk

    2014-10-01

    Dendritic integration and neuronal firing patterns strongly depend on the biophysical properties of synaptic ligand-gated channels. However, precise estimation of the biophysical parameters of these channels in their intrinsic environment is a complicated and still unresolved problem. Here we describe a novel method based on a maximum likelihood approach that allows estimation not only of the unitary current of synaptic receptor channels but also of their multiple conductance levels, kinetic constants, the number of receptors bound with a neurotransmitter, and the peak open probability, from an experimentally feasible number of postsynaptic currents. The new method also improves the accuracy of the evaluation of unitary current as compared to peak-scaled non-stationary fluctuation analysis, making it possible to precisely estimate this important parameter from a few postsynaptic currents recorded in steady-state conditions. Estimation of unitary current with this method is robust even if postsynaptic currents are generated by receptors having different kinetic parameters, a case in which peak-scaled non-stationary fluctuation analysis is not applicable. Thus, with the new method, routinely recorded postsynaptic currents can be used to study the properties of synaptic receptors in their native biochemical environment.

  12. Contributions to a methodology for periodical verification of the parameters of the control systems at Cernavoda Nuclear plant Unit 1

    International Nuclear Information System (INIS)

    Tapu, Cornel; Anescu, George

    1998-01-01

    A model identification methodology for periodical verification of the regulating system parameters at Cernavoda NPP Unit 1 was developed. To support this methodology, the computer program MODELIDENT was implemented in the Java programming language. This program is used for off-line evaluation of the characteristic parameters of the real regulating systems, using an identification algorithm which takes as input data the system response collected for different input excitation signals, a structurally similar model of the analyzed regulating system, and starting guess values of the unknown parameters. The real values of the parameters are determined during MODELIDENT program execution by applying an iterative algorithm and are afterwards retained as nominal reference values. The success of the identification algorithm is strongly dependent on how appropriately the structure of the model's transfer function is chosen. By repeating the identification method periodically, using newly collected data from the process, the current values of the parameters are determined. Any deviation of the new values relative to the nominal reference values is interpreted as de-calibration of the control equipment, and in this case corrective maintenance actions have to be taken. With the implementation of the presented methodology at Cernavoda NPP Unit 1, the preventive maintenance activity gains a predictive character, which can lead to the elimination of the possibility of major degradation in the performance of the RS equipment and consequently to increased NPP availability. On the basis of the experience gained in the practical application of the presented methodology, we expect that the identification method will also have beneficial effects in the optimal control of the process systems and in the activity of Full Scope Simulator software maintenance (the reference values of the identified parameters being used for fine tuning of the simulation models).

  13. Statistical estimation of nuclear reactor dynamic parameters

    International Nuclear Information System (INIS)

    Cummins, J.D.

    1962-02-01

    This report discusses the study of the noise in nuclear reactors and associated power plant. The report is divided into three distinct parts. In the first part parameters which influence the dynamic behaviour of some reactors will be specified and their effect on dynamic performance described. Methods of estimating dynamic parameters using statistical signals will be described in detail together with descriptions of the usefulness of the results, the accuracy and related topics. Some experiments which have been and which might be performed on nuclear reactors will be described. In the second part of the report a digital computer programme will be described. The computer programme derives the correlation functions and the spectra of signals. The programme will compute the frequency response, both gain and phase, for physical items of plant for which simultaneous recordings of input and output signal variations have been made. Estimations of the accuracy of the correlation functions and the spectra may be computed using the programme, and the amplitude distribution of signals may also be computed. The programme is written in autocode for the Ferranti Mercury computer. In the third part of the report a practical example of the use of the method and the digital programme is presented. In order to eliminate difficulties of interpretation a very simple plant model was chosen, i.e., a simple first-order lag. Several interesting properties of statistical signals were measured and will be discussed. (author)
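
    The spectral procedure the report describes, estimating gain and phase from simultaneous input/output records, can be sketched with Welch-type estimates via H(f) = Sxy(f)/Sxx(f); the first-order-lag "plant" below is illustrative.

    ```python
    # Sketch: frequency response (gain and phase) from input/output records
    # using cross- and auto-spectral density estimates.
    import numpy as np
    from scipy.signal import csd, welch, lfilter

    rng = np.random.default_rng(0)
    fs, n, tau = 10.0, 20000, 2.0
    u = rng.normal(size=n)                          # noise test signal (input)
    a = np.exp(-1 / (tau * fs))                     # discretized first-order lag
    y = lfilter([1 - a], [1, -a], u) + 0.01 * rng.normal(size=n)

    f, Sxy = csd(u, y, fs=fs, nperseg=1024)
    _, Sxx = welch(u, fs=fs, nperseg=1024)
    H = Sxy / Sxx
    gain, phase = np.abs(H), np.angle(H, deg=True)
    print(gain[1], phase[1])                        # low-frequency gain near 1
    ```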

  14. Four-dimensional parameter estimation of plane waves using swarming intelligence

    International Nuclear Information System (INIS)

    Zaman Fawad; Munir Fahad; Khan Zafar Ullah; Qureshi Ijaz Mansoor

    2014-01-01

    This paper proposes an efficient approach for four-dimensional (4D) parameter estimation of plane waves impinging on a 2-L shaped array. The 4D parameters include amplitude, frequency and the two-dimensional (2D) direction of arrival, namely, azimuth and elevation angles. The proposed approach is based on memetic computation, in which the global optimizer, particle swarm optimization, is hybridized with a rapid local search technique, pattern search. For this purpose, a new multi-objective fitness function is used. This fitness function is the combination of mean square error and the correlation between the normalized desired and estimated vectors. The proposed hybrid scheme is not only compared with the individual performances of particle swarm optimization and pattern search, but also with the performance of the hybrid genetic algorithm and that of the traditional approach. A large number of Monte Carlo simulations are carried out to validate the performance of the proposed scheme. It gives promising results in terms of estimation accuracy, convergence rate, proximity effect and robustness against noise.

  15. Estimation of economic parameters of U.S. hydropower resources

    Energy Technology Data Exchange (ETDEWEB)

    Hall, Douglas G. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Hunt, Richard T. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Reeves, Kelly S. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Carroll, Greg R. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL)

    2003-06-01

    Tools for estimating the cost of developing, operating, and maintaining hydropower resources, in the form of regression curves, were developed based on historical plant data. Development costs that were addressed included licensing, construction, and five types of environmental mitigation. It was found that the data for each type of cost correlated well with plant capacity. A tool for estimating the annual and monthly electric generation of hydropower resources was also developed. Additional tools were developed to estimate the cost of upgrading a turbine or a generator. The development and operation-and-maintenance cost estimating tools, and the generation estimating tool, were applied to 2,155 U.S. hydropower sites representing a total potential capacity of 43,036 MW. The sites included totally undeveloped sites, dams without a hydroelectric plant, and hydroelectric plants that could be expanded to achieve greater capacity. Site characteristics and estimated costs and generation for each site were assembled in a database in Excel format that is also included within the EERE Library under the title “Estimation of Economic Parameters of U.S. Hydropower Resources - INL Hydropower Resource Economics Database.”

  16. Regularization parameter selection methods for ill-posed Poisson maximum likelihood estimation

    International Nuclear Information System (INIS)

    Bardsley, Johnathan M; Goldes, John

    2009-01-01

    In image processing applications, image intensity is often measured via the counting of incident photons emitted by the object of interest. In such cases, image data noise is accurately modeled by a Poisson distribution. This motivates the use of Poisson maximum likelihood estimation for image reconstruction. However, when the underlying model equation is ill-posed, regularization is needed. Regularized Poisson likelihood estimation has been studied extensively by the authors, though a problem of high importance remains: the choice of the regularization parameter. We will present three statistically motivated methods for choosing the regularization parameter, and numerical examples will be presented to illustrate their effectiveness

  17. A coherent structure approach for parameter estimation in Lagrangian Data Assimilation

    Science.gov (United States)

    Maclean, John; Santitissadeekorn, Naratip; Jones, Christopher K. R. T.

    2017-12-01

    We introduce a data assimilation method to estimate model parameters with observations of passive tracers by directly assimilating Lagrangian Coherent Structures. Our approach differs from the usual Lagrangian Data Assimilation approach, where parameters are estimated based on tracer trajectories. We employ the Approximate Bayesian Computation (ABC) framework to avoid computing the likelihood function of the coherent structure, which is usually unavailable. We solve the ABC by a Sequential Monte Carlo (SMC) method, and use Principal Component Analysis (PCA) to identify the coherent patterns from tracer trajectory data. Our new method shows remarkably improved results compared to the bootstrap particle filter when the physical model exhibits chaotic advection.

  18. A Note on Parameter Estimation in the Composite Weibull–Pareto Distribution

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2018-02-01

    Composite models have received much attention in the recent actuarial literature to describe heavy-tailed insurance loss data. One of the models that performs well in describing this kind of data is the composite Weibull–Pareto (CWL) distribution. In this note, this distribution is revisited to carry out estimation of parameters via the mle and mle2 optimization functions in R. The results are compared with those obtained in a previous paper using the nlm function, in terms of analytical and graphical methods of model selection. In addition, the consistency of the parameter estimation is examined via a simulation study.

  19. Parameter Estimates in Differential Equation Models for Population Growth

    Science.gov (United States)

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
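
    A least-squares fit of the logistic model is short enough to sketch (here in Python rather than the Mathematica of the record); the data points are synthetic stand-ins for the historical ones.

    ```python
    # Sketch: estimating r and K in dP/dt = r*P*(1 - P/K) by fitting the
    # closed-form logistic solution; data are synthetic placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, r, K, P0):
        return K / (1 + (K / P0 - 1) * np.exp(-r * t))

    t = np.arange(0, 25, 2.0)
    P = logistic(t, 0.35, 660.0, 10.0) * (
        1 + 0.03 * np.random.default_rng(0).normal(size=t.size))

    popt, _ = curve_fit(logistic, t, P, p0=[0.1, 500.0, 5.0])
    print(popt)   # (r, K, P0), compare with (0.35, 660, 10)
    ```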

  20. Estimation of shear strength parameters of lateritic soils using…

    African Journals Online (AJOL)

    Only fragments of this record are available: the paper, published in the Nigerian Journal of Technology (NIJOTECH), concerns modeling tools based on the back-propagation learning algorithm for the prediction of shear strength parameters of lateritic soils.