WorldWideScience

Sample records for model hbm-derived estimates

  1. Continuous Time Model Estimation

    OpenAIRE

    Carl Chiarella; Shenhuai Gao

    2004-01-01

    This paper introduces an easy-to-follow method for continuous time model estimation. It serves as an introduction to how to convert a state space model from continuous time to discrete time, how to decompose a hybrid stochastic model into a trend model plus a noise model, how to estimate the trend model by simulation, and how to calculate standard errors from estimation of the noise model. It also discusses the numerical difficulties involved in discrete time models that bring about the unit ...
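
    The continuous-to-discrete conversion mentioned above can be illustrated with the standard zero-order-hold discretization of a linear state-space model, computed via a matrix exponential (the Van Loan block-matrix construction). A minimal Python/NumPy/SciPy sketch with an invented two-state system; this is a generic textbook construction, not the paper's own procedure:

        import numpy as np
        from scipy.linalg import expm

        def c2d(A, B, dt):
            """Zero-order-hold discretization of dx/dt = A x + B u."""
            n, m = B.shape
            # Van Loan trick: exponentiate an augmented block matrix so that
            # its top blocks contain the discrete-time Ad and Bd directly.
            M = np.zeros((n + m, n + m))
            M[:n, :n] = A
            M[:n, n:] = B
            Phi = expm(M * dt)
            return Phi[:n, :n], Phi[:n, n:]

        # Illustrative damped oscillator with one input, sampled at dt = 0.1.
        A = np.array([[0.0, 1.0], [-2.0, -0.5]])
        B = np.array([[0.0], [1.0]])
        Ad, Bd = c2d(A, B, 0.1)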

  2. Estimating nonlinear models

    Science.gov (United States)

    Billings, S. A.

    1988-03-01

    Time and frequency domain identification methods for nonlinear systems are reviewed. Parametric methods, prediction error methods, structure detection, model validation, and experiment design are discussed. Identification of a liquid level system, a heat exchanger, and a turbocharged automotive diesel engine are illustrated. Rational models are introduced. Spectral analysis for nonlinear systems is treated. Recursive estimation is mentioned.
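
    As a concrete instance of the parametric methods reviewed, a polynomial NARX model can be identified by ordinary least squares once candidate regressor terms are chosen. A hedged Python/NumPy sketch on simulated data; the model structure and true coefficients are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 500
        u = rng.uniform(-1.0, 1.0, N)          # input signal
        y = np.zeros(N)
        for k in range(1, N):                  # simulate a simple nonlinear system
            y[k] = (0.7 * y[k-1] + 0.4 * u[k-1] - 0.2 * y[k-1]**2
                    + 0.01 * rng.standard_normal())

        # Regressor matrix of candidate terms, then least-squares estimation.
        X = np.column_stack([y[:-1], u[:-1], y[:-1]**2])
        theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
        print(theta)                           # close to [0.7, 0.4, -0.2]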

  3. Methods of statistical model estimation

    CERN Document Server

    Hilbe, Joseph

    2013-01-01

    Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method. Th...
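
    The book's code is in R; as a language-neutral illustration of one listed algorithm, here is iteratively reweighted least squares for a Poisson regression with log link, sketched in Python/NumPy. A generic textbook version assuming X contains an intercept column, not code from the book:

        import numpy as np

        def irls_poisson(X, y, iters=25):
            """IRLS for a Poisson GLM: each step solves a weighted least squares."""
            beta = np.zeros(X.shape[1])
            for _ in range(iters):
                mu = np.exp(X @ beta)             # current fitted means (> 0)
                z = X @ beta + (y - mu) / mu      # working response
                W = mu                            # working weights
                WX = X * W[:, None]
                beta = np.linalg.solve(X.T @ WX, X.T @ (W * z))
            return beta

        rng = np.random.default_rng(0)
        X = np.column_stack([np.ones(200), rng.normal(size=200)])
        y = rng.poisson(np.exp(X @ np.array([0.5, 1.0])))
        print(irls_poisson(X, y))                 # close to [0.5, 1.0]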

  4. Estimating Functions and Semiparametric Models

    DEFF Research Database (Denmark)

    Labouriau, Rodrigo

    1996-01-01

    The thesis is divided into two parts. The first part treats some topics of the estimation theory for semiparametric models in general. There the classic optimality theory is reviewed and exposed in a way suitable for the further developments given afterwards. Further, the theory of estimating functions ... contained in this part of the thesis constitutes an original contribution. There can be found the detailed characterization of the class of regular estimating functions, a calculation of efficient regular asymptotic linear estimating sequences (i.e. the classical optimality theory) and a discussion of the attainability of the bounds for the concentration of regular asymptotic linear estimating sequences by estimators derived from estimating functions. The main class of models considered in the second part of the thesis (chapter 5) is constructed by assuming that the expectation of a number of given square ...

  5. Mode choice model parameters estimation

    OpenAIRE

    Strnad, Irena

    2010-01-01

    The present work focuses on parameter estimation for two mode choice models, multinomial logit and the EVA 2 model, where four different modes and five different trip purposes are taken into account. A mode choice model addresses the behavioral aspect of mode choice making and enables its application within a traffic model. The mode choice model includes the trip factors affecting the choice of each mode and their relative importance to the choice made. When trip factor values are known, it...

  6. Algebraic Lens Distortion Model Estimation

    Directory of Open Access Journals (Sweden)

    Luis Alvarez

    2010-07-01

    A very important property of the usual pinhole model for camera projection is that 3D lines in the scene are projected to 2D lines. Unfortunately, wide-angle lenses (especially low-cost lenses) may introduce a strong barrel distortion, which makes the usual pinhole model fail. Lens distortion models try to correct such distortion. We propose an algebraic approach to the estimation of the lens distortion parameters based on the rectification of lines in the image. Using the proposed method, the lens distortion parameters are obtained by minimizing a polynomial of total degree 4 in several variables. We perform numerical experiments using calibration patterns and real scenes to show the performance of the proposed method.
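
    Once the radial distortion coefficients have been estimated, undistorting a point amounts to inverting the radial model, which has no closed form but yields to fixed-point iteration. A hedged Python/NumPy sketch, assuming the usual two-coefficient model x_d = x_u (1 + k1 r^2 + k2 r^4) with coordinates normalized to the distortion centre; the coefficient values and iteration count are illustrative, not the paper's algebraic method:

        import numpy as np

        def undistort(points, k1, k2, iters=10):
            """Invert x_d = x_u * (1 + k1 r^2 + k2 r^4) by fixed-point iteration.
            points: (N, 2) distorted coordinates centred on the distortion centre."""
            und = points.astype(float).copy()
            for _ in range(iters):
                r2 = (und ** 2).sum(axis=1)
                factor = 1.0 + k1 * r2 + k2 * r2 ** 2
                und = points / factor[:, None]
            return und

        pts = np.array([[0.30, 0.40], [0.10, -0.20]])
        print(undistort(pts, k1=-0.25, k2=0.05))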

  7. FREQUENTIST MODEL AVERAGING ESTIMATION: A REVIEW

    Institute of Scientific and Technical Information of China (English)

    Haiying WANG; Xinyu ZHANG; Guohua ZOU

    2009-01-01

    In applications, the traditional estimation procedure generally begins with model selection. Once a specific model is selected, subsequent estimation is conducted under the selected model without consideration of the uncertainty from the selection process. This often leads to underreporting of variability and overly optimistic confidence sets. Model averaging estimation is an alternative to this procedure, which incorporates model uncertainty into the estimation process. In recent years, there has been rising interest in model averaging from the frequentist perspective, and important progress has been made. In this paper, the theory and methods of frequentist model averaging estimation are surveyed. Some future research topics are also discussed.
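
    One widely used frequentist scheme of the kind surveyed here is smoothed-AIC averaging: fit each candidate model, turn AIC differences into weights, and average the fitted values. A minimal Python/NumPy sketch; the candidate polynomial models and data are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 1.0, 60)
        y = 1.0 + 2.0 * x + 0.5 * x**2 + 0.1 * rng.standard_normal(60)

        fits, aics = [], []
        for degree in (1, 2, 3):                  # candidate nested models
            X = np.vander(x, degree + 1)
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = float(((y - X @ beta) ** 2).sum())
            aics.append(len(y) * np.log(rss / len(y)) + 2 * (degree + 1))
            fits.append(X @ beta)

        aics = np.array(aics)
        w = np.exp(-0.5 * (aics - aics.min()))    # smoothed-AIC weights
        w /= w.sum()
        y_avg = sum(wi * f for wi, f in zip(w, fits))  # model-averaged fit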

  8. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from ...

  9. Kalman filter estimation model in flood forecasting

    Science.gov (United States)

    Husain, Tahir

    Elementary precipitation and runoff estimation problems associated with hydrologic data collection networks are formulated in conjunction with the Kalman filter estimation model. Examples involve the estimation of runoff using data from a single precipitation station and also from a number of precipitation stations. The formulations demonstrate the role of the state-space, measurement, and estimation equations of the Kalman filter model in flood forecasting. To facilitate the formulation, the unit hydrograph concept and the antecedent precipitation index are adopted in the estimation model. The methodology is then applied to estimate various flood events in Carnation Creek, British Columbia.
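
    The state-space, measurement, and estimation equations referred to above reduce in practice to the Kalman filter's predict/update cycle. A generic single-step sketch in Python/NumPy, not tied to the paper's hydrologic formulation; all matrices are assumed given:

        import numpy as np

        def kalman_step(x, P, z, F, H, Q, R):
            """One predict/update cycle of the discrete Kalman filter."""
            # Predict state and covariance forward one step.
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            # Update with the new measurement z.
            S = H @ P_pred @ H.T + R              # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new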

  10. Discrete Choice Models - Estimation of Passenger Traffic

    DEFF Research Database (Denmark)

    Sørensen, Majken Vildrik

    2003-01-01

    ... for data, a literature review follows. Models applied for estimation of discrete choice models are described by their properties and limitations, and relations between these are established. Model types are grouped into three classes: hybrid choice models, tree models and latent class models. Relations between ... which simultaneously finds optimal coefficient values (utility elements) and parameter values (distributed terms) in the utility function. The shape of the distributed terms is specified prior to the estimation; hence, its validity is not tested during the estimation. The proposed method assesses the shape of the distribution from data by means of repetitive model estimation: one model is estimated for each sub-sample of data, and the shape of the distribution is assessed from between-model comparisons. This is not to be regarded as an alternative to MSL estimation, rather ...

  11. Outlier Rejecting Multirate Model for State Estimation

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The wavelet transform is introduced to detect and eliminate outliers in the time-frequency domain. By combining outlier rejection and multirate information extraction through the wavelet transform, a new outlier-rejecting multirate model for state estimation is proposed. The model is applied to state estimation with an interacting multiple model; as outliers are eliminated and more reasonable multirate information is extracted, the estimation accuracy is greatly enhanced. The simulation results prove that the new model is robust to outliers and that the estimation performance is significantly improved.

  12. Efficient Estimation in Heteroscedastic Varying Coefficient Models

    Directory of Open Access Journals (Sweden)

    Chuanhua Wei

    2015-07-01

    This paper considers statistical inference for the heteroscedastic varying coefficient model. We propose an estimator for the coefficient functions that is more efficient than the conventional local-linear estimator. We establish asymptotic normality for the proposed estimator and conduct simulations to illustrate the performance of the proposed method.

  13. Estimating Canopy Dark Respiration for Crop Models

    Science.gov (United States)

    Monje Mejia, Oscar Alberto

    2014-01-01

    Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.

  14. PARAMETER ESTIMATION OF ENGINEERING TURBULENCE MODEL

    Institute of Scientific and Technical Information of China (English)

    钱炜祺; 蔡金狮

    2001-01-01

    A parameter estimation algorithm is introduced and used to determine the parameters in the standard k-ε two-equation turbulence model (SKE). The estimation results show that although parameter estimation is an effective way to determine model parameters, it is difficult to obtain a single set of parameters for SKE that suits all kinds of separated flow, and a modification of the turbulence model structure should be considered. Therefore, a new nonlinear k-ε two-equation model (NNKE) is put forward in this paper and the corresponding parameter estimation technique is applied to determine the model parameters. By implementing the NNKE to solve some engineering turbulent flows, it is shown that NNKE is more accurate and versatile than SKE. Thus, the success of NNKE implies that the parameter estimation technique may have a bright prospect in engineering turbulence model research.

  15. Analysis of Empirical Software Effort Estimation Models

    CERN Document Server

    Basha, Saleem

    2010-01-01

    Reliable effort estimation remains an ongoing challenge to software engineers. Accurate effort estimation is the state of the art of software engineering; effort estimation of software is the preliminary phase between the client and the business enterprise. The relationship between the client and the business enterprise begins with the estimation of the software, and the credibility of the client to the business enterprise increases with accurate estimation. Effort estimation often requires generalizing from a small number of historical projects, and generalization from such limited experience is an inherently under-constrained problem. Accurate estimation is a complex process because it can be visualized as software effort prediction, and as the term indicates, a prediction never becomes an actual. This work follows the basics of the empirical software effort estimation models. The goal of this paper is to study empirical software effort estimation. The primary conclusion is that no single technique is best for all sit...

  16. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis through the last decade, but despite their significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian ... method based on a modified version of the EM algorithm. Experimental results for a deformable template used for textile inspection are presented.

  17. Parameter Estimation, Model Reduction and Quantum Filtering

    CERN Document Server

    Chase, Bradley A

    2009-01-01

    This dissertation explores the topics of parameter estimation and model reduction in the context of quantum filtering. Chapters 2 and 3 provide a review of classical and quantum probability theory, stochastic calculus and filtering. Chapter 4 studies the problem of quantum parameter estimation and introduces the quantum particle filter as a practical computational method for parameter estimation via continuous measurement. Chapter 5 applies these techniques in magnetometry and studies the estimator's uncertainty scalings in a double-pass atomic magnetometer. Chapter 6 presents an efficient feedback controller for continuous-time quantum error correction. Chapter 7 presents an exact model of symmetric processes of collective qubit systems.

  18. Mineral resources estimation based on block modeling

    Science.gov (United States)

    Bargawa, Waterman Sulistyana; Amri, Nur Ali

    2016-02-01

    The estimation in this paper uses three kinds of block models: nearest neighbor polygon, inverse distance squared and ordinary kriging. The techniques are weighting schemes based on the principle that a block's content is a linear combination of the grade data of the samples around the block being estimated. The case study is gold-silver resource modeling in the Pongkor area, where the mineralization is thought to occur as quartz veins formed by an epithermal hydrothermal process. Resource modeling includes data entry, statistical and variography analysis, the topographic and geological model, block model construction, estimation parameters, model presentation and tabulation of mineral resources. The skewed distribution is handled here by a robust semivariogram. The mineral resource classification generated in this model is based on an analysis of the kriging standard deviation and the number of samples used in the estimation of each block. The research results are used to evaluate the performance of the OK and IDS estimators. Based on visual and statistical analysis, it is concluded that the OK model gives estimates closer to the data used for modeling.
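
    Of the three estimators compared, inverse distance squared is the simplest to state: the block grade is a normalized, distance-weighted linear combination of surrounding sample grades. A minimal Python/NumPy sketch; the power and the small eps guard are illustrative choices:

        import numpy as np

        def idw_estimate(block_xy, sample_xy, grades, power=2.0, eps=1e-9):
            """Inverse-distance-weighted grade of one block centre."""
            d = np.linalg.norm(sample_xy - block_xy, axis=1)
            w = 1.0 / (d ** power + eps)   # power=2 gives inverse distance squared
            return float((w * grades).sum() / w.sum())

        samples = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
        grades = np.array([2.1, 3.4, 1.8])           # illustrative grade values
        print(idw_estimate(np.array([2.0, 3.0]), samples, grades))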

  19. Amplitude Models for Discrimination and Yield Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, William Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2016-09-01

    This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.

  20. Estimation of Wind Turbulence Using Spectral Models

    DEFF Research Database (Denmark)

    Soltani, Mohsen; Knudsen, Torben; Bak, Thomas

    2011-01-01

    The production and loading of wind farms are significantly influenced by the turbulence of the flowing wind field. Estimation of turbulence allows us to optimize the performance of the wind farm. Turbulence estimation is, however, highly challenging due to the chaotic behavior of the wind. In this paper, a method is presented for estimation of the turbulence. The spectral model of the wind is used in order to provide the estimations. The suggested estimation approach is applied to a case study in which the objective is to estimate the wind turbulence at desired points using measurements of wind speed outside the wind field. The results show that the method is able to provide estimations which explain more than 50% of the wind turbulence from a distance of about 300 meters.

  1. Bayesian estimation of the network autocorrelation model

    NARCIS (Netherlands)

    Dittrich, D.; Leenders, R.T.A.J.; Mulder, J.

    2017-01-01

    The network autocorrelation model has been extensively used by researchers interested in modeling social influence effects in social networks. The most common inferential method for the model is classical maximum likelihood estimation. This approach, however, has known problems such as negative bias of ...

  2. Parameter and Uncertainty Estimation in Groundwater Modelling

    DEFF Research Database (Denmark)

    Jensen, Jacob Birk

    The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions, and if these are to be made on solid grounds, the uncertainty attached to model results must be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models. Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study. The following two chapters concern calibration and uncertainty estimation. Essential issues relating to calibration are discussed. The classical regression methods are described; however, the main focus is on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The next two chapters describe case studies in which the GLUE methodology ...
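
    The GLUE methodology at the heart of the thesis is simple to sketch: sample parameter sets by Monte Carlo, score each simulation with an informal likelihood, discard non-behavioural runs, and weight predictions by the remaining likelihoods. A toy Python/NumPy sketch with an invented one-parameter recession model; the likelihood measure and the behavioural threshold are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 5.0, 20)

        def simulate(theta):
            # Toy exponential recession standing in for a groundwater model run.
            return 10.0 * np.exp(-theta * t)

        obs = simulate(0.8) + 0.3 * rng.standard_normal(t.size)

        thetas = rng.uniform(0.1, 2.0, 5000)     # Monte Carlo parameter sample
        sse = np.array([((simulate(th) - obs) ** 2).sum() for th in thetas])
        like = np.exp(-sse / sse.min())          # informal likelihood measure
        keep = like > 0.1                        # behavioural threshold
        w = like[keep] / like[keep].sum()
        print((w * thetas[keep]).sum())          # GLUE-weighted parameter estimate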

  3. INTEGRATED SPEED ESTIMATION MODEL FOR MULTILANE EXPRESSWAYS

    Science.gov (United States)

    Hong, Sungjoon; Oguchi, Takashi

    In this paper, an integrated speed-estimation model is developed based on empirical analyses for the basic sections of intercity multilane expressways under uncongested conditions. This model enables speed estimation for each lane at any site under arbitrary highway-alignment, traffic (traffic flow and truck percentage), and rainfall conditions. By combining this model and a lane-use model which estimates traffic distribution on the lanes by each vehicle type, it is also possible to estimate an average speed across all the lanes of one direction from a traffic demand by vehicle type under specific highway-alignment and rainfall conditions. This model is expected to be a tool for the evaluation of traffic performance for expressways when the performance measure is travel speed, which is necessary for Performance-Oriented Highway Planning and Design. Regarding the highway-alignment condition, two new estimators, called effective horizontal curvature and effective vertical grade, are proposed in this paper which take into account the influence of upstream and downstream alignment conditions. They are applied to the speed-estimation model, and it shows increased accuracy of the estimation.

  4. Model error estimation in ensemble data assimilation

    Directory of Open Access Journals (Sweden)

    S. Gillijns

    2007-01-01

    A new methodology is proposed to estimate and account for systematic model error in linear filtering as well as in nonlinear ensemble-based filtering. Our results extend the work of Dee and Todling (2000) on constant bias errors to time-varying model errors. In contrast to existing methodologies, the new filter can also deal with the case where no dynamical model for the systematic error is available. In the latter case, the applicability is limited by a matrix rank condition which has to be satisfied in order for the filter to exist. The performance of the filter developed in this paper is limited by the availability and the accuracy of observations and by the variance of the stochastic model error component. The effect of these aspects on the estimation accuracy is investigated in several numerical experiments using the Lorenz (1996) model. Experimental results indicate that the availability of a dynamical model for the systematic error significantly reduces the variance of the model error estimates, but has only a minor effect on the estimates of the system state. The filter is able to estimate additive model error of any type, provided that the rank condition is satisfied and that the stochastic errors and measurement errors are significantly smaller than the systematic errors. The results of this study are encouraging. However, it remains to be seen how the filter performs in more realistic applications.

  5. Regional fuzzy chain model for evapotranspiration estimation

    Science.gov (United States)

    Güçlü, Yavuz Selim; Subyani, Ali M.; Şen, Zekai

    2017-01-01

    Evapotranspiration (ET) is one of the main hydrological cycle components and has extreme importance for water resources management and agriculture, especially in arid and semi-arid regions. In this study, regional ET estimation models based on fuzzy logic (FL) principles are suggested, where the first stage includes the ET calculation via the Penman-Monteith equation, which produces reliable results. In the second phase, ET estimations are produced according to the conventional FL inference system model. In this paper, a regional fuzzy model (RFM) and a regional fuzzy chain model (RFCM) are proposed through the use of adjacent stations' data in order to fill in missing records. The application of the two models produces reliable and satisfactory results for mountainous and sea-region locations in the Kingdom of Saudi Arabia, but comparatively the RFCM estimations are more accurate. In general, the mean absolute percentage error is less than 10%, which is acceptable in practical applications.

  6. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  7. Conditional shape models for cardiac motion estimation

    DEFF Research Database (Denmark)

    Metz, Coert; Baka, Nora; Kirisli, Hortense

    2010-01-01

    We propose a conditional statistical shape model to predict patient specific cardiac motion from the 3D end-diastolic CTA scan. The model is built from 4D CTA sequences by combining atlas based segmentation and 4D registration. Cardiac motion estimation is, for example, relevant in the dynamic...

  8. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

    A robust face pose estimation approach is proposed in which face shape is described by a statistical model and pose parameters are represented by trigonometric functions. The face shape statistical model is first built by analyzing face shapes from different people under varying poses. Shape alignment is vital in the process of building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, the mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.

  9. Hydrograph estimation with fuzzy chain model

    Science.gov (United States)

    Güçlü, Yavuz Selim; Şen, Zekai

    2016-07-01

    Hydrograph peak discharge estimation is gaining more significance with unprecedented urbanization developments. Most of the existing models do not yield reliable peak discharge estimations for small basins, although they provide acceptable results for medium and large ones. In this study, a fuzzy chain model (FCM) is suggested by considering the necessary adjustments based on measurements over a small basin, the Ayamama basin, within Istanbul City, Turkey. FCM is based on the Mamdani and Adaptive Neuro-Fuzzy Inference System (ANFIS) methodologies, which yield peak discharge estimation. The suggested model is compared with two well-known approaches, namely the Soil Conservation Service (SCS)-Snyder and SCS-Clark methodologies. In all the methods, the hydrographs are obtained through the use of the dimensionless unit hydrograph concept. After the necessary modeling, computation, verification and adaptation stages, comparatively better hydrographs are obtained by FCM. The mean square error for FCM is many times smaller than for the other methodologies, which proves the outperformance of the suggested methodology.

  10. Bayesian mixture models for spectral density estimation

    OpenAIRE

    Cadonna, Annalisa

    2017-01-01

    We introduce a novel Bayesian modeling approach to spectral density estimation for multiple time series. Considering first the case of non-stationary time series, the log-periodogram of each series is modeled as a mixture of Gaussian distributions with frequency-dependent weights and mean functions. The implied model for the log-spectral density is a mixture of linear mean functions with frequency-dependent weights. The mixture weights are built through successive differences of a logit-normal di...
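
    The modeling target here, the log-periodogram, is computed directly from the discrete Fourier transform of each series. A minimal Python/NumPy sketch; demeaning and dropping the zero frequency are common conventions assumed here:

        import numpy as np

        def log_periodogram(x):
            """Frequencies and log-periodogram ordinates of a time series."""
            n = len(x)
            I = np.abs(np.fft.rfft(x - x.mean())) ** 2 / n   # periodogram
            freqs = np.fft.rfftfreq(n)
            return freqs[1:], np.log(I[1:])                  # drop frequency zero

        rng = np.random.default_rng(0)
        f, logI = log_periodogram(rng.standard_normal(256))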

  11. Estimation and uncertainty of reversible Markov models

    CERN Document Server

    Trendelkamp-Schroer, Benjamin; Paul, Fabian; Noé, Frank

    2015-01-01

    Reversibility is a key concept in the theory of Markov models, simplified kinetic models for the conformation dynamics of molecules. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model relies heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is therefore crucial to the successful application of the previously developed theory. In this work we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices with and without a given stationary vector, taking into account the need for a suitable prior distribution preserving the meta-stable features of the observed process during posterior inference.
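
    A standard route to the reversible maximum likelihood estimate (without a fixed stationary vector) is a self-consistent fixed-point iteration on a symmetric flow matrix built from the observed transition counts. A hedged Python/NumPy sketch of that iteration, assuming every state is visited and left at least once; the iteration count is illustrative and the code is not taken from the paper:

        import numpy as np

        def reversible_mle(C, iters=1000):
            """Reversible transition-matrix MLE from a count matrix C."""
            C = np.asarray(C, dtype=float)
            Csym = C + C.T                   # symmetrized counts
            c_i = C.sum(axis=1)              # total outgoing counts per state
            X = Csym / Csym.sum()            # symmetric initial guess
            for _ in range(iters):
                x_i = X.sum(axis=1)
                r = c_i / x_i
                X = Csym / (r[:, None] + r[None, :])   # fixed-point update
            return X / X.sum(axis=1, keepdims=True)    # row-stochastic T

        C = np.array([[90, 10, 0], [8, 80, 12], [0, 15, 85]])
        print(reversible_mle(C))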

  12. Developing Physician Migration Estimates for Workforce Models.

    Science.gov (United States)

    Holmes, George M; Fraher, Erin P

    2017-02-01

    To understand factors affecting specialty heterogeneity in physician migration. Physicians in the 2009 American Medical Association Masterfile data were matched to those in the 2013 file. Office locations were geocoded in both years to one of 293 areas of the country. Estimated utilization, calculated for each specialty, was used as the primary predictor of migration. Physician characteristics (e.g., specialty, age, sex) were obtained from the 2009 file. Area characteristics and other factors influencing physician migration (e.g., rurality, presence of teaching hospital) were obtained from various sources. We modeled physician location decisions as a two-part process: First, the physician decides whether to move. Second, conditional on moving, a conditional logit model estimates the probability a physician moved to a particular area. Separate models were estimated by specialty and whether the physician was a resident. Results differed between specialties and according to whether the physician was a resident in 2009, indicating heterogeneity in responsiveness to policies. Physician migration was higher between geographically proximate states with higher utilization for that specialty. Models can be used to estimate specialty-specific migration patterns for more accurate workforce modeling, including simulations to model the effect of policy changes. © Health Research and Educational Trust.

  13. Error estimation and adaptive chemical transport modeling

    Directory of Open Access Journals (Sweden)

    Malte Braack

    2014-09-01

    We present a numerical method to use several chemical transport models of increasing accuracy and complexity in an adaptive way. In large parts of the domain, a simplified chemical model may be used, whereas in certain regions a more complex model is needed for accuracy reasons. A mathematically derived error estimator measures the modeling error and provides information where to use more accurate models. The error is measured in terms of output functionals. Therefore, one has to consider adjoint problems which carry sensitivity information. This concept is demonstrated by means of ozone formation and pollution emission.

  14. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom)]; Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report an empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
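
    For the pair-comparison special case, the Bradley-Terry model, the maximum likelihood strengths can be computed with the classic minorization-maximization update (Hunter, 2004). A minimal Python/NumPy sketch, assuming a connected comparison graph, a zero diagonal in the win matrix, and at least one win per item so the update stays well defined:

        import numpy as np

        def bradley_terry_mm(wins, iters=200):
            """MM iteration for Bradley-Terry; wins[i, j] = times i beat j."""
            w = np.ones(wins.shape[0])
            W = wins.sum(axis=1)             # total wins per item
            N = wins + wins.T                # games played per pair
            for _ in range(iters):
                denom = (N / (w[:, None] + w[None, :])).sum(axis=1)
                w = W / denom
                w /= w.sum()                 # fix the arbitrary scale
            return w

        wins = np.array([[0, 7, 9], [3, 0, 6], [1, 4, 0]])
        print(bradley_terry_mm(wins))        # estimated strengths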

  15. Estimating Model Evidence Using Data Assimilation

    Science.gov (United States)

    Carrassi, Alberto; Bocquet, Marc; Hannart, Alexis; Ghil, Michael

    2017-04-01

    We review the field of data assimilation (DA) from a Bayesian perspective and show that, in addition to its by now common application to state estimation, DA may be used for model selection. An important special case of the latter is the discrimination between a factual model, which corresponds, to the best of the modeller's knowledge, to the situation in the actual world in which a sequence of events has occurred, and a counterfactual model, in which a particular forcing or process might be absent or just quantitatively different from the actual world. Three different ensemble-DA methods are reviewed for this purpose: the ensemble Kalman filter (EnKF), the ensemble four-dimensional variational smoother (En-4D-Var), and the iterative ensemble Kalman smoother (IEnKS). An original contextual formulation of model evidence (CME) is introduced. It is shown how to apply these three methods to compute CME, using the approximated time-dependent probability distribution functions (pdfs) each of them provides in the process of state estimation. The theoretical formulae so derived are applied to two simplified nonlinear and chaotic models: (i) the Lorenz three-variable convection model (L63), and (ii) the Lorenz 40-variable midlatitude atmospheric dynamics model (L95). The numerical results of these three DA-based methods and those of an integration based on importance sampling are compared. It is found that better CME estimates are obtained by using DA, and the IEnKS method appears to be the best among the DA methods. Differences among the performance of the three DA-based methods are discussed as a function of model properties. Finally, the methodology is implemented for parameter estimation and for event attribution.

  16. Robust estimation procedure in panel data model

    Energy Technology Data Exchange (ETDEWEB)

    Shariff, Nurul Sima Mohamad [Faculty of Science and Technology, Universiti Sains Islam Malaysia (USIM), 71800, Nilai, Negeri Sembilan (Malaysia)]; Hamzah, Nor Aishah [Institute of Mathematical Sciences, Universiti Malaya, 50630, Kuala Lumpur (Malaysia)]

    2014-06-19

    Panel data modeling has received great attention in econometric research recently. This is due to the availability of data sources and the interest in studying cross-sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross-sectional dependence and outliers. Even though there are a few methods that take into consideration the presence of cross-sectional dependence in the panel, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.

  17. PARAMETER ESTIMATION IN BREAD BAKING MODEL

    OpenAIRE

    Hadiyanto Hadiyanto; AJB van Boxtel

    2012-01-01

    Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure, i.e. first the heat and mass transfer related parameters, then the parameters related to product transformations and finally pro...

  18. Adaptive Covariance Estimation with model selection

    CERN Document Server

    Biscay, Rolando; Loubes, Jean-Michel

    2012-01-01

    We provide in this paper a fully adaptive penalized procedure to select a covariance among a collection of models, observing i.i.d. replications of the process at fixed observation points. For this we generalize previous results of Bigot et al. and propose to use a data-driven penalty to obtain an oracle inequality for the estimator. We prove that this method is an extension to the matricial regression model of the work by Baraud.

  19. Error Estimates of Theoretical Models: a Guide

    CERN Document Server

    Dobaczewski, J; Reinhard, P -G

    2014-01-01

    This guide offers suggestions/insights on uncertainty quantification of nuclear structure models. We discuss a simple approach to statistical error estimates, strategies to assess systematic errors, and show how to uncover inter-dependencies by correlation analysis. The basic concepts are illustrated through simple examples. By providing theoretical error bars on predicted quantities and using statistical methods to study correlations between observables, theory can significantly enhance the feedback between experiment and nuclear modeling.

  20. Estimating an Activity Driven Hidden Markov Model

    OpenAIRE

    Meyer, David A.; Shakeel, Asif

    2015-01-01

    We define a Hidden Markov Model (HMM) in which each hidden state has time-dependent activity levels that drive transitions and emissions, and show how to estimate its parameters. Our construction is motivated by the problem of inferring human mobility on sub-daily time scales from, for example, mobile phone records.
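
    Estimating any HMM's parameters rests on evaluating the likelihood of an observation sequence, which the scaled forward recursion provides. A sketch of the standard time-homogeneous forward pass in Python/NumPy; in the activity-driven model of the paper, the transition and emission matrices would additionally depend on the time-varying activity levels:

        import numpy as np

        def forward_loglik(pi, A, B, obs):
            """Log-likelihood of obs under a discrete HMM (scaled forward pass).
            pi: initial distribution, A: transitions, B: emission probabilities."""
            alpha = pi * B[:, obs[0]]
            s = alpha.sum()
            loglik = np.log(s)
            alpha = alpha / s
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
                s = alpha.sum()
                loglik += np.log(s)          # accumulate scaling factors
                alpha = alpha / s            # rescale to avoid underflow
            return loglik

        pi = np.array([0.6, 0.4])
        A = np.array([[0.7, 0.3], [0.2, 0.8]])
        B = np.array([[0.9, 0.1], [0.3, 0.7]])
        print(forward_loglik(pi, A, B, [0, 1, 1, 0]))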

  1. Solar energy estimation using REST2 model

    Directory of Open Access Journals (Sweden)

    M. Rizwan, Majid Jamil, D. P. Kothari

    2010-03-01

    The network of solar energy measuring stations is relatively sparse throughout the world. In India, only IMD (India Meteorological Department) Pune provides data, and only for a few stations, which is considered the base data for research purposes. However, hourly data of measured energy are not available, even for those stations where measurement has already been done. Due to the lack of hourly measured data, estimation of solar energy at the earth's surface is required. In the proposed study, hourly solar energy is estimated at four important Indian stations, namely New Delhi, Mumbai, Pune and Jaipur, keeping in mind their different climatic conditions. For this study, REST2 (Reference Evaluation of Solar Transmittance, 2 bands), a high-performance parametric model for the estimation of solar energy, is used. The REST2 derivation uses the same two-band scheme as CPCR2 (Code for Physical Computation of Radiation, 2 bands), but CPCR2 does not include NO2 absorption, which is an important parameter for estimating solar energy. In this study, using ground measurements during 1986-2000 as reference, a MATLAB program is written to evaluate the performance of the REST2 model at the four proposed stations. The solar energy at the four stations throughout the year is estimated and compared with CPCR2. The results obtained from the REST2 model show good agreement with the measured data on a horizontal surface. The study reveals that the REST2 model performs better and yields the best results compared to the other existing models under cloudless skies for Indian climatic conditions.

  2. ICA Model Order Estimation Using Clustering Method

    Directory of Open Access Journals (Sweden)

    P. Sovka

    2007-12-01

    In this paper a novel approach for independent component analysis (ICA) model order estimation of movement electroencephalogram (EEG) signals is described. The application is targeted at brain-computer interface (BCI) EEG preprocessing. Previous work has shown that it is possible to decompose EEG into movement-related and non-movement-related independent components (ICs). Selecting only movement-related ICs might increase the BCI EEG classification score. The real number of independent sources in the brain is an important parameter of the preprocessing step. Previously, we used principal component analysis (PCA) to estimate the number of independent sources. However, PCA estimates only the number of uncorrelated, not independent, components, ignoring the higher-order signal statistics. In this work, we use another approach: selection of highly correlated ICs from several ICA runs. The ICA model order estimation is done at significance level α = 0.05, and the model order is more or less dependent on the ICA algorithm and its parameters.
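
    The PCA baseline mentioned in the abstract estimates the model order from the explained variance of the data covariance, ignoring higher-order statistics. A minimal Python/NumPy sketch for a channels-by-samples data matrix; the 99% variance threshold is an illustrative choice, not the paper's setting:

        import numpy as np

        def pca_order(X, var_threshold=0.99):
            """Number of principal components explaining var_threshold of variance.
            X: channels x samples data matrix."""
            Xc = X - X.mean(axis=1, keepdims=True)
            s = np.linalg.svd(Xc, compute_uv=False)   # singular values
            explained = np.cumsum(s**2) / np.sum(s**2)
            return int(np.searchsorted(explained, var_threshold)) + 1

        rng = np.random.default_rng(0)
        sources = rng.standard_normal((3, 1000))      # 3 latent sources
        mixing = rng.standard_normal((8, 3))          # 8 observed channels
        X = mixing @ sources + 0.01 * rng.standard_normal((8, 1000))
        print(pca_order(X))                           # close to 3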

  3. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    Query optimizers rely on statistical models that succinctly describe the underlying data. Models are used to derive cardinality estimates for intermediate relations, which in turn guide the optimizer to choose the best query execution plan. The quality of the resulting plan is highly dependent on these estimates, which are conventionally computed as products of the selectivities of the constituent predicates. However, this independence assumption is more often than not wrong, and is considered to be the most common cause of sub-optimal query execution plans chosen by modern query optimizers. We take a step towards a principled and practical approach to performing cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss ...
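
    The cost of the independence assumption is easy to demonstrate: for correlated attributes, even a single small two-dimensional distribution, the kind of factor built here from graphical models, estimates a conjunctive selectivity far better than the product of per-predicate selectivities. A toy Python/NumPy sketch with invented data:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 100_000
        a = rng.integers(0, 10, n)
        b = (a + rng.integers(0, 2, n)) % 10          # b strongly depends on a

        true = np.mean((a == 3) & (b == 3))           # actual selectivity
        indep = np.mean(a == 3) * np.mean(b == 3)     # independence assumption

        # A small 2-D distribution captures the correlation directly.
        joint, _, _ = np.histogram2d(a, b, bins=10, range=((0, 10), (0, 10)))
        joint /= joint.sum()
        print(true, indep, joint[3, 3])               # joint[3, 3] tracks true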

  4. High-dimensional model estimation and model selection

    CERN Document Server

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
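
    The LASSO named in the abstract can be computed with a few lines of proximal gradient descent (ISTA): gradient steps on the squared loss followed by soft-thresholding. A minimal Python/NumPy sketch; the step size comes from the spectral norm of X, and the penalty lam is an illustrative input:

        import numpy as np

        def lasso_ista(X, y, lam, iters=500):
            """Minimize (1/2n)||y - X b||^2 + lam * ||b||_1 by ISTA."""
            n, p = X.shape
            L = np.linalg.norm(X, ord=2) ** 2 / n   # gradient Lipschitz constant
            beta = np.zeros(p)
            for _ in range(iters):
                grad = X.T @ (X @ beta - y) / n
                z = beta - grad / L
                beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
            return beta

        rng = np.random.default_rng(0)
        X = rng.standard_normal((100, 200))          # p >> n setting
        beta_true = np.zeros(200)
        beta_true[:3] = [2.0, -1.5, 1.0]
        y = X @ beta_true + 0.1 * rng.standard_normal(100)
        print(np.nonzero(lasso_ista(X, y, lam=0.1))[0][:5])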

  5. Parameter estimation, model reduction and quantum filtering

    Science.gov (United States)

    Chase, Bradley A.

    This thesis explores the topics of parameter estimation and model reduction in the context of quantum filtering. The last is a mathematically rigorous formulation of continuous quantum measurement, in which a stream of auxiliary quantum systems is used to infer the state of a target quantum system. Fundamental quantum uncertainties appear as noise which corrupts the probe observations and therefore must be filtered in order to extract information about the target system. This is analogous to the classical filtering problem in which techniques of inference are used to process noisy observations of a system in order to estimate its state. Given the clear similarities between the two filtering problems, I devote the beginning of this thesis to a review of classical and quantum probability theory, stochastic calculus and filtering. This allows for a mathematically rigorous and technically adroit presentation of the quantum filtering problem and solution. Given this foundation, I next consider the related problem of quantum parameter estimation, in which one seeks to infer the strength of a parameter that drives the evolution of a probe quantum system. By embedding this problem in the state estimation problem solved by the quantum filter, I present the optimal Bayesian estimator for a parameter when given continuous measurements of the probe system to which it couples. For cases when the probe takes on a finite number of values, I review a set of sufficient conditions for asymptotic convergence of the estimator. For a continuous-valued parameter, I present a computational method called quantum particle filtering for practical estimation of the parameter. Using these methods, I then study the particular problem of atomic magnetometry and review an experimental method for potentially reducing the uncertainty in the estimate of the magnetic field beyond the standard quantum limit. The technique involves double-passing a probe laser field through the atomic system, giving ...

  6. Extreme gust wind estimation using mesoscale modeling

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Kruger, Andries

    2014-01-01

    Currently, the existing estimation of the extreme gust wind, e.g. the 50-year winds of 3 s values, in the IEC standard is based on a statistical model to convert the 1:50-year wind values from the 10 min resolution. This statistical model assumes a Gaussian process that satisfies the classical ... through turbulent eddies. This process is modeled using the mesoscale Weather Research and Forecasting (WRF) model. The gust at the surface is calculated as the largest winds over a layer where the averaged turbulence kinetic energy is greater than the averaged buoyancy force. The experiments have been done for Denmark and two areas in South Africa. For South Africa, the extreme gust atlases were created from the output of the mesoscale modelling using Climate Forecasting System Reanalysis (CFSR) forcing for the period 1998-2010. The extensive measurements including turbulence ...

  7. Entropy Based Modelling for Estimating Demographic Trends.

    Directory of Open Access Journals (Sweden)

    Guoqi Li

    In this paper, an entropy-based method is proposed to forecast the demographical changes of countries. We formulate the estimation of future demographical profiles as a constrained optimization problem, anchored on the empirically validated assumption that the entropy of the age distribution increases in time. The procedure of the proposed method involves three stages, namely: (1) prediction of the age distribution of a country's population based on an "age-structured population model"; (2) estimation of the age distribution of each individual household size with an entropy-based formulation based on an "individual household size model"; and (3) estimation of the number of each household size based on a "total household size model". The last stage is achieved by projecting the age distribution of the country's population (obtained in stage 1) onto the age distributions of individual household sizes (obtained in stage 2). The effectiveness of the proposed method is demonstrated by feeding in real-world data, and it is general and versatile enough to be extended to other time-dependent demographic variables.

  8. Model-based estimation of individual fitness

    Science.gov (United States)

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw & Caswell, 1996).

  9. Hidden Markov models estimation and control

    CERN Document Server

    Elliott, Robert J; Moore, John B

    1995-01-01

    As more applications are found, interest in Hidden Markov Models continues to grow. Following comments and feedback from colleagues, students and others working with Hidden Markov Models, the corrected third printing of this volume contains clarifications, improvements and some new material, including results on smoothing for linear Gaussian dynamics. In Chapter 2 the derivations of the basic filters related to the Markov chain are each presented explicitly, rather than as special cases of one general filter. Furthermore, equations for smoothed estimates are given. The dynamics for the Kalman filter ...

  10. On Bayes linear unbiased estimation of estimable functions for the singular linear model

    Institute of Scientific and Technical Information of China (English)

    ZHANG Weiping; WEI Laisheng

    2005-01-01

    The unique Bayes linear unbiased estimator (Bayes LUE) of estimable functions is derived for the singular linear model. The superiority of the Bayes LUE over the ordinary best linear unbiased estimator is investigated under the mean square error matrix (MSEM) criterion.

  11. Model Selection Through Sparse Maximum Likelihood Estimation

    CERN Document Server

    Banerjee, Onureena; D'Aspremont, Alexandre

    2007-01-01

    We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...

  12. Bayesian Estimation of a Mixture Model

    Directory of Open Access Journals (Sweden)

    Ilhem Merah

    2015-05-01

    We present the properties of a bathtub-curve reliability model having both sufficient adaptability and a minimal number of parameters, introduced by Idée and Pierrat (2010). This one is a mixture of a Gamma distribution G(2, 1/θ) and a new distribution L(θ). We are interested in the Bayesian estimation of the parameters and survival function of this model with a squared-error loss function and non-informative prior, using the approximations of Lindley (1980) and Tierney and Kadane (1986). Using a statistical sample of 60 failure data relative to a technical device, we illustrate the results derived. Based on a simulation study, comparisons are made between these two methods and the maximum likelihood method for this two-parameter model.

  13. Hierarchical Boltzmann simulations and model error estimation

    Science.gov (United States)

    Torrilhon, Manuel; Sarna, Neeraj

    2017-08-01

    A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, but a subsequent refinement allows one to successively improve the result towards the complete Boltzmann result. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation for a proof of concept of such a framework. All representations of the hierarchy are rotationally invariant and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the relevance of stability of boundary conditions on curved domains. The hierarchical nature of the method also allows us to provide model error estimates by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.

  14. Estimation in Dirichlet random effects models

    CERN Document Server

    Kyung, Minjung; Casella, George. doi:10.1214/09-AOS731

    2010-01-01

    We develop a new Gibbs sampler for a linear mixed model with a Dirichlet process random effect term, which is easily extended to a generalized linear mixed model with a probit link function. Our Gibbs sampler exploits the properties of the multinomial and Dirichlet distributions, and is shown to be an improvement, in terms of operator norm and efficiency, over other commonly used MCMC algorithms. We also investigate methods for the estimation of the precision parameter of the Dirichlet process, finding that maximum likelihood may not be desirable, but a posterior mode is a reasonable approach. Examples are given to show how these models perform on real data. Our results complement both the theoretical basis of the Dirichlet process nonparametric prior and the computational work that has been done to date.

  15. A Biomechanical Modeling Guided CBCT Estimation Technique.

    Science.gov (United States)

    Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing

    2017-02-01

    Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks.

  16. Adaptive Estimation of Heteroscedastic Money Demand Model of Pakistan

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam

    2007-07-01

    Full Text Available For the estimation of the money demand model of Pakistan, the money supply (M1) shows heteroscedasticity of unknown form. To estimate such a model, we compare two adaptive estimators, namely the nonparametric kernel estimator and the nearest neighbour regression estimator, with the ordinary least squares estimator, and show the attractive performance of the adaptive estimators. These comparisons are made on the basis of the standard errors of the estimated coefficients, the standard error of regression, the Akaike Information Criterion (AIC) value, and the Durbin-Watson statistic for autocorrelation. We further show that the nearest neighbour regression estimator performs better than the nonparametric kernel estimator.
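
    The core idea of such adaptive estimation can be sketched in a few lines: estimate the unknown variance function nonparametrically from squared OLS residuals, then reweight. The sketch below is a minimal illustration on simulated data, not the paper's exact estimator; the Gaussian kernel, the bandwidth h, and the data-generating process are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated heteroscedastic data: noise variance grows with x,
# mimicking heteroscedasticity of unknown form.
n = 200
x = np.sort(rng.uniform(0, 10, n))
y = 1.0 + 0.5 * x + rng.normal(0, 0.2 + 0.1 * x)

X = np.column_stack([np.ones(n), x])

# Step 1: ordinary least squares.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_ols

# Step 2: Nadaraya-Watson kernel smoothing of squared residuals
# gives a nonparametric estimate of the variance function.
h = 1.0  # bandwidth (hypothetical choice)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
var_hat = (K @ resid**2) / K.sum(axis=1)

# Step 3: weighted least squares with estimated inverse-variance weights.
w = 1.0 / var_hat
sw = np.sqrt(w)
beta_adaptive, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)

print("OLS:     ", beta_ols)
print("Adaptive:", beta_adaptive)
```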

  17. Estimation of Model Parameters for Steerable Needles

    Science.gov (United States)

    Park, Wooram; Reed, Kyle B.; Okamura, Allison M.; Chirikjian, Gregory S.

    2010-01-01

    Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%. PMID:21643451

  18. Estimation of Model Parameters for Steerable Needles.

    Science.gov (United States)

    Park, Wooram; Reed, Kyle B; Okamura, Allison M; Chirikjian, Gregory S

    2010-01-01

    Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%.
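
    The covariance-matching idea can be illustrated with a toy planar needle model. The paper derives closed-form covariance equations; the sketch below instead matches a Monte Carlo covariance to "observed" data by grid search, and the step length, trial count, and noise grid are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_tips(noise_std, trials=500, steps=100, step_len=1.0):
    """Planar toy needle: the heading accumulates Gaussian noise each step."""
    tips = np.empty((trials, 2))
    for t in range(trials):
        theta = rng.normal(0.0, noise_std, steps).cumsum()
        tips[t] = np.array([np.cos(theta).sum(), np.sin(theta).sum()]) * step_len
    return tips

# "Experimental" tip positions generated with a hidden true noise level.
true_noise = 0.02
obs_cov = np.cov(simulate_tips(true_noise).T)

# Match the simulated covariance to the observed one over a grid,
# in the spirit of the covariance-matching estimation described above.
grid = np.linspace(0.005, 0.05, 10)
errors = [np.linalg.norm(np.cov(simulate_tips(s).T) - obs_cov) for s in grid]
print("estimated noise std:", grid[int(np.argmin(errors))])
```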

  19. PARAMETER ESTIMATION IN BREAD BAKING MODEL

    Directory of Open Access Journals (Sweden)

    Hadiyanto Hadiyanto

    2012-05-01

    Full Text Available Bread product quality is highly dependent on the baking process. A model for the development of product quality, obtained from quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C, alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first the heat and mass transfer related parameters, then the parameters related to product transformations, and finally the product quality parameters. There was fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input matters most for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted the behavior under dynamic convective operation and under combined convective and microwave operation well. It is expected that the agreement between the model and the baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels.

  20. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing computational power, several complex optimization algorithms have emerged, but none of them yields a unique, definitively best parameter vector. The parameters of fitted hydrological models depend upon the input data, whose quality cannot be assured since there may be measurement errors in both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on the parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on the parameters was analysed. It was found that measurement errors have a significant effect on the best performing parameter vector: the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of a set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used in this study) for each parameter vector. Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany, using the conceptual HBV model.
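
    Half-space depth is expensive to compute exactly in more than two dimensions, so a common approximation uses random projections. The sketch below selects "deep" (robust) parameter vectors from a cloud of candidates; the candidate cloud here is random noise purely for illustration, not HBV parameters.

```python
import numpy as np

rng = np.random.default_rng(10)

def halfspace_depth(point, cloud, n_dir=2000):
    """Approximate Tukey half-space depth of `point` within `cloud`
    via random projections (exact computation is costly for d > 2)."""
    d = cloud.shape[1]
    dirs = rng.normal(size=(n_dir, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj_cloud = cloud @ dirs.T            # (n_points, n_dir)
    proj_point = point @ dirs.T            # (n_dir,)
    frac_above = (proj_cloud >= proj_point).mean(axis=0)
    return min(frac_above.min(), (1.0 - frac_above).min())

# Hypothetical cloud of well-performing parameter vectors; deep points
# are "robust" in the sense of the geometric criterion described above.
params = rng.normal(0, 1, (500, 3))
depths = np.array([halfspace_depth(p, params) for p in params])
robust_set = params[depths >= np.quantile(depths, 0.9)]
print("deepest parameter vector:", params[np.argmax(depths)])
```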

  1. Shape parameter estimate for a glottal model without time position

    OpenAIRE

    Degottex, Gilles; Roebel, Axel; Rodet, Xavier

    2009-01-01

    IRCAM internal reference: Degottex09a; National audience. From a recorded speech signal, we propose to estimate a shape parameter of a glottal model without estimating its time position. Indeed, the literature usually proposes to estimate the time position first (e.g., by detecting Glottal Closure Instants). The vocal-tract filter estimate is expressed as a minimum-phase envelope estimation after removing the glottal model and a standard lip radiation model. Since this filter is mainly b...

  2. AMEM-ADL Polymer Migration Estimation Model User's Guide

    Science.gov (United States)

    The user's guide for the Arthur D. Little Polymer Migration Estimation Model (AMEM) provides information on how the model estimates the fraction of a chemical additive that diffuses through polymeric matrices.

  3. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... distribution. Methods from structural reliability analysis are used to model the uncertainties and to assess the reliability for fatigue failure. Maximum Likelihood and Least Square estimation techniques are used to estimate fatigue life distribution parameters....
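
    A minimal sketch of the last estimation step: fitting Weibull parameters to fatigue lives by Maximum Likelihood and by Least Squares on the linearized CDF. The lives are sampled directly here for illustration; in the paper they would come from the physics-of-failure damage model (Rainflow counting plus Miner's rule).

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(2)

# Hypothetical fatigue lives (cycles to failure) of solder joints.
lives = weibull_min.rvs(c=2.5, scale=1e6, size=50, random_state=rng)

# Maximum-likelihood fit of the two-parameter Weibull distribution.
shape, loc, scale = weibull_min.fit(lives, floc=0)
print(f"ML shape={shape:.2f}, scale={scale:.3g}")

# Least-squares alternative: regress on the linearized Weibull CDF,
# ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta), with median-rank positions.
lives_sorted = np.sort(lives)
n = len(lives_sorted)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median ranks
xs = np.log(lives_sorted)
ys = np.log(-np.log(1.0 - F))
b, a = np.polyfit(xs, ys, 1)                   # slope = shape
print(f"LS shape={b:.2f}, scale={np.exp(-a / b):.3g}")
```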

  4. STRONGLY CONSISTENT ESTIMATION FOR A MULTIVARIATE LINEAR RELATIONSHIP MODEL WITH ESTIMATED COVARIANCES MATRIX

    Institute of Scientific and Technical Information of China (English)

    Yee LEUNG; WU Kefa; DONG Tianxin

    2001-01-01

    In this paper, a multivariate linear functional relationship model, in which the covariance matrix of the observational errors is not restricted, is considered. The parameter estimation of this model is discussed, and the estimators are shown to be strongly consistent under some mild conditions on the incidental parameters.

  5. Modeling Uncertainty when Estimating IT Projects Costs

    OpenAIRE

    Winter, Michel; Mirbel, Isabelle; Crescenzo, Pierre

    2014-01-01

    In the current economic context, optimizing projects' cost is an obligation for a company to remain competitive in its market. Introducing statistical uncertainty in cost estimation is a good way to tackle the risk of going too far while minimizing the project budget: it allows the company to determine the best possible trade-off between estimated cost and acceptable risk. In this paper, we present new statistical estimators derived from the way IT companies estimate the projects' costs. In t...

  6. Benefit Estimation Model for Tourist Spaceflights

    Science.gov (United States)

    Goehlich, Robert A.

    2003-01-01

    It is believed that the only potential means for significantly reducing recurrent launch costs, and thereby stimulating human space colonization, is to make the launcher reusable, to increase its reliability, and to make it suitable for new markets such as mass space tourism. But space projects with such long-range aspects are very difficult to finance, because even politicians would like to see a reasonable benefit during their term in office, so that they can justify the investment to the taxpayer. This forces planners to use benefit models instead of intuitive judgement to convince sceptical decision-makers to support new investments in space. Benefit models provide insights into complex relationships and force a better definition of goals. A new approach is introduced in the paper that makes it possible to estimate the benefits to be expected from a new space venture. The main reason why humans should explore space is determined in this study to be "improving the quality of life". This main objective is broken down into sub-objectives, which can be analysed with respect to different interest groups. Such interest groups are the operator of a space transportation system, the passenger, and the government. For example, the operator is strongly interested in profit, the passenger mainly in amusement, and the government primarily in self-esteem and prestige. This leads to different individual satisfaction levels, which can be used in the optimisation process for reusable launch vehicles.

  7. USER STORY SOFTWARE ESTIMATION:A SIMPLIFICATION OF SOFTWARE ESTIMATION MODEL WITH DISTRIBUTED EXTREME PROGRAMMING ESTIMATION TECHNIQUE

    OpenAIRE

    Ridi Ferdiana; Paulus Insap Santoso; Lukito Edi Nugroho; Ahmad Ashari

    2011-01-01

    Software estimation is an area of software engineering concerned with the identification, classification and measurement of features of software that affect the cost of developing and sustaining computer programs [19]. Measuring software through estimation serves to gauge the complexity of the software, estimate the required human resources, and gain better visibility of execution and the process model. There are many software estimation techniques that work sufficiently well in certain conditions or s...

  8. USER STORY SOFTWARE ESTIMATION:A SIMPLIFICATION OF SOFTWARE ESTIMATION MODEL WITH DISTRIBUTED EXTREME PROGRAMMING ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Ridi Ferdiana

    2011-01-01

    Full Text Available Software estimation is an area of software engineering concerned with the identification, classification and measurement of features of software that affect the cost of developing and sustaining computer programs [19]. Measuring software through estimation serves to gauge the complexity of the software, estimate the required human resources, and gain better visibility of execution and the process model. There are many software estimation techniques that work sufficiently well under certain conditions or at certain steps in software engineering, for example counting lines of code, function points, COCOMO, or use case points. This paper proposes another estimation technique called Distributed eXtreme Programming Estimation (DXP Estimation). DXP estimation provides a basic technique for teams that use the eXtreme Programming method in onsite or distributed development. To the authors' knowledge, this is the first estimation technique applied to an agile method in eXtreme Programming.

  9. On estimation of survival function under random censoring model

    Institute of Scientific and Technical Information of China (English)

    JIANG; Jiancheng(蒋建成); CHENG; Bo(程博); WU; Xizhi(吴喜之)

    2002-01-01

    We study an estimator of the survival function under the random censoring model. A Bahadur-type representation of the estimator is obtained and an asymptotic expression for its mean squared error is given, which leads to the consistency and asymptotic normality of the estimator. A data-driven local bandwidth selection rule for the estimator is proposed. It is worth noting that the estimator is consistent at left boundary points, which contrasts with the cases of density and hazard rate estimation. A Monte Carlo comparison of different estimators is made, and it appears that the proposed data-driven estimators have certain advantages over the common Kaplan-Meier estimator.
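
    For reference, the baseline Kaplan-Meier product-limit estimator that such proposals are compared against can be implemented in a few lines; this is the textbook estimator, not the paper's smoothed data-driven one, and the sample data are invented.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function.
    times: observed times; events: 1 = failure observed, 0 = censored."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    surv, s = [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)              # still under observation
        d = np.sum((times == t) & (events == 1))  # failures at time t
        s *= 1.0 - d / at_risk
        surv.append((t, s))
    return surv

# Small example with right censoring (0 marks a censored observation).
print(kaplan_meier([2, 3, 3, 5, 7, 8], [1, 1, 0, 1, 0, 1]))
```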

  10. Consistent estimators in random censorship semiparametric models

    Institute of Scientific and Technical Information of China (English)

    王启华

    1996-01-01

    For the fixed design regression model, when the responses Y_i are randomly censored on the right, estimators of the unknown parameter and the regression function g from censored observations are defined for the two cases where the censoring distribution is known and unknown, respectively. Moreover, sufficient conditions are established under which these estimators are strongly consistent and pth (p>2) mean consistent.

  11. Estimation of Wind Turbulence Using Spectral Models

    DEFF Research Database (Denmark)

    Soltani, Mohsen; Knudsen, Torben; Bak, Thomas

    2011-01-01

    The production and loading of wind farms are significantly influenced by the turbulence of the flowing wind field. Estimation of turbulence allows us to optimize the performance of the wind farm. Turbulence estimation is, however, highly challenging due to the chaotic behavior of the wind. In thi...

  12. A Note on Structural Equation Modeling Estimates of Reliability

    Science.gov (United States)

    Yang, Yanyun; Green, Samuel B.

    2010-01-01

    Reliability can be estimated using structural equation modeling (SEM). Two potential problems with this approach are that estimates may be unstable with small sample sizes and biased with misspecified models. A Monte Carlo study was conducted to investigate the quality of SEM estimates of reliability by themselves and relative to coefficient…

  13. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  14. Parameter estimation of hidden periodic model in random fields

    Institute of Scientific and Technical Information of China (English)

    何书元

    1999-01-01

    The two-dimensional hidden periodic model is an important model in random fields, used in two-dimensional signal processing, prediction and spectral analysis. A method for estimating the parameters of the model is designed, and the strong consistency of the estimators is proved.

  15. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  16. Improved variance estimation of maximum likelihood estimators in stable first-order dynamic regression models

    NARCIS (Netherlands)

    Kiviet, J.F.; Phillips, G.D.A.

    2014-01-01

    In dynamic regression models conditional maximum likelihood (least-squares) coefficient and variance estimators are biased. Using expansion techniques an approximation is obtained to the bias in variance estimation yielding a bias corrected variance estimator. This is achieved for both the standard

  17. Estimation of Stochastic Volatility Models by Nonparametric Filtering

    DEFF Research Database (Denmark)

    Kanaya, Shin; Kristensen, Dennis

    2016-01-01

    A two-step estimation method for stochastic volatility models is proposed: in the first step, we nonparametrically estimate the (unobserved) instantaneous volatility process; in the second step, standard estimation methods for fully observed diffusion processes are employed, but with the filtered/estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases and variances due to the first-step estimation, but under regularity conditions we show that these vanish asymptotically and our estimators inherit the asymptotic properties of the infeasible estimators based on observations of the volatility process. A simulation study examines the finite-sample properties...

  18. Mathematical model of transmission network static state estimation

    Directory of Open Access Journals (Sweden)

    Ivanov Aleksandar

    2012-01-01

    Full Text Available In this paper the characteristics and capabilities of a power transmission network static state estimator are presented. The solution process for the mathematical model, including the measurement errors and their processing, is developed. To evaluate the difference between the general state estimation model and the fast decoupled state estimation model, both models are applied to an example and the derived results are compared.

  19. Estimation in the polynomial errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Estimators are presented for the coefficients of the polynomial errors-in-variables (EV) model when replicated observations are taken at some experimental points. These estimators are shown to be strongly consistent under mild conditions.

  20. Bayesian Estimation of Categorical Dynamic Factor Models

    Science.gov (United States)

    Zhang, Zhiyong; Nesselroade, John R.

    2007-01-01

    Dynamic factor models have been used to analyze continuous time series behavioral data. We extend 2 main dynamic factor model variations--the direct autoregressive factor score (DAFS) model and the white noise factor score (WNFS) model--to categorical DAFS and WNFS models in the framework of the underlying variable method and illustrate them with…

  1. Simultaneous estimation of parameters in the bivariate Emax model.

    Science.gov (United States)

    Magnusdottir, Bergrun T; Nyquist, Hans

    2015-12-10

    In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation.
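
    The single-response Emax building block can be fitted as below. The paper's contribution is the joint (system) estimation of both responses and the (co)variance parameters, which this equation-by-equation sketch deliberately omits; doses, responses, and starting values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def emax(dose, e0, emax_, ed50):
    """Standard Emax dose-response curve."""
    return e0 + emax_ * dose / (ed50 + dose)

rng = np.random.default_rng(3)
dose = np.array([0, 5, 10, 25, 50, 100, 200], float)
# Hypothetical responses around a true curve (e0=1, Emax=10, ED50=30).
resp = emax(dose, 1.0, 10.0, 30.0) + rng.normal(0, 0.4, dose.size)

# Nonlinear least squares for one response equation.
params, cov = curve_fit(emax, dose, resp, p0=[0.0, 5.0, 20.0])
print("estimates:", params)
print("std errors:", np.sqrt(np.diag(cov)))
```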

  2. A Prototypical Model for Estimating High Tech Navy Recruiting Markets

    Science.gov (United States)

    1991-12-01

    [OCR fragments of the source thesis's footnotes and bibliography; the clearly recoverable citation is: Gujarati, Damodar N., Basic Econometrics, Second Edition, McGraw-Hill Book Company, New York, N.Y., 1988. A 1990 McGraw-Hill text on probability, logit, and probit models and a chapter heading, "V. Models Estimation: A. Model I Estimation", are also visible.]

  3. Identification and Estimation of Exchange Rate Models with Unobservable Fundamentals

    NARCIS (Netherlands)

    Chambers, M.J.; McCrorie, J.R.

    2004-01-01

    This paper is concerned with issues of model specification, identification, and estimation in exchange rate models with unobservable fundamentals. We show that the model estimated by Gardeazabal, Regúlez and Vázquez (International Economic Review, 1997) is not identified and demonstrate how to spec

  4. Bayesian approach to decompression sickness model parameter estimation.

    Science.gov (United States)

    Howle, L E; Weber, P W; Nichols, J M

    2017-03-01

    We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
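
    The contrast between the two approaches can be made concrete with a deliberately simplified one-parameter risk model (not one of the paper's decompression models): maximum likelihood returns a single best-fit value, while a grid posterior under a flat prior yields a credible interval for the parameter itself.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical dive-exposure data: P(DCS | x) = 1 - exp(-b*x),
# a simple stand-in for a probabilistic decompression model.
b_true = 0.08
x = rng.uniform(0.5, 10.0, 200)          # exposure severity
y = rng.random(200) < 1.0 - np.exp(-b_true * x)

def log_lik(b):
    p = 1.0 - np.exp(-b * x)
    # log(1 - p) simplifies to -b*x for the non-event observations
    return np.sum(np.where(y, np.log(p), -b * x))

grid = np.linspace(0.01, 0.3, 1000)
ll = np.array([log_lik(b) for b in grid])

# Maximum likelihood: a single best-fit value.
b_mle = grid[np.argmax(ll)]

# Bayesian: flat prior on the grid -> normalized posterior and an
# equal-tailed 95% credible interval for the parameter itself.
post = np.exp(ll - ll.max())
post /= post.sum()
cdf = np.cumsum(post)
lo, hi = grid[np.searchsorted(cdf, 0.025)], grid[np.searchsorted(cdf, 0.975)]
print(f"MLE {b_mle:.3f}; 95% credible interval ({lo:.3f}, {hi:.3f})")
```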

  5. Estimation of Boundary Conditions for Coastal Models,

    Science.gov (United States)

    1974-09-01

    equation: ∫ h(τ) y(t − τ) dτ (3). The solution to Eq. (3) may be obtained by Fourier transformation, because the covariance function and the spectral density function form a Fourier transform pair. ... To form the cross-spectral density function estimate by a numerical Fourier transform, the even and odd parts of the cross-covariance function are determined by A(k) = ½[γ_xy(k) + γ_xy(−k)] (5) and B(k) = ½[γ_xy(k) − γ_xy(−k)] (6), from which the co-spectral density function is estimated: C(f) = 2T[A(0)...

  6. Parameter estimation and error analysis in environmental modeling and computation

    Science.gov (United States)

    Kalmaz, E. E.

    1986-01-01

    A method for the estimation of parameters and error analysis in the development of nonlinear models for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-squares parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for the association of error with experimentally observed data.

  7. On Frequency Domain Models for TDOA Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Nielsen, Jesper Kjær; Christensen, Mads Græsbøll

    2015-01-01

    ... of a much more general method. In this connection, we establish the conditions under which the cross-correlation method is a statistically efficient estimator. One of the conditions is that the source signal is periodic with a known fundamental frequency of 2π/N radians per sample, where N is the number...

  8. Robust Estimation and Forecasting of the Capital Asset Pricing Model

    NARCIS (Netherlands)

    G. Bian (Guorui); M.J. McAleer (Michael); W-K. Wong (Wing-Keung)

    2013-01-01

    textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more

  9. Robust Estimation and Forecasting of the Capital Asset Pricing Model

    NARCIS (Netherlands)

    G. Bian (Guorui); M.J. McAleer (Michael); W-K. Wong (Wing-Keung)

    2010-01-01

    textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more

  10. Performance of Random Effects Model Estimators under Complex Sampling Designs

    Science.gov (United States)

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  11. PARAMETER ESTIMATION IN LINEAR REGRESSION MODELS FOR LONGITUDINAL CONTAMINATED DATA

    Institute of Scientific and Technical Information of China (English)

    Qian Weimin; Li Yumei

    2005-01-01

    The parameter estimation and the coefficient of contamination for regression models with repeated measures are studied when the response variables are contaminated by another random variable sequence. Under suitable conditions it is proved that the estimators established in the paper are strongly consistent.

  12. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    Finite mixture models are mixture models with finite dimension. They provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent findings as the sample size increases to infinity. Maximum likelihood estimation is therefore used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
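
    A minimal EM implementation of maximum likelihood for a two-component normal mixture, run on synthetic data, is sketched below; the component parameters, the initialization, and the fixed iteration count are assumptions, and a real application would monitor the log-likelihood for convergence.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical data from two latent regimes.
data = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(1.5, 1.0, 700)])

# EM algorithm for a two-component normal mixture (maximum likelihood).
w, mu, sd = np.array([0.5, 0.5]), np.array([-2.0, 2.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior responsibilities of each component.
    dens = w * np.exp(-0.5 * ((data[:, None] - mu) / sd) ** 2) \
           / (sd * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: reweighted mixing proportions, means, and variances.
    nk = r.sum(axis=0)
    w = nk / len(data)
    mu = (r * data[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (data[:, None] - mu) ** 2).sum(axis=0) / nk)

print("weights:", w, "means:", mu, "sds:", sd)
```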

  13. Insights on the role of accurate state estimation in coupled model parameter estimation by a conceptual climate model study

    Science.gov (United States)

    Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui

    2017-03-01

    The uncertainties in the values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto the model parameters. The signal-to-noise ratio of the error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples a stochastic atmosphere and a slowly varying ocean, this study examines the sensitivity of the state-parameter covariance to the accuracy of the estimated model states in different components of the coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere, with its chaotic nature, is the major source of inaccuracy in the estimated state-parameter covariance. Enhancing the estimation accuracy of atmospheric states is therefore very important for the success of coupled model parameter estimation, especially for parameters in air-sea interaction processes. The impact of the chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides a guideline for when real observations are used to optimize model parameters in a coupled general circulation model for improving climate analysis and predictions.

  14. Efficient estimation of moments in linear mixed models

    CERN Document Server

    Wu, Ping; Zhu, Li-Xing; 10.3150/10-BEJ330

    2012-01-01

    In the linear random effects model, when distributional assumptions such as normality of the error variables cannot be justified, moments may serve as alternatives to describe relevant distributions in neighborhoods of their means. Generally, estimators may be obtained as solutions of estimating equations. It turns out that there may be several equations, each of them leading to consistent estimators, in which case finding the efficient estimator becomes a crucial problem. In this paper, we systematically study estimation of moments of the errors and random effects in linear mixed models.

  15. Obtaining Diagnostic Classification Model Estimates Using Mplus

    Science.gov (United States)

    Templin, Jonathan; Hoffman, Lesa

    2013-01-01

    Diagnostic classification models (aka cognitive or skills diagnosis models) have shown great promise for evaluating mastery on a multidimensional profile of skills as assessed through examinee responses, but continued development and application of these models has been hindered by a lack of readily available software. In this article we…

  16. Lag space estimation in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...

  17. Highway traffic model-based density estimation

    OpenAIRE

    Morarescu, Irinel - Constantin; CANUDAS DE WIT, Carlos

    2011-01-01

    International audience; The travel time spent in traffic networks is one of the main concerns of societies in developed countries. A major requirement for providing traffic control and services is continuous prediction for several minutes into the future. This paper focuses on an important ingredient of traffic forecasting: real-time traffic state estimation using only a limited amount of data. Simulation results illustrate the performance of the proposed ...

  18. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

    Full Text Available The modeling and parameter estimation of a small wind generation system are presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were recorded at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed with PSIM software, and variables from that simulation were registered to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, while offering greater flexibility than the model programmed in PSIM.

  19. Projection-type estimation for varying coefficient regression models

    CERN Document Server

    Lee, Young K; Park, Byeong U; 10.3150/10-BEJ331

    2012-01-01

    In this paper we introduce new estimators of the coefficient functions in the varying coefficient regression model. The proposed estimators are obtained by projecting the vector of the full-dimensional kernel-weighted local polynomial estimators of the coefficient functions onto a Hilbert space with a suitable norm. We provide a backfitting algorithm to compute the estimators. We show that the algorithm converges at a geometric rate under weak conditions. We derive the asymptotic distributions of the estimators and show that the estimators have the oracle properties. This is done for the general order of local polynomial fitting and for the estimation of the derivatives of the coefficient functions, as well as the coefficient functions themselves. The estimators turn out to have several theoretical and numerical advantages over the marginal integration estimators studied by Yang, Park, Xue and Härdle [J. Amer. Statist. Assoc. 101 (2006) 1212-1227].

  20. Estimation in partial linear EV models with replicated observations

    Institute of Scientific and Technical Information of China (English)

    CUI; Hengjian

    2004-01-01

    The aim of this work is to construct parameter estimators in partial linear errors-in-variables (EV) models and explore their asymptotic properties. Unlike related references, the assumption of a known error covariance matrix is removed when the sample can be repeatedly drawn at each design point from the model. Estimators of the regression parameters of interest, the model error variance, and the nonparametric function are constructed. Under some regularity conditions, all of the estimators are shown to be strongly consistent. Meanwhile, asymptotic normality of the estimator of the regression parameter is also established. A simulation study is reported to illustrate the asymptotic results.

  1. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are obtained by modeling a remote-sensing system: the remote-sensed data are simulated by adding Gaussian noise to the concentration values generated by the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Requirements on resolution, sensor array size, and the number and location of sensor readings can be derived from the accuracies of the parameter estimates.
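
    The batch least-squares idea can be sketched with a simplified instantaneous-release diffusion model (shear omitted): simulate noisy "remote-sensed" concentrations, then recover the source location and diffusivity. All numerical values are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)

def conc(p, xy, t=1.0, mass=1.0):
    """Instantaneous-release 2-D diffusion (no shear): a simplified
    stand-in for the transport model in the record."""
    x0, y0, D = p
    r2 = (xy[:, 0] - x0) ** 2 + (xy[:, 1] - y0) ** 2
    return mass / (4 * np.pi * D * t) * np.exp(-r2 / (4 * D * t))

# Simulated remote-sensing readings: model output plus Gaussian noise.
xy = rng.uniform(-3, 3, (150, 2))
truth = np.array([0.5, -0.2, 0.8])
obs = conc(truth, xy) + rng.normal(0, 0.002, len(xy))

# Batch least-squares estimation of source location and diffusivity.
fit = least_squares(lambda p: conc(p, xy) - obs, x0=[0.0, 0.0, 1.0])
print("estimated [x0, y0, D]:", fit.x)
```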

  2. Assessing Uncertainty of Interspecies Correlation Estimation Models for Aromatic Compounds

    Science.gov (United States)

    We developed Interspecies Correlation Estimation (ICE) models for aromatic compounds containing 1 to 4 benzene rings to assess uncertainty in toxicity extrapolation in two data compilation approaches. ICE models are mathematical relationships between surrogate and predicted test ...

  3. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty occurs when we select a model based on one data set and subsequently apply it for statistical inference, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same dataset, additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the James-Stein theory of estimating at least three parameters from independent normal observations. We suggest that a model averaging scheme taking the selection procedure into account could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
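
    The James-Stein connection can be illustrated directly: for a p-dimensional normal mean with p >= 3, shrinking the raw observation toward the origin reduces total squared error on average. The sketch uses the positive-part variant on simulated data and illustrates the shrinkage principle only, not the paper's full model-averaging scheme.

```python
import numpy as np

rng = np.random.default_rng(7)

# One observation vector of a p-dimensional normal mean, unit variance.
p = 10
theta = rng.normal(0, 1, p)          # unknown mean vector
x = theta + rng.normal(0, 1, p)      # single noisy observation

# James-Stein shrinkage (positive-part variant): shrink toward the origin.
shrink = max(0.0, 1.0 - (p - 2) / np.sum(x**2))
theta_js = shrink * x

print("MLE risk:", np.sum((x - theta) ** 2))
print("JS  risk:", np.sum((theta_js - theta) ** 2))
```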

  4. Estimation for the simple linear Boolean model

    OpenAIRE

    2006-01-01

    We consider the simple linear Boolean model, a fundamental coverage process also known as the Markov/General/infinity queue. In the model, line segments of independent and identically distributed length are located at the points of a Poisson process. The segments may overlap, resulting in a pattern of "clumps"-regions of the line that are covered by one or more segments-alternating with uncovered regions or "spacings". Study and application of the model have been impeded by the difficult...

  5. Bregman divergence as general framework to estimate unnormalized statistical models

    CERN Document Server

    Gutmann, Michael

    2012-01-01

    We show that the Bregman divergence provides a rich framework to estimate unnormalized statistical models for continuous or discrete random variables, that is, models which do not integrate or sum to one, respectively. We prove that recent estimation methods such as noise-contrastive estimation, ratio matching, and score matching belong to the proposed framework, and explain their interconnection based on supervised learning. Further, we discuss the role of boosting in unsupervised learning.

  6. Estimating Dynamic Equilibrium Models using Macro and Financial Data

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; Posch, Olaf; van der Wel, Michel

    We show that including financial market data at daily frequency, along with macro series at standard lower frequency, facilitates statistical inference on structural parameters in dynamic equilibrium models. Our continuous-time formulation conveniently accounts for the difference in observation...... of the estimators and estimate the model using 20 years of U.S. macro and financial data....

  7. CONSISTENCY OF LS ESTIMATOR IN SIMPLE LINEAR EV REGRESSION MODELS

    Institute of Scientific and Technical Information of China (English)

    Liu Jixue; Chen Xiru

    2005-01-01

    Consistency of the LS estimator in the simple linear EV model is studied. It is shown that under some common assumptions on the model, weak and strong consistency of the estimator are equivalent, but this is not so for quadratic-mean consistency.

  8. Estimated Frequency Domain Model Uncertainties used in Robust Controller Design

    DEFF Research Database (Denmark)

    Tøffner-Clausen, S.; Andersen, Palle; Stoustrup, Jakob

    1994-01-01

    This paper deals with the combination of system identification and robust controller design. Recent results on estimation of frequency domain model uncertainty are...

  9. Estimating Lead (Pb) Bioavailability In A Mouse Model

    Science.gov (United States)

    Children are exposed to Pb through ingestion of Pb-contaminated soil. Soil Pb bioavailability is estimated using animal models or with chemically defined in vitro assays that measure bioaccessibility. However, bioavailability estimates in a large animal model (e.g., swine) can be...

  10. FUNCTIONAL-COEFFICIENT REGRESSION MODEL AND ITS ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In this paper, a class of functional-coefficient regression models is proposed and an estimation procedure based on locally weighted least squares is suggested. This class of models, with the proposed estimation method, is a powerful means for exploratory data analysis.

  11. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume both the number of covariates in the model and candidate variables can increase with the number of observations and the number of candidate variables is, possibly...

  12. Estimates of current debris from flux models

    Energy Technology Data Exchange (ETDEWEB)

    Canavan, G.H.

    1997-01-01

    Flux models that balance accuracy and simplicity are used to predict the growth of space debris to the present. Known and projected launch rates, decay models, and numerical integrations are used to predict distributions that closely resemble the current catalog-particularly in the regions containing most of the debris.

  13. Two-stage local M-estimation of additive models

    Institute of Scientific and Technical Information of China (English)

    JIANG JianCheng; LI JianTao

    2008-01-01

    This paper studies local M-estimation of the nonparametric components of additive models. A two-stage local M-estimation procedure is proposed for estimating the additive components and their derivatives. Under very mild conditions, the proposed estimators of each additive component and its derivative are jointly asymptotically normal and share the same asymptotic distributions as they would if the other components were known. The established asymptotic results also hold for two particular local M-estimations: the local least squares and least absolute deviation estimations. However, for general two-stage local M-estimation with continuous and nonlinear ψ-functions, the implementation is time-consuming. To reduce the computational burden, one-step approximations to the two-stage local M-estimators are developed. The one-step estimators are shown to achieve the same efficiency as the fully iterative two-stage local M-estimators, which makes two-stage local M-estimation more feasible in practice. The proposed estimators inherit the advantages and at the same time overcome the disadvantages of the local least-squares based smoothers. In addition, the practical implementation of the proposed estimation is considered in detail. Simulations demonstrate the merits of the two-stage local M-estimation, and a real example illustrates the performance of the methodology.

  14. Two-stage local M-estimation of additive models

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This paper studies local M-estimation of the nonparametric components of additive models. A two-stage local M-estimation procedure is proposed for estimating the additive components and their derivatives. Under very mild conditions, the proposed estimators of each additive component and its derivative are jointly asymptotically normal and share the same asymptotic distributions as they would if the other components were known. The established asymptotic results also hold for two particular local M-estimations: the local least squares and least absolute deviation estimations. However, for general two-stage local M-estimation with continuous and nonlinear ψ-functions, the implementation is time-consuming. To reduce the computational burden, one-step approximations to the two-stage local M-estimators are developed. The one-step estimators are shown to achieve the same efficiency as the fully iterative two-stage local M-estimators, which makes two-stage local M-estimation more feasible in practice. The proposed estimators inherit the advantages and at the same time overcome the disadvantages of the local least-squares based smoothers. In addition, the practical implementation of the proposed estimation is considered in detail. Simulations demonstrate the merits of the two-stage local M-estimation, and a real example illustrates the performance of the methodology.

  15. ESTIMATION DU MODELE LINEAIRE GENERALISE ET APPLICATION

    Directory of Open Access Journals (Sweden)

    Malika CHIKHI

    2012-06-01

    Full Text Available This article presents the generalized linear model, which encompasses modeling techniques such as linear regression, logistic regression, log-linear regression and Poisson regression. We begin by presenting the exponential family of distributions and then estimate the model parameters by the method of maximum likelihood. We then test the model coefficients to assess their significance and confidence intervals, using the Wald test, which concerns the significance of the true parameter value based on the sample estimate.
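
    A minimal illustration of the described workflow (maximum likelihood fitting of a generalized linear model followed by Wald inference on the coefficients), using the statsmodels Poisson family on simulated count data; the covariates and true coefficients are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)

# Hypothetical count data: a Poisson GLM with log link.
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
mu = np.exp(0.3 + 0.8 * x1 - 0.5 * x2)
y = rng.poisson(mu)

X = sm.add_constant(np.column_stack([x1, x2]))
result = sm.GLM(y, X, family=sm.families.Poisson()).fit()  # ML via IRLS

# Wald inference for each coefficient.
print(result.params)        # point estimates
print(result.bse)           # standard errors
print(result.conf_int())    # Wald confidence intervals
```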

  16. Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.

  17. Estimating parameters for generalized mass action models with connectivity information

    Directory of Open Access Journals (Sweden)

    Voit Eberhard O

    2009-05-01

    Full Text Available Abstract Background Determining the parameters of a mathematical model from quantitative measurements is the main bottleneck of modelling biological systems. Parameter values can be estimated from steady-state data or from dynamic data. The nature of suitable data for these two types of estimation is rather different. For instance, estimations of parameter values in pathway models, such as kinetic orders, rate constants, flux control coefficients or elasticities, from steady-state data are generally based on experiments that measure how a biochemical system responds to small perturbations around the steady state. In contrast, parameter estimation from dynamic data requires time series measurements for all dependent variables. Almost no literature has so far discussed the combined use of both steady-state and transient data for estimating parameter values of biochemical systems. Results In this study we introduce a constrained optimization method for estimating parameter values of biochemical pathway models using steady-state information and transient measurements. The constraints are derived from the flux connectivity relationships of the system at the steady state. Two case studies demonstrate the estimation results with and without flux connectivity constraints. The unconstrained optimal estimates from dynamic data may fit the experiments well, but they do not necessarily maintain the connectivity relationships. As a consequence, individual fluxes may be misrepresented, which may cause problems in later extrapolations. By contrast, the constrained estimation accounting for flux connectivity information reduces this misrepresentation and thereby yields improved model parameters. Conclusion The method combines transient metabolic profiles and steady-state information and leads to the formulation of an inverse parameter estimation task as a constrained optimization problem. Parameter estimation and model selection are simultaneously carried out
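
    The combination of transient data and a steady-state constraint can be sketched on a one-pool toy pathway rather than a full biochemical system: a sum-of-squares fit to the time series is minimized subject to an equality constraint expressing steady-state flux balance. The rate law, constants, and noise level are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)

# One-pool pathway: dy/dt = k1*u - k2*y with constant input u.
u, y0 = 2.0, 0.1
k_true = np.array([0.6, 1.2])
t = np.linspace(0, 5, 40)

def y_model(k):
    k1, k2 = k
    return (k1 * u / k2) * (1 - np.exp(-k2 * t)) + y0 * np.exp(-k2 * t)

obs = y_model(k_true) + rng.normal(0, 0.02, t.size)
y_ss = 1.0   # separately measured steady state (= k1*u/k2 for the true k)

# Constrained fit: transient SSE objective plus steady-state flux
# constraint, mirroring the combined use of dynamic and steady-state data.
res = minimize(
    lambda k: np.sum((y_model(k) - obs) ** 2),
    x0=[1.0, 1.0],
    constraints=[{"type": "eq", "fun": lambda k: k[0] * u - k[1] * y_ss}],
    bounds=[(1e-6, None), (1e-6, None)],
)
print("estimated k1, k2:", res.x)
```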

  18. Recharge estimation for transient ground water modeling.

    Science.gov (United States)

    Jyrkama, Mikko I; Sykes, Jon F; Normani, Stefano D

    2002-01-01

    Reliable ground water models require both an accurate physical representation of the system and appropriate boundary conditions. While physical attributes are generally considered static, boundary conditions, such as ground water recharge rates, can be highly variable in both space and time. A practical methodology incorporating the hydrologic model HELP3 in conjunction with a geographic information system was developed to generate a physically based and highly detailed recharge boundary condition for ground water modeling. The approach uses daily precipitation and temperature records in addition to land use/land cover and soils data. The importance of the method in transient ground water modeling is demonstrated by applying it to a MODFLOW modeling study in New Jersey. In addition to improved model calibration, the results from the study clearly indicate the importance of using a physically based and highly detailed recharge boundary condition in ground water quality modeling, where the detailed knowledge of the evolution of the ground water flowpaths is imperative. The simulated water table is within 0.5 m of the observed values using the method, while the water levels can differ by as much as 2 m using uniform recharge conditions. The results also show that the combination of temperature and precipitation plays an important role in the amount and timing of recharge in cooler climates. A sensitivity analysis further reveals that increasing the leaf area index, the evaporative zone depth, or the curve number in the model will result in decreased recharge rates over time, with the curve number having the greatest impact.

  19. Comparison of Estimation Procedures for Multilevel AR(1) Models

    Directory of Open Access Journals (Sweden)

    Tanja Krone

    2016-04-01

    Full Text Available To estimate a time series model for multiple individuals, a multilevel model may be used. In this paper we compare two estimation methods for the autocorrelation in multilevel AR(1) models, namely Maximum Likelihood Estimation (MLE) and Bayesian Markov Chain Monte Carlo. Furthermore, we examine the difference between modeling fixed and random individual parameters. To this end, we perform a simulation study with a fully crossed design, in which we vary the length of the time series (10 or 25), the number of individuals per sample (10 or 25), the mean of the autocorrelation (-0.6 to 0.6 inclusive, in steps of 0.3) and the standard deviation of the autocorrelation (0.25 or 0.40). We found that the random estimators of the population autocorrelation show less bias and higher power than the fixed estimators. As expected, the random estimators profit strongly from a higher number of individuals, while this effect is small for the fixed estimators. The fixed estimators profit slightly more from a higher number of time points than the random estimators. When possible, random estimation is preferred to fixed estimation. The difference between MLE and Bayesian estimation is nearly negligible: Bayesian estimation shows a smaller bias, but MLE shows a smaller variability (i.e., standard deviation) of the parameter estimates. Finally, better results are found for a higher number of individuals and time points, and for a lower individual variability of the autocorrelation. The effect of the size of the autocorrelation differs between outcome measures.

  20. Adaptive Unified Biased Estimators of Parameters in Linear Model

    Institute of Scientific and Technical Information of China (English)

    Hu Yang; Li-xing Zhu

    2004-01-01

    To tackle multicollinearity or ill-conditioned design matrices in linear models, adaptive biased estimators such as the time-honored Stein estimator, the ridge and the principal component estimators have been studied intensively. To study when a biased estimator uniformly outperforms the least squares estimator, some sufficient conditions are proposed in the literature. In this paper, we propose a unified framework to formulate a class of adaptive biased estimators. This class includes all existing biased estimators and some new ones. A sufficient condition for outperforming the least squares estimator is proposed. In terms of selecting parameters in the condition, we can obtain all double-type conditions in the literature.

  1. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    Full Text Available With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable equipment for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from origin to destination without considering dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on connected vehicles. To estimate real-time travel time more accurately, a dynamic road link dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results prove the effectiveness of the travel time estimation method.

  2. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent and dependent variables. In logistic regression the dependent variable is categorical, and the model is used to calculate odds; when the dependent variable has ordered levels, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation is needed to determine population values from a sample. The purpose of this research is parameter estimation of the GWOLR model using R software. The estimation uses data on the number of dengue fever patients in Semarang City, with 144 villages in Semarang City as observation units. The results give a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.

  3. Hospital Case Cost Estimates Modelling - Algorithm Comparison

    CERN Document Server

    Andru, Peter

    2008-01-01

    Ontario (Canada) Health System stakeholders support the idea and necessity of an integrated source of data that would include both clinical (e.g. diagnosis, intervention, length of stay, case mix group) and financial (e.g. cost per weighted case, cost per diem) characteristics of the Ontario healthcare system activities at the patient-specific level. At present, the actual patient-level case costs in explicit form are not available in the financial databases for all hospitals. The goal of this research effort is to develop financial models that will assign each clinical case in the patient-specific data warehouse a dollar value, representing the cost incurred by the Ontario health care facility which treated the patient. Five mathematical models have been developed and verified using a real dataset. All models can be classified into two groups based on their underlying method: (1) models using relative intensity weights of the cases, and (2) models using cost per diem.

  4. A regression model to estimate regional ground water recharge.

    Science.gov (United States)

    Lorenz, David L; Delin, Geoffrey N

    2007-01-01

    A regional regression model was developed to estimate the spatial distribution of ground water recharge in subhumid regions. The regional regression recharge (RRR) model was based on a regression of basin-wide estimates of recharge from surface water drainage basins, precipitation, growing degree days (GDD), and average basin specific yield (SY). Decadal average recharge, precipitation, and GDD were used in the RRR model. The RRR estimates were derived from analysis of stream base flow using a computer program that was based on the Rorabaugh method. As expected, there was a strong correlation between recharge and precipitation. The model was applied to statewide data in Minnesota. Where precipitation was least in the western and northwestern parts of the state (50 to 65 cm/year), recharge computed by the RRR model also was lowest (0 to 5 cm/year). A strong correlation also exists between recharge and SY. SY was least in areas where glacial lake clay occurs, primarily in the northwest part of the state; recharge estimates in these areas were in the 0- to 5-cm/year range. In sand-plain areas where SY is greatest, recharge estimates were in the 15- to 29-cm/year range on the basis of the RRR model. Recharge estimates that were based on the RRR model compared favorably with estimates made on the basis of other methods. The RRR model can be applied in other subhumid regions where region-wide data sets of precipitation, streamflow, GDD, and soils data are available.
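
    A regression of this form is simple to reproduce. The toy sketch below fits recharge to precipitation, GDD, and specific yield by ordinary least squares; all numbers are invented for illustration and are not the paper's Minnesota data.

    ```python
    import numpy as np

    # Invented decadal basin averages, for illustration only:
    # precipitation (cm/yr), growing degree days, specific yield, recharge (cm/yr)
    P   = np.array([55.0, 60.0, 70.0, 78.0, 85.0, 90.0])
    GDD = np.array([2600.0, 2500.0, 2450.0, 2300.0, 2250.0, 2200.0])
    SY  = np.array([0.05, 0.08, 0.12, 0.18, 0.22, 0.26])
    R   = np.array([3.0, 6.0, 10.0, 16.0, 21.0, 26.0])

    # Regional regression recharge: R ~ b0 + b1*P + b2*GDD + b3*SY
    A = np.column_stack([np.ones_like(P), P, GDD, SY])
    coef, *_ = np.linalg.lstsq(A, R, rcond=None)
    print("fitted coefficients:", np.round(coef, 4))
    print("predicted recharge: ", np.round(A @ coef, 1))
    ```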

  5. Ballistic model to estimate microsprinkler droplet distribution

    Directory of Open Access Journals (Sweden)

    Conceição Marco Antônio Fonseca

    2003-01-01

    Full Text Available Experimental determination of microsprinkler droplets is difficult and time-consuming. This determination, however, could be achieved using ballistic models. The present study aimed to compare simulated and measured values of microsprinkler droplet diameters. Experimental measurements were made using the flour method, and simulations using a ballistic model adopted by the SIRIAS computational software. Drop diameters quantified in the experiment varied between 0.30 mm and 1.30 mm, while the simulated ones varied between 0.28 mm and 1.06 mm. The greatest differences between simulated and measured values were registered at the highest radial distance from the emitter. The model presented a performance classified as excellent for simulating microsprinkler drop distribution.
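
    The record does not give the model's equations, so the sketch below shows one common ballistic formulation rather than SIRIAS's actual one: gravity plus a Reynolds-number-dependent drag law (Schiller-Naumann), integrated until the droplet reaches the ground. The launch speed, angle, height, and physical constants are assumptions made for the example.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    RHO_AIR, RHO_WATER, G, MU = 1.2, 1000.0, 9.81, 1.8e-5  # SI units, assumed

    def droplet_range(d_mm, v0=15.0, angle_deg=25.0, h0=0.3):
        """Integrate a drag-resisted ballistic trajectory for a droplet of
        diameter d_mm launched at speed v0 (m/s) from height h0 (m)."""
        d = d_mm / 1000.0
        m = RHO_WATER * np.pi * d ** 3 / 6.0      # droplet mass
        area = np.pi * d ** 2 / 4.0               # frontal area

        def rhs(t, s):
            x, y, vx, vy = s
            v = np.hypot(vx, vy)
            re = RHO_AIR * v * d / MU                   # Reynolds number
            cd = 24.0 / re * (1 + 0.15 * re ** 0.687)   # Schiller-Naumann drag
            f = 0.5 * RHO_AIR * cd * area * v
            return [vx, vy, -f * vx / m, -G - f * vy / m]

        hit = lambda t, s: s[1]                    # ground at y = 0
        hit.terminal, hit.direction = True, -1
        a = np.radians(angle_deg)
        sol = solve_ivp(rhs, [0, 10], [0, h0, v0 * np.cos(a), v0 * np.sin(a)],
                        events=hit, max_step=1e-3)
        return sol.y_events[0][0][0] if sol.y_events[0].size else np.nan

    for dmm in (0.3, 0.7, 1.3):
        print("d = %.1f mm -> range = %.2f m" % (dmm, droplet_range(dmm)))
    ```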

  6. Application of Bayesian Hierarchical Prior Modeling to Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Shutin, Dmitriy

    2012-01-01

    Existing methods for sparse channel estimation typically provide an estimate computed as the solution maximizing an objective function defined as the sum of the log-likelihood function and a penalization term proportional to the l1-norm of the parameter of interest. However, other penalization terms can be obtained by extending the signal model with hierarchical prior models. The estimators result as an application of the variational message-passing algorithm on the factor graph representing the signal model extended with the hierarchical prior models. Numerical results demonstrate the superior performance of our channel estimators as compared to traditional and state-of-the-art sparse methods.

  7. Estimation of the Heteroskedastic Canonical Contagion Model with Instrumental Variables

    Science.gov (United States)

    2016-01-01

    Knowledge of contagion among economies is a relevant issue in economics. The canonical model of contagion is an alternative in this case. Given the existence of endogenous variables in the model, instrumental variables can be used to decrease the bias of the OLS estimator. In the presence of heteroskedastic disturbances this paper proposes the use of conditional volatilities as instruments. Simulation is used to show that the homoscedastic and heteroskedastic estimators which use them as instruments have small bias. These estimators are preferable in comparison with the OLS estimator and their asymptotic distribution can be used to construct confidence intervals. PMID:28030628
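
    As a sketch of the general instrumental-variables idea, not the paper's exact heteroskedastic estimator, the code below runs a manual two-stage least squares with a made-up volatility-like instrument. The data-generating process and all constants are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1000

    # Toy data: x is endogenous (correlated with the error u); z is a
    # hypothetical conditional-volatility instrument correlated with x, not u.
    z = np.abs(rng.normal(size=n)) + 0.5
    u = rng.normal(size=n)
    x = 1.5 * z + 0.8 * u + rng.normal(size=n)   # endogeneity enters via u
    y = 2.0 + 1.0 * x + u

    def two_sls(y, x, z):
        """Manual two-stage least squares with one endogenous regressor."""
        Z = np.column_stack([np.ones_like(z), z])
        # Stage 1: project x on the instrument set
        x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        # Stage 2: regress y on the fitted values
        X_hat = np.column_stack([np.ones_like(x), x_hat])
        return np.linalg.lstsq(X_hat, y, rcond=None)[0]

    ols = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), y, rcond=None)[0]
    print("OLS slope (biased): ", ols[1])
    print("2SLS slope:         ", two_sls(y, x, z)[1])
    ```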

  8. A new estimate of the parameters in linear mixed models

    Institute of Scientific and Technical Information of China (English)

    王松桂; 尹素菊

    2002-01-01

    In linear mixed models, there are two kinds of unknown parameters: one is the fixed effect, the other is the variance component. In this paper, new estimates of these parameters, called the spectral decomposition estimates, are proposed. Some important statistical properties of the new estimates are established, in particular the linearity of the estimates of the fixed effects with many statistical optimalities. The new method is applied to two important models which are used in economics, finance, and mechanical fields. All estimates obtained have good statistical and practical meaning.

  9. The Adaptive LASSO Spline Estimation of Single-Index Model

    Institute of Scientific and Technical Information of China (English)

    LU Yiqiang; ZHANG Riquan; HU Bin

    2016-01-01

    In this paper, based on spline approximation, the authors propose a unified variable selection approach for the single-index model via an adaptive L1 penalty. The calculation methods of the proposed estimators are given on the basis of the known LARS algorithm. Under some regularity conditions, the authors demonstrate the asymptotic properties of the proposed estimators and the oracle properties of adaptive LASSO (aLASSO) variable selection. Simulations are used to investigate the performance of the proposed estimator and illustrate that it is effective for simultaneous variable selection as well as estimation of the single-index models.

  10. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists; however, it is not always clear which method is the appropriate one to choose. To this end, three approaches to estimation in the theta-logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance…

  11. A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model

    Directory of Open Access Journals (Sweden)

    Pedro Donoso

    2011-08-01

    Full Text Available A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.

  12. Estimation of shape model parameters for 3D surfaces

    DEFF Research Database (Denmark)

    Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen;

    2008-01-01

    Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D s...

  13. An Estimated DSGE Model of the Indian Economy

    OpenAIRE

    2010-01-01

    We develop a closed-economy DSGE model of the Indian economy and estimate it by Bayesian Maximum Likelihood methods using Dynare. We build up in stages to a model with a number of features important for emerging economies in general and the Indian economy in particular: a large proportion of credit-constrained consumers, a financial accelerator facing domestic firms seeking to finance their investment, and an informal sector. The simulation properties of the estimated model are examined under...

  14. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  15. Fundamental Frequency and Model Order Estimation Using Spatial Filtering

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    In signal processing applications of harmonic-structured signals, estimates of the fundamental frequency and number of harmonics are often necessary. In real scenarios, a desired signal is contaminated by different levels of noise and interferers, which complicate the estimation of the signal parameters. In this paper, we present an estimation procedure for harmonic-structured signals in situations with strong interference using spatial filtering, or beamforming. We jointly estimate the fundamental frequency and the constrained model order through the output of the beamformers. Besides that, we extend this procedure to account for inharmonicity using unconstrained model order estimation. The simulations show that beamforming improves the performance of the joint estimates of fundamental frequency and the number of harmonics in low signal-to-interference (SIR) levels, and an experiment…

  16. INTERACTING MULTIPLE MODEL ALGORITHM BASED ON JOINT LIKELIHOOD ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    Sun Jie; Jiang Chaoshu; Chen Zhuming; Zhang Wei

    2011-01-01

    A novel approach is proposed for the estimation of likelihood in the Interacting Multiple-Model (IMM) filter. In this approach, the actual innovation, based on a mismatched model, can be formulated as the sum of the theoretical innovation based on a matched model and the distance between the matched and mismatched models, whose probability distributions are known. The joint likelihood of the innovation sequence can be estimated by convolution of the two known probability density functions. The likelihood of the tracking models can then be calculated by the conditional probability formula. Compared with the conventional likelihood estimation method, the proposed method improves the estimation accuracy of the likelihood and the robustness of IMM, especially when a maneuver occurs.

  17. System Level Modelling and Performance Estimation of Embedded Systems

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer

    To support an efficient system level design methodology, a modelling framework for performance estimation and design space exploration at the system level is required. This thesis presents a novel component based modelling framework for system level modelling and performance estimation of embedded systems. The framework is simulation based and allows performance estimation to be carried out throughout all design phases, ranging from early functional to cycle accurate and bit true descriptions of the system, modelling both hardware and software components in a unified way. Design space exploration and performance estimation are performed by having the framework produce detailed quantitative information about the system model under investigation. The project is part of the national Danish research project, Danish Network of Embedded Systems (DaNES), which is funded by the Danish National Advanced Technology Foundation.

  18. Gaussian estimation for discretely observed Cox-Ingersoll-Ross model

    Science.gov (United States)

    Wei, Chao; Shu, Huisheng; Liu, Yurong

    2016-07-01

    This paper is concerned with the parameter estimation problem for the Cox-Ingersoll-Ross model based on discrete observation. First, a new discretized process is built based on the Euler-Maruyama scheme. Then, the parameter estimators are obtained by employing the maximum likelihood method, and explicit expressions for the estimation errors are given. Subsequently, the consistency of all parameter estimators is proved by applying the law of large numbers for martingales, Hölder's inequality, the Burkholder-Davis-Gundy (B-D-G) inequality and the Cauchy-Schwarz inequality. Finally, a numerical simulation example for the estimators and the absolute error between estimators and true values is presented to demonstrate the effectiveness of the estimation approach used in this paper.
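
    A minimal sketch of the two ingredients named in the abstract, under illustrative parameter values: an Euler-Maruyama discretization of the CIR dynamics dX = κ(θ − X)dt + σ√X dW, followed by a simple regression-based (conditional least squares) recovery of the parameters rather than the paper's exact maximum likelihood estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def simulate_cir(kappa=2.0, theta=0.5, sigma=0.3, x0=0.5, dt=0.01, n=20000):
        """Euler-Maruyama scheme for dX = kappa*(theta - X)dt + sigma*sqrt(X)dW."""
        x = np.empty(n)
        x[0] = x0
        for i in range(n - 1):
            dw = rng.normal(0.0, np.sqrt(dt))
            x[i + 1] = abs(x[i] + kappa * (theta - x[i]) * dt
                           + sigma * np.sqrt(x[i]) * dw)  # reflect to stay positive
        return x, dt

    x, dt = simulate_cir()
    x0, x1 = x[:-1], x[1:]

    # Regress X_{i+1} on X_i: intercept = kappa*theta*dt, slope = 1 - kappa*dt
    A = np.column_stack([np.ones_like(x0), x0])
    (a, b), *_ = np.linalg.lstsq(A, x1, rcond=None)
    kappa_hat = (1 - b) / dt
    theta_hat = a / (kappa_hat * dt)
    resid = x1 - (a + b * x0)
    sigma_hat = np.sqrt(np.mean(resid ** 2 / (x0 * dt)))  # Var(resid_i) ~ sigma^2*X_i*dt
    print("kappa=%.2f theta=%.3f sigma=%.3f" % (kappa_hat, theta_hat, sigma_hat))
    ```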

  19. Battery Calendar Life Estimator Manual Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Jon P. Christophersen; Ira Bloom; Ed Thomas; Vince Battaglia

    2012-10-01

    The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.

  20. DR-model-based estimation algorithm for NCS

    Institute of Scientific and Technical Information of China (English)

    HUANG Si-niu; CHEN Zong-ji; WEI Chen

    2006-01-01

    A novel estimation scheme based on a dead reckoning (DR) model for networked control systems (NCS) is proposed in this paper. Both the detailed DR estimation algorithm and the stability analysis of the system are given. By using DR estimation of the state, the effect of communication delays is overcome, which makes a controller designed without considering delays still applicable in an NCS. Moreover, the scheme can effectively solve the problem of data packet loss or timeout.

  1. Reduced Noise Effect in Nonlinear Model Estimation Using Multiscale Representation

    Directory of Open Access Journals (Sweden)

    Mohamed N. Nounou

    2010-01-01

    Full Text Available Nonlinear process models are widely used in various applications. In the absence of fundamental models, one usually relies on empirical models, which are estimated from measurements of the process variables. Unfortunately, measured data are usually corrupted with measurement noise that degrades the accuracy of the estimated models. Multiscale wavelet-based representation of data has been shown to be a powerful data analysis and feature extraction tool. In this paper, these characteristics of multiscale representation are utilized to improve the estimation accuracy of linear-in-the-parameters nonlinear models by developing a multiscale nonlinear (MSNL) modeling algorithm. The main idea in this MSNL modeling algorithm is to decompose the data at multiple scales, construct multiple nonlinear models at multiple scales, and then select among all scales the model which best describes the process. The main advantage of the developed algorithm is that it integrates modeling and feature extraction to improve the robustness of the estimated model to the presence of measurement noise in the data. This advantage of MSNL modeling is demonstrated using a nonlinear reactor model.
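
    A hedged sketch of the multiscale idea, assuming PyWavelets for the wavelet decomposition: smooth the data at several scales, fit a linear-in-the-parameters model (here a polynomial basis, an arbitrary choice) at each scale, and keep the scale whose model best predicts held-out raw measurements. The test function, wavelet, and selection criterion are assumptions, not the paper's exact algorithm.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    rng = np.random.default_rng(3)

    # Noisy measurements of a nonlinear process y = f(u) + e
    u = np.linspace(-1.0, 1.0, 256)
    y = u ** 3 - 0.5 * u + 0.2 * rng.normal(size=u.size)

    def smooth_at_scale(y, n_detail, wavelet="db4", max_level=4):
        """Reconstruct the signal with the n_detail finest detail scales removed."""
        coeffs = pywt.wavedec(y, wavelet, level=max_level)
        for j in range(len(coeffs) - n_detail, len(coeffs)):
            coeffs[j] = np.zeros_like(coeffs[j])
        return pywt.waverec(coeffs, wavelet)[: y.size]

    def holdout_error(u, ys, y_raw, degree=9):
        """Fit the model on even-indexed smoothed points; score on odd-indexed
        raw measurements (a crude one-fold cross-validation)."""
        tr, te = slice(0, None, 2), slice(1, None, 2)
        beta = np.polyfit(u[tr], ys[tr], degree)
        return np.mean((np.polyval(beta, u[te]) - y_raw[te]) ** 2)

    # Build a model at every scale and keep the one describing the data best
    scores = {}
    for n_detail in range(5):  # 0 = raw data, 4 = coarsest approximation
        scores[n_detail] = holdout_error(u, smooth_at_scale(y, n_detail), y)
    best = min(scores, key=scores.get)
    print("selected scale (detail levels removed):", best)
    ```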

  2. ASYMPTOTIC EFFICIENT ESTIMATION IN SEMIPARAMETRIC NONLINEAR REGRESSION MODELS

    Institute of Scientific and Technical Information of China (English)

    ZhuZhongyi; WeiBocheng

    1999-01-01

    In this paper, the estimation method based on the "generalized profile likelihood" for conditionally parametric models in the paper by Severini and Wong (1992) is extended to fixed design semiparametric nonlinear regression models. For these models, the resulting estimator of the parametric component is shown to be asymptotically efficient, and the strong convergence rate of the nonparametric component is investigated. Many results (for example, Chen (1988), Gao & Zhao (1993), Rice (1986) et al.) are extended to fixed design semiparametric nonlinear regression models.

  3. Linear Factor Models and the Estimation of Expected Returns

    NARCIS (Netherlands)

    Sarisoy, Cisil; de Goeij, Peter; Werker, Bas

    2015-01-01

    Estimating expected returns on individual assets or portfolios is one of the most fundamental problems of finance research. The standard approach, using historical averages, produces noisy estimates. Linear factor models of asset pricing imply a linear relationship between expected returns and exposures…

  4. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  5. Person Appearance Modeling and Orientation Estimation using Spherical Harmonics

    NARCIS (Netherlands)

    Liem, M.C.; Gavrila, D.M.

    2013-01-01

    We present a novel approach for the joint estimation of a person's overall body orientation, 3D shape and texture, from overlapping cameras. Overall body orientation (i.e. rotation around the torso major axis) is estimated by minimizing the difference between a learned texture model in a canonical orientation…

  6. Simulation model accurately estimates total dietary iodine intake

    NARCIS (Netherlands)

    Verkaik-Kloosterman, J.; Veer, van 't P.; Ocke, M.C.

    2009-01-01

    One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and the iodization of industrially processed foods. To be able to take these uncertainties into account in estimating iodine intake, a simulation model combining deterministic and probabilistic…

  7. A Framework for Non-Gaussian Signal Modeling and Estimation

    Science.gov (United States)

    1999-06-01


  8. A least squares estimation method for the linear learning model

    NARCIS (Netherlands)

    B. Wierenga (Berend)

    1978-01-01

    The author presents a new method for estimating the parameters of the linear learning model. The procedure, essentially a least squares method, is easy to carry out and avoids certain difficulties of earlier estimation procedures. Applications to three different data sets are reported…

  9. Trimmed Likelihood-based Estimation in Binary Regression Models

    NARCIS (Netherlands)

    Cizek, P.

    2005-01-01

    The binary-choice regression models such as probit and logit are typically estimated by the maximum likelihood method. To improve its robustness, various M-estimation based procedures were proposed, which however require bias corrections to achieve consistency, and their resistance to outliers is relatively…

  10. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  11. Change-point estimation for censored regression model

    Institute of Scientific and Technical Information of China (English)

    Zhan-feng WANG; Yao-hua WU; Lin-cheng ZHAO

    2007-01-01

    In this paper, we consider the change-point estimation in the censored regression model assuming that there exists one change point. A nonparametric estimate of the change-point is proposed and is shown to be strongly consistent. Furthermore, its convergence rate is also obtained.

  12. Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model

    DEFF Research Database (Denmark)

    Åberg, Andreas; Widd, Anders; Abildskov, Jens;

    2016-01-01

    A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the estimation of the kinetic parameters, which can be time-consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests, or p…

  13. Parameter estimation of hydrologic models using data assimilation

    Science.gov (United States)

    Kaheil, Y. H.

    2005-12-01

    The uncertainties associated with the modeling of hydrologic systems sometimes demand that data should be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. The paper presents a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined. The SVM model has three parameters. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space through imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimum training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.

  14. Stochastic magnetic measurement model for relative position and orientation estimation

    NARCIS (Netherlands)

    Schepers, H.M.; Veltink, P.H.

    2010-01-01

    This study presents a stochastic magnetic measurement model that can be used to estimate relative position and orientation. The model predicts the magnetic field generated by a single source coil at the location of the sensor. The model was used in a fusion filter that predicts the change of position…

  15. Stochastic magnetic measurement model for relative position and orientation estimation

    NARCIS (Netherlands)

    Schepers, H. Martin; Veltink, Petrus H.

    2010-01-01

    This study presents a stochastic magnetic measurement model that can be used to estimate relative position and orientation. The model predicts the magnetic field generated by a single source coil at the location of the sensor. The model was used in a fusion filter that predicts the change of position…

  16. Parameter Estimates in Differential Equation Models for Population Growth

    Science.gov (United States)

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
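
    A gradient-based fit of the logistic model is easy to reproduce with standard tools. The sketch below uses scipy's curve_fit (a Levenberg-Marquardt-style least squares) rather than the article's Mathematica code, and the growth series and starting values are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, P0, r):
        """Logistic growth: solution of dP/dt = r*P*(1 - P/K) with P(0) = P0."""
        return K / (1 + (K / P0 - 1) * np.exp(-r * t))

    # Hypothetical microbial-culture counts (arbitrary units), illustration only
    t = np.array([0, 2, 4, 6, 8, 10, 12, 14, 16, 18], dtype=float)
    P = np.array([9.6, 29, 71, 175, 351, 513, 594, 641, 656, 662])

    params, cov = curve_fit(logistic, t, P, p0=[700.0, 10.0, 0.5])
    K, P0, r = params
    print("K=%.1f  P0=%.1f  r=%.3f" % (K, P0, r))
    ```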

  17. Models for estimation of land remote sensing satellites operational efficiency

    Science.gov (United States)

    Kurenkov, Vladimir I.; Kucherov, Alexander S.

    2017-01-01

    The paper deals with the problem of estimating the operational efficiency of land remote sensing satellites. Appropriate mathematical models have been developed. Some results obtained with the help of software developed in the Delphi programming environment are presented.

  18. Parameter Estimation for the Thurstone Case III Model.

    Science.gov (United States)

    Mackay, David B.; Chaiy, Seoil

    1982-01-01

    The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)

  19. Allometric models for estimating biomass and carbon in Alnus acuminata

    National Research Council Canada - National Science Library

    William Fonseca; Laura Ruíz; Marylin Rojas; Federico Allice

    2013-01-01

    ... (leaves, branches, stem and root) and total tree biomass in Alnus acuminata (Kunth) in Costa Rica. Additionally, models were developed to estimate biomass and carbon in trees per hectare and for total plant biomass per hectare...

  20. Estimation of the Human Absorption Cross Section Via Reverberation Models

    DEFF Research Database (Denmark)

    Steinböck, Gerhard; Pedersen, Troels; Fleury, Bernard Henri;

    2016-01-01

    Since the presence of persons affects the reverberation time observed for in-room channels, the absorption cross section of a person can be estimated from measurements via Sabine's and Eyring's models for the reverberation time. We propose an estimator relying on the more accurate model by Eyring and compare the obtained results to those of Sabine's model. We find that the absorption by persons is large enough to be measured with a wideband channel sounder and that estimates of the human absorption cross section differ for the two models. The obtained values are comparable to values reported in the literature. We also suggest the use of controlled environments with low average absorption coefficients to obtain more reliable estimates. The obtained values can be used to predict the change of reverberation time with persons in the propagation environment. This allows prediction of channel characteristics…

  1. NEW DOCTORAL DEGREE Parameter estimation problem in the Weibull model

    OpenAIRE

    Marković, Darija

    2009-01-01

    In this dissertation we consider the problem of the existence of best parameters in the Weibull model, one of the most widely used statistical models in reliability theory and life data theory. Particular attention is given to a 3-parameter Weibull model. We have listed some of the many applications of this model. We have described some of the classical methods for estimating parameters of the Weibull model, two graphical methods (Weibull probability plot and hazard plot), and two analyt...

  2. ASYMPTOTICS OF MEAN TRANSFORMATION ESTIMATORS WITH ERRORS IN VARIABLES MODEL

    Institute of Scientific and Technical Information of China (English)

    CUI Hengjian

    2005-01-01

    This paper addresses the estimation and asymptotics of the mean transformation θ = E[h(X)] of a random variable X based on n i.i.d. observations from the errors-in-variables model Y = X + v, where v is a measurement error with a known distribution and h(·) is a known smooth function. The asymptotics of the deconvolution kernel estimator are given for ordinary smooth error distributions, and those of the expectation extrapolation estimator for normal error distributions. Under some mild regularity conditions, consistency and asymptotic normality are obtained for both types of estimators. Simulations show that they have good performance.

  3. ROBUST ESTIMATION IN PARTIAL LINEAR MIXED MODEL FOR LONGITUDINAL DATA

    Institute of Scientific and Technical Information of China (English)

    Qin Guoyou; Zhu Zhongyi

    2008-01-01

    In this article, a robust generalized estimating equation for the analysis of partial linear mixed models for longitudinal data is used. The authors approximate the nonparametric function by a regression spline. Under some regularity conditions, the asymptotic properties of the estimators are obtained. To avoid the computation of high-dimensional integrals, a robust Monte Carlo Newton-Raphson algorithm is used. Some simulations are carried out to study the performance of the proposed robust estimators. In addition, the authors also study the robustness and efficiency of the proposed estimators by simulation. Finally, two real longitudinal data sets are analyzed.

  4. Adaptive quasi-likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    CHEN Xia; CHEN Xiru

    2005-01-01

    This paper gives a thorough theoretical treatment of the adaptive quasi-likelihood estimate of the parameters in generalized linear models. The unknown covariance matrix of the response variable is estimated from the sample. It is shown that the adaptive estimator defined in this paper is asymptotically most efficient in the sense that it is asymptotically normal, and the covariance matrix of the limit distribution coincides with that of the quasi-likelihood estimator for the case where the covariance matrix of the response variable is completely known.

  5. BAYESIAN ESTIMATION IN SHARED COMPOUND POISSON FRAILTY MODELS

    Directory of Open Access Journals (Sweden)

    David D. Hanagal

    2015-06-01

    Full Text Available In this paper, we study the compound Poisson distribution as the shared frailty distribution, with two different baseline distributions, namely the Pareto and linear failure rate distributions, for modeling survival data. We use the Markov Chain Monte Carlo (MCMC) technique to estimate the parameters of the proposed models by introducing the Bayesian estimation procedure. In the present study, a simulation is done to compare the true values of the parameters with the estimated values. We fit the proposed models to a real-life bivariate survival data set of McGilchrist and Aisbett (1991) related to kidney infection. Also, we present a comparison study for the same data by using a model selection criterion, and suggest the better frailty model of the two proposed.

  6. Modeling of Location Estimation for Object Tracking in WSN

    Directory of Open Access Journals (Sweden)

    Hung-Chi Chu

    2013-01-01

    Full Text Available Location estimation for object tracking is one of the important topics in the research of wireless sensor networks (WSNs). Recently, many location estimation or positioning schemes in WSNs have been proposed. In this paper, we will propose the procedure and modeling of location estimation for object tracking in WSNs. The designed model is a simple scheme without complex processing. We will use Matlab to conduct the simulation and numerical analyses to find the optimal modeling variables. The analyses with different variables will include the object moving model, the sensing radius, the model weighting value α, and the power-level increasing ratio k of neighboring sensor nodes. For practical consideration, we will also carry out the analysis with the shadowing model.
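
    The record does not specify the estimator, so the sketch below assumes a common weighted-centroid scheme: the object position is estimated from the positions of the detecting sensor nodes, weighted by received signal strength raised to an exponent playing the role of the weighting value α. All node positions and RSSI values are invented.

    ```python
    import numpy as np

    def weighted_centroid(anchors, rssi, alpha=1.0):
        """Estimate an object's position as the weighted centroid of the
        sensor nodes that detect it; stronger signals get larger weights."""
        w = (10 ** (rssi / 10.0)) ** alpha   # dBm -> linear power, then exponent
        w = w / w.sum()
        return w @ anchors

    # Hypothetical node positions (m) and received signal strengths (dBm)
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    rssi = np.array([-52.0, -60.0, -58.0, -70.0])
    print("estimated position:", weighted_centroid(anchors, rssi))
    ```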

  7. CONSERVATIVE ESTIMATING FUNCTIONS IN THE NONLINEAR REGRESSION MODEL WITH AGGREGATED DATA

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The purpose of this paper is to study the theory of conservative estimating functions in the nonlinear regression model with aggregated data. In this model, a quasi-score function with aggregated data is defined. When this function happens to be conservative, it is the projection of the true score function onto a class of estimating functions. By construction, the potential function for the projected score with aggregated data is obtained, which has some properties of a log-likelihood function.

  8. Estimation linear model using block generalized inverse of a matrix

    OpenAIRE

    Jasińska, Elżbieta; Preweda, Edward

    2013-01-01

    The work presents the principle of point estimation in the generalized linear model, which can be used as a basis for determining the state of movements and deformations of engineering objects. Any boundary conditions can be imposed on the structural model, for example, to ensure the continuity of the deformations. Estimation by the method of least squares was carried out taking into account the Gauss-Markov conditions for quadratic forms, formulated using the Lagrange function. The original sol...

  9. Model Averaging Software for Dichotomous Dose Response Risk Estimation

    Directory of Open Access Journals (Sweden)

    Matthew W. Wheeler

    2008-02-01

    Full Text Available Model averaging has been shown to be a useful method for incorporating model uncertainty in quantitative risk estimation. In certain circumstances this technique is computationally complex, requiring sophisticated software to carry out the computation. We introduce software that implements model averaging for risk assessment based upon dichotomous dose-response data. This software, which we call Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD), fits the quantal response models which are also used in the US Environmental Protection Agency benchmark dose software suite, and generates a model-averaged dose-response model to produce benchmark dose and benchmark dose lower bound estimates. The software fulfills a need for risk assessors, allowing them to go beyond a single model in their risk assessments based on quantal data by focusing on a set of models that describes the experimental data.
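
    MADr-BMD averages over the full EPA suite of quantal models; the sketch below shows the core idea on just two models (logistic and probit, an arbitrary pair): fit each by binomial maximum likelihood, weight by Akaike weights, and solve the model-averaged extra-risk curve for the benchmark dose at 10% extra risk. The dose-response data are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize, brentq
    from scipy.special import expit
    from scipy.stats import norm

    # Hypothetical quantal data: dose, number tested, number responding
    dose = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
    n    = np.array([50, 50, 50, 50, 50])
    k    = np.array([2, 5, 12, 24, 41])

    models = {
        "logistic": lambda d, a, b: expit(a + b * d),
        "probit":   lambda d, a, b: norm.cdf(a + b * d),
    }

    def fit(p):
        """Binomial maximum likelihood for a two-parameter quantal model."""
        def nll(theta):
            pr = np.clip(p(dose, *theta), 1e-10, 1 - 1e-10)
            return -np.sum(k * np.log(pr) + (n - k) * np.log(1 - pr))
        res = minimize(nll, x0=[-2.0, 0.5], method="Nelder-Mead")
        return res.x, res.fun

    fits = {name: fit(p) for name, p in models.items()}
    aic = {name: 2 * f[1] + 4 for name, f in fits.items()}  # 2 params each
    amin = min(aic.values())
    w = {name: np.exp(-(a - amin) / 2) for name, a in aic.items()}
    total = sum(w.values())
    w = {name: v / total for name, v in w.items()}           # Akaike weights

    def averaged_extra_risk(d):
        """Model-averaged extra risk (P(d)-P(0))/(1-P(0)) at dose d."""
        out = 0.0
        for name, p in models.items():
            theta = fits[name][0]
            p0, pd = p(0.0, *theta), p(d, *theta)
            out += w[name] * (pd - p0) / (1 - p0)
        return out

    bmd = brentq(lambda d: averaged_extra_risk(d) - 0.10, 1e-6, 8.0)
    print("weights:", w, " BMD10: %.3f" % bmd)
    ```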

  10. Efficient robust nonparametric estimation in a semimartingale regression model

    CERN Document Server

    Konev, Victor

    2010-01-01

    The paper considers the problem of robustly estimating a periodic function in a continuous time regression model with dependent disturbances given by a general square integrable semimartingale with unknown distribution. An example of such a noise is the non-Gaussian Ornstein-Uhlenbeck process with a Lévy process subordinator, which is used to model financial Black-Scholes type markets with jumps. An adaptive model selection procedure, based on weighted least squares estimates, is proposed. Under general moment conditions on the noise distribution, sharp non-asymptotic oracle inequalities for the robust risks have been derived and the robust efficiency of the model selection procedure has been shown.

  11. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories.

  12. Comparisons of Estimation Procedures for Nonlinear Multilevel Models

    Directory of Open Access Journals (Sweden)

    Ali Reza Fotouhi

    2003-05-01

    Full Text Available We introduce General Multilevel Models and discuss the estimation procedures that may be used to fit multilevel models. We apply the proposed procedures to three-level binary data generated in a simulation study. We compare the procedures by two criteria: bias and efficiency. We find that the estimates of the fixed effects and variance components are substantially and significantly biased using Longford's Approximation and Goldstein's Generalized Least Squares approaches as implemented in the two software packages VARCL and ML3. These estimates are not significantly biased and are very close to the real values when we use Markov Chain Monte Carlo (MCMC) with Gibbs sampling or the Nonparametric Maximum Likelihood (NPML) approach. The Gaussian Quadrature (GQ) approach, even with a small number of mass points, results in consistent estimates but is computationally problematic. We conclude that the MCMC and NPML approaches are the recommended procedures for fitting multilevel models.

  13. Estimation of Nonlinear Dynamic Panel Data Models with Individual Effects

    Directory of Open Access Journals (Sweden)

    Yi Hu

    2014-01-01

    Full Text Available This paper suggests a generalized method of moments (GMM) based estimation for dynamic panel data models with individual specific fixed effects and threshold effects simultaneously. We extend Hansen's (1999) original setup to models including endogenous regressors, specifically, lagged dependent variables. To address the endogeneity problem of these nonlinear dynamic panel data models, we prove that the orthogonality conditions proposed by Arellano and Bond (1991) are valid. The threshold and slope parameters are estimated by GMM, and the asymptotic distribution of the slope parameters is derived. The finite sample performance of the estimation is investigated through Monte Carlo simulations, which show that the threshold and slope parameters can be estimated accurately and that the finite sample distribution of the slope parameters is well approximated by the asymptotic distribution.

  14. Estimation of growth parameters using a nonlinear mixed Gompertz model.

    Science.gov (United States)

    Wang, Z; Zuidhof, M J

    2004-06-01

    In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.

  15. Estimating hybrid choice models with the new version of Biogeme

    OpenAIRE

    Bierlaire, Michel

    2010-01-01

    Hybrid choice models integrate many types of discrete choice modeling methods, including latent classes and latent variables, in order to capture concepts such as perceptions, attitudes, preferences, and motivations (Ben-Akiva et al., 2002). Although they provide an excellent framework to capture complex behavior patterns, their use in applications remains rare in the literature due to the difficulty of estimating the models. In this talk, we provide a short introduction to hybrid choice model…

  16. Parameter Estimation and Experimental Design in Groundwater Modeling

    Institute of Scientific and Technical Information of China (English)

    SUN Ne-zheng

    2004-01-01

    This paper reviews the latest developments on parameter estimation and experimental design in the field of groundwater modeling. Special considerations are given when the structure of the identified parameter is complex and unknown. A new methodology for constructing useful groundwater models is described, which is based on the quantitative relationships among the complexity of model structure, the identifiability of parameter, the sufficiency of data, and the reliability of model application.

  17. Activity Recognition Using Biomechanical Model Based Pose Estimation

    OpenAIRE

    Reiss, Attila; Hendeby, Gustaf; Bleser, Gabriele; Stricker, Didier

    2010-01-01

    In this paper, a novel activity recognition method based on signal-oriented and model-based features is presented. The model-based features are calculated from shoulder and elbow joint angles and torso orientation, provided by upper-body pose estimation based on a biomechanical body model. The recognition performance of signal-oriented and model-based features is compared within this paper, and the potential of improving recognition accuracy by combining the two approaches is proved: the accu...

  18. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis

  19. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  20. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located 50°03‧N, 12°40‧E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and with the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  1. Remodeling and Estimation for Sparse Partially Linear Regression Models

    Directory of Open Access Journals (Sweden)

    Yunhui Zeng

    2013-01-01

    Full Text Available When the dimension of covariates in the regression model is high, one usually uses a submodel as a working model that contains significant variables. But it may be highly biased and the resulting estimator of the parameter of interest may be very poor when the coefficients of removed variables are not exactly zero. In this paper, based on the selected submodel, we introduce a two-stage remodeling method to get the consistent estimator for the parameter of interest. More precisely, in the first stage, by a multistep adjustment, we reconstruct an unbiased model based on the correlation information between the covariates; in the second stage, we further reduce the adjusted model by a semiparametric variable selection method and get a new estimator of the parameter of interest simultaneously. Its convergence rate and asymptotic normality are also obtained. The simulation results further illustrate that the new estimator outperforms those obtained by the submodel and the full model in the sense of mean square errors of point estimation and mean square prediction errors of model prediction.

  2. Maximum likelihood estimation for semiparametric density ratio model.

    Science.gov (United States)

    Diao, Guoqing; Ning, Jing; Qin, Jing

    2012-06-27

    In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.

  3. Estimating Tree Height-Diameter Models with the Bayesian Method

    Directory of Open Access Journals (Sweden)

    Xiongqing Zhang

    2014-01-01

    Full Text Available Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has an exclusive advantage over the classical method in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were used to estimate six height-diameter models, respectively. Both the classical and the Bayesian method showed that the Weibull model was the "best" model using data1. In addition, based on the Weibull model, data2 was used for comparing the Bayesian method with informative priors against uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison to the classical method, and the credible bands of parameters with informative priors were also narrower than with uninformative priors and the classical method. The estimated posterior distributions for the parameters can be set as new priors in estimating the parameters using data2.

  4. Application of variance components estimation to calibrate geoid error models.

    Science.gov (United States)

    Guo, Dong-Mei; Xu, Hou-Ze

    2015-01-01

    The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation for the weighted least squares problem has been presented in an earlier work. This formulation allows one to directly employ errors-in-variables models, which completely describe the covariance matrices of the observables. However, an important question, namely what accuracy level can be achieved, has not yet been satisfactorily answered by this traditional formulation. One of the main reasons for this is the incorrectness of the stochastic models in the adjustment, which in turn calls for improving the stochastic models of the measurement noises. Therefore the issue of determining the stochastic models of observables in the combined adjustment with heterogeneous height types is the main focus of this paper. Firstly, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least squares adjustment of ellipsoidal, orthometric and gravimetric geoid data. Specifically, the iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each type of heterogeneous observation. Secondly, two different statistical models are presented to illustrate the theory. The first method directly uses the errors-in-variables as a priori covariance matrices, and the second method analyzes the biases of the variance components and then proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in combined adjustment for calibrating the geoid error model.

  5. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
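
    As an illustration of marginal maximum likelihood (one of the estimation methods the record mentions), the sketch below estimates Rasch item difficulties by integrating the latent ability out with Gauss-Hermite quadrature. It is written in Python rather than R, and the simulated data, quadrature order, and optimizer are all choices made for the example, not the paper's implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(11)

    # Simulate responses from a Rasch model: P(x=1) = expit(theta - b_j)
    n_persons, n_items = 500, 5
    true_b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
    theta = rng.normal(size=(n_persons, 1))
    X = (rng.uniform(size=(n_persons, n_items)) < expit(theta - true_b)).astype(float)

    # Quadrature for the standard-normal ability distribution
    nodes, weights = np.polynomial.hermite_e.hermegauss(21)
    weights = weights / np.sqrt(2 * np.pi)   # weights now sum to ~1

    def neg_marginal_loglik(b):
        # P(x_ij = 1 | theta = node_q): shape (n_quad, n_items)
        p = expit(nodes[:, None] - b[None, :])
        # log-likelihood of each response pattern at each node: (n_persons, n_quad)
        ll = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T
        # integrate the ability out with the quadrature weights
        marg = np.exp(ll) @ weights
        return -np.sum(np.log(marg))

    res = minimize(neg_marginal_loglik, x0=np.zeros(n_items), method="BFGS")
    print("estimated difficulties:", np.round(res.x, 2))
    ```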

  6. Estimation of the Thurstonian model for the 2-AC protocol

    DEFF Research Database (Denmark)

    Christensen, Rune Haubo Bojesen; Lee, Hye-Seong; Brockhoff, Per B.

    2012-01-01

    The 2-AC protocol is a 2-AFC protocol with a “no-difference” option and is technically identical to the paired preference test with a “no-preference” option. The Thurstonian model for the 2-AC protocol is parameterized by δ and a decision parameter τ, the estimates of which can be obtained by fairly simple well-known methods. In this paper we describe how standard errors of the parameters can be obtained and how exact power computations can be performed. We also show how the Thurstonian model for the 2-AC protocol is closely related to a statistical model known as a cumulative probit model. This relationship makes it possible to extract estimates and standard errors of δ and τ from general statistical software, and furthermore, it makes it possible to combine standard regression modelling with the Thurstonian model for the 2-AC protocol. A model for replicated 2-AC data is proposed using cumulative…
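
    A minimal sketch of direct maximum likelihood estimation of δ and τ from 2-AC counts, assuming the usual Thurstonian parameterization (latent difference distributed N(δ, 2), with "no difference" answered when the difference falls within ±τ); the counts are invented. Standard errors could then be obtained from the inverse of a numerically evaluated Hessian, as the paper describes.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Observed 2-AC counts: ("prefer A", "no difference", "prefer B")
    counts = np.array([35, 20, 45])

    def neg_loglik(params):
        delta, tau = params
        if tau <= 0:
            return np.inf
        s = np.sqrt(2.0)                       # sd of the latent difference
        pA = norm.cdf((-tau - delta) / s)
        pNo = norm.cdf((tau - delta) / s) - pA
        p = np.array([pA, pNo, 1.0 - pA - pNo])
        if np.any(p <= 0):
            return np.inf
        return -counts @ np.log(p)             # multinomial log-likelihood

    fit = minimize(neg_loglik, x0=[0.0, 0.5], method="Nelder-Mead")
    delta_hat, tau_hat = fit.x
    print(f"delta = {delta_hat:.3f}, tau = {tau_hat:.3f}")
    ```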

  7. Estimation of pump operational state with model-based methods

    Energy Technology Data Exchange (ETDEWEB)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina [Institute of Energy Technology, Lappeenranta University of Technology, P.O. Box 20, FI-53851 Lappeenranta (Finland); Kestilae, Juha [ABB Drives, P.O. Box 184, FI-00381 Helsinki (Finland)

    2010-06-15

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction-motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control, modelling the pump operation can provide useful information for energy auditing and optimization. In this paper, two model-based methods for pump operation estimation are presented, and factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently. (author)
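
    As a sketch of the general idea, assuming a quadratic nominal-speed shaft power curve (made up here) and the affinity laws, the flow rate can be recovered from converter estimates of shaft power and rotational speed without external measurements; this is one plausible variant, not necessarily the paper's exact method.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Nominal-speed shaft power curve P0(Q) from a (made-up) datasheet,
    # P0 in kW, Q in l/s; inverted on the rising branch of the curve.
    p0 = np.poly1d([-0.002, 0.35, 2.0])
    n0 = 1450.0                                 # nominal speed (rpm)
    q_peak = float(p0.deriv().roots[0])         # end of the rising branch

    def flow_estimate(p_shaft, n):
        """Invert the affinity-scaled power curve P(Q,n) = P0(Q*n0/n)*(n/n0)^3."""
        ratio = n / n0
        f = lambda q: p0(q / ratio) * ratio**3 - p_shaft
        return brentq(f, 0.0, q_peak * ratio)   # search the rising branch only

    # Converter-estimated operating point: 10 kW shaft power at 1300 rpm
    print(f"estimated flow: {flow_estimate(10.0, 1300.0):.1f} l/s")
    ```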

  8. Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics

    Directory of Open Access Journals (Sweden)

    Guanqun eZhang

    2011-11-01

    A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications.

  9. Estimating Independent Locally Shifted Random Utility Models for Ranking Data

    Science.gov (United States)

    Lam, Kar Yin; Koning, Alex J.; Franses, Philip Hans

    2011-01-01

    We consider the estimation of probabilistic ranking models in the context of conjoint experiments. By using approximate rather than exact ranking probabilities, we avoided the computation of high-dimensional integrals. We extended the approximation technique proposed by Henery (1981) in the context of the Thurstone-Mosteller-Daniels model to any…

  10. Maximum Likelihood Estimation in Meta-Analytic Structural Equation Modeling

    Science.gov (United States)

    Oort, Frans J.; Jak, Suzanne

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients that are reported by a number of independent studies. MASEM typically consists of two stages. The method that has been found to perform best in terms of statistical…

  11. Estimating Dynamic Models from Repeated Cross-Sections

    NARCIS (Netherlands)

    Verbeek, M.J.C.M.; Vella, F.

    2000-01-01

    A major attraction of panel data is the ability to estimate dynamic models on an individual level. Moffitt (1993) and Collado (1998) have argued that such models can also be identified from repeated cross-section data. In this paper we reconsider this issue. We review the identification conditions…

  13. Estimation of an Occupational Choice Model when Occupations Are Misclassified

    Science.gov (United States)

    Sullivan, Paul

    2009-01-01

    This paper develops an empirical occupational choice model that corrects for misclassification in occupational choices and measurement error in occupation-specific work experience. The model is used to estimate the extent of measurement error in occupation data and quantify the bias that results from ignoring measurement error in occupation codes…

  14. Contributions in Radio Channel Sounding, Modeling, and Estimation

    DEFF Research Database (Denmark)

    Pedersen, Troels

    2009-01-01

    the necessary and sufficient conditions for spatio-temporal apertures to minimize the Cramer-Rao lower bound on the joint bi-direction and Doppler frequency estimation. The spatio-temporal aperture also impacts the accuracy of MIMO-capacity estimation from measurements impaired by colored phase noise. We…, than corresponding results from literature. These findings indicate that the per-path directional spreads (or cluster spreads) assumed in standard models are set too large. Finally, we propose a model of the specular-to-diffuse transition observed in measurements of reverberant channels. The model…

  15. Estimation of the input parameters in the Feller neuronal model

    Science.gov (United States)

    Ditlevsen, Susanne; Lansky, Petr

    2006-06-01

    The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FPT) through a constant boundary in the suprathreshold regime are derived, which are used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FPT is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.

  16. Crosstalk Model and Estimation Formula for VLSI Interconnect Wires

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    We develop an interconnect crosstalk estimation model on the assumption of linearity for CMOS devices. First, we analyze the terminal response of the RC model under the worst-case condition, from the S domain to the time domain. The exact third-order coefficients in the S domain are obtained from the interconnect tree model. Based on this, a crosstalk peak estimation formula is presented. Unlike other crosstalk equations in the literature, this formula uses only the coupling capacitance and the ground capacitance as parameters. Experimental results show that, compared with SPICE results, the estimation formulae are simple and accurate. The model is therefore expected to be useful in fields such as layout-driven logic and high-level synthesis, performance-driven floorplanning, and interconnect planning.
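
    For contrast, a textbook first-order charge-sharing approximation in Python, which, like the paper's formula, needs only the coupling and ground capacitances; it is not the paper's third-order expression.

    ```python
    # First-order charge-sharing estimate for a victim wire whose driver is
    # weak relative to the aggressor transition (a coarse upper bound).
    def crosstalk_peak(vdd, c_coupling, c_ground):
        return vdd * c_coupling / (c_coupling + c_ground)

    # 1.8 V supply, 96 fF coupling and 200 fF ground capacitance (invented)
    print(f"peak noise ~ {crosstalk_peak(1.8, 96e-15, 200e-15):.2f} V")
    ```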

  17. Dynamic Load Model using PSO-Based Parameter Estimation

    Science.gov (United States)

    Taoka, Hisao; Matsuki, Junya; Tomoda, Michiya; Hayashi, Yasuhiro; Yamagishi, Yoshio; Kanao, Norikazu

    This paper presents a new method for estimating the unknown parameters of a dynamic load model consisting of a constant impedance load in parallel with an induction motor behind a series constant reactance. An adequate dynamic load model is essential for evaluating power system stability, and this model can represent the behavior of an actual load given appropriate parameters. The problem, however, is that the model requires many parameters, which are not easy to estimate. We propose an estimation method based on Particle Swarm Optimization (PSO), a nonlinear optimization method, using voltage, active power and reactive power data measured during voltage sags.
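
    A minimal global-best PSO sketch in Python, fitting a toy two-parameter response instead of the composite load model; the swarm constants and data are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy "measured" response y(t; a, b) = a*(1 - exp(-b*t)); the real use
    # case fits the P and Q sag waveforms of the composite load model.
    t = np.linspace(0, 2, 100)
    y_meas = 0.8 * (1 - np.exp(-3.0 * t)) + rng.normal(0, 0.01, t.size)

    def mse(params):
        a, b = params
        return np.mean((a * (1 - np.exp(-b * t)) - y_meas) ** 2)

    # Plain global-best PSO
    n_particles, n_iter = 30, 200
    lo, hi = np.array([0.0, 0.0]), np.array([2.0, 10.0])
    x = rng.uniform(lo, hi, (n_particles, 2))        # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_f = np.array([mse(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()

    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social
    for _ in range(n_iter):
        r1, r2 = rng.uniform(size=(2, n_particles, 2))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([mse(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()

    print("PSO estimate (a, b):", np.round(gbest, 3))
    ```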

  18. Model-based approach for elevator performance estimation

    Science.gov (United States)

    Esteban, E.; Salgado, O.; Iturrospe, A.; Isasa, I.

    2016-02-01

    In this paper, a dynamic model for an elevator installation is presented in the state space domain. The model comprises both the mechanical and the electrical subsystems, including the electrical machine and a closed-loop field oriented control. The proposed model is employed for monitoring the condition of the elevator installation. The adopted model-based approach for monitoring employs the Kalman filter as an observer. A Kalman observer estimates the elevator car acceleration, which determines the elevator ride quality, based solely on the machine control signature and the encoder signal. Five elevator key performance indicators are then calculated based on the estimated car acceleration. The proposed procedure is evaluated experimentally by comparing the key performance indicators calculated from the estimated car acceleration with the values obtained from actual acceleration measurements in a test bench. Finally, the proposed procedure is compared with a sliding mode observer.
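
    A minimal sketch of the observer idea: a linear Kalman filter on a constant-jerk kinematic model that recovers car acceleration from encoder position alone. The model and noise settings are simplified assumptions, not the paper's full electromechanical elevator model.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    dt = 0.01

    # State = [position, velocity, acceleration]; the encoder measures
    # position only, and the filter recovers the acceleration state.
    F = np.array([[1, dt, 0.5 * dt**2],
                  [0, 1, dt],
                  [0, 0, 1]])
    H = np.array([[1.0, 0.0, 0.0]])
    Q = np.diag([0, 0, 1e-2])          # process noise drives acceleration
    R = np.array([[1e-6]])             # encoder noise variance

    # Simulate a ride: accelerate, cruise, decelerate
    acc_true = np.concatenate([np.full(200, 1.0), np.zeros(400), np.full(200, -1.0)])
    pos, vel, z = 0.0, 0.0, []
    for a in acc_true:
        vel += a * dt
        pos += vel * dt
        z.append(pos + rng.normal(0, 1e-3))

    x, P = np.zeros(3), np.eye(3)
    acc_est = []
    for zk in z:
        x = F @ x                                      # predict
        P = F @ P @ F.T + Q
        y = zk - H @ x                                 # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        x = x + (K @ y).ravel()                        # update
        P = (np.eye(3) - K @ H) @ P
        acc_est.append(x[2])

    print("mean estimated cruise acceleration:", np.mean(acc_est[300:500]).round(3))
    ```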

  19. GLUE Based Uncertainty Estimation of Urban Drainage Modeling Using Weather Radar Precipitation Estimates

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2011-01-01

    Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model to simulate the runoff from a small catchment in Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate the uncertainty of the weather radar rainfall input. The main finding of this work is that the input uncertainty propagates through the urban drainage model with significant effects on the model results. The GLUE methodology is in general a usable way to explore this uncertainty, although the exact width of the prediction bands can be questioned, due to the subjective nature of the method. Moreover, the method also gives very useful information about the model and parameter behaviour.
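
    A compact GLUE sketch on a toy linear-reservoir runoff model: Monte Carlo parameter sampling, an informal Nash-Sutcliffe likelihood, a subjective behavioural threshold, and likelihood-weighted prediction bands. Every numeric choice here is an assumption, not the study's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy linear-reservoir runoff model: Q[t+1] = Q[t] + k*(c*rain[t] - Q[t])
    def simulate(k, c, rain):
        q = np.zeros(rain.size)
        for t in range(rain.size - 1):
            q[t + 1] = q[t] + k * (c * rain[t] - q[t])
        return q

    rain = rng.gamma(0.5, 4.0, 200)
    q_obs = simulate(0.3, 0.6, rain) + rng.normal(0, 0.1, 200)

    # Monte Carlo sampling and informal likelihood (Nash-Sutcliffe)
    n = 5000
    ks, cs = rng.uniform(0.05, 0.9, n), rng.uniform(0.1, 1.0, n)
    sims = np.array([simulate(k, c, rain) for k, c in zip(ks, cs)])
    nse = 1 - np.sum((sims - q_obs) ** 2, axis=1) / np.sum((q_obs - q_obs.mean()) ** 2)

    behavioural = nse > 0.6            # subjective threshold; assumes some pass
    w = nse[behavioural] - 0.6
    w /= w.sum()
    sims_b = sims[behavioural]

    # Likelihood-weighted 5-95% prediction band at each time step
    order = np.argsort(sims_b, axis=0)
    lower, upper = [], []
    for t in range(q_obs.size):
        s = sims_b[order[:, t], t]
        cw = np.cumsum(w[order[:, t]])
        lower.append(s[np.searchsorted(cw, 0.05)])
        upper.append(s[np.searchsorted(cw, 0.95)])
    print("band coverage:", np.mean((q_obs >= lower) & (q_obs <= upper)).round(2))
    ```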

  20. Modeling, Estimation, and Control of Helicopter Slung Load System

    DEFF Research Database (Denmark)

    Bisgaard, Morten

    This thesis treats the subject of autonomous helicopter slung load flight and presents the reader with a methodology describing the development path from modeling and system analysis over sensor fusion and state estimation to controller synthesis. The focus is directed along two different… To enable slung load flight capabilities for general cargo transport, an integrated estimation and control system is developed for use on already autonomous helicopters. The estimator uses vision based updates only and needs little prior knowledge of the slung load system as it estimates the length of the suspension system together with the system states. The controller uses a combined feedforward and feedback approach to simultaneously prevent exciting swing and to actively dampen swing in the slung load. For the mine detection application an estimator is developed that provides full system state information…

  1. Parameter estimation for stochastic hybrid model applied to urban traffic flow estimation

    OpenAIRE

    2015-01-01

    This study proposes a novel data-based approach for estimating the parameters of a stochastic hybrid model describing the traffic flow in an urban traffic network with signalized intersections. The model represents the evolution of the traffic flow rate, measuring the number of vehicles passing a given location per time unit. This traffic flow rate is described using a mode-dependent first-order autoregressive (AR) stochastic process. The parameters of the AR process take different values dep...

  2. GEOMETRIC METHOD OF SEQUENTIAL ESTIMATION RELATED TO MULTINOMIAL DISTRIBUTION MODELS

    Institute of Scientific and Technical Information of China (English)

    WEIBOCHENG; LISHOUYE

    1995-01-01

    In the 1980s, differential geometric methods were successfully used to study curved exponential families and normal nonlinear regression models. This paper presents a new geometric structure to study multinomial distribution models which contain a set of nonlinear parameters. Based on this geometric structure, the authors study several asymptotic properties of sequential estimation. The bias, the variance and the information loss of the sequential estimates are given from a geometric viewpoint, and a limit theorem connecting the observed and expected Fisher information is obtained in terms of curvature measures. The results show that the sequential estimation procedure has some better properties which are generally impossible for non-sequential estimation procedures.

  3. The problematic estimation of "imitation effects" in multilevel models

    Directory of Open Access Journals (Sweden)

    2003-09-01

    It seems plausible that a person's demographic behaviour may be influenced by that among other people in the community, for example because of an inclination to imitate. When estimating multilevel models from clustered individual data, some investigators might perhaps feel tempted to try to capture this effect by simply including on the right-hand side the average of the dependent variable, constructed by aggregation within the clusters. However, such modelling must be avoided. According to simulation experiments based on real fertility data from India, the estimated effect of this obviously endogenous variable can be very different from the true effect. Also the other community effect estimates can be strongly biased. An "imitation effect" can only be estimated under very special assumptions that in practice will be hard to defend.

  4. Model for Estimation Urban Transportation Supply-Demand Ratio

    Directory of Open Access Journals (Sweden)

    Chaoqun Wu

    2015-01-01

    The paper establishes an estimation model of the urban transportation supply-demand ratio (TSDR) to quantitatively describe the conditions of an urban transport system and to provide a theoretical basis for transport policy-making. The TSDR estimation model is supported by system dynamics principles and VENSIM (an application that simulates real systems). It was developed through long-term observation of eight cities' transport conditions and by analyzing the TSDR estimates obtained from fifteen sets of refined data. The estimated results indicate that an urban TSDR can be classified into four grades representing four transport conditions: “scarce supply,” “short supply,” “supply-demand balance,” and “excess supply.” These results imply that transport policies or measures can be quantified to facilitate the process of ordering and screening them.

  5. Development on electromagnetic impedance function modeling and its estimation

    Energy Technology Data Exchange (ETDEWEB)

    Sutarno, D., E-mail: Sutarno@fi.itb.ac.id [Earth Physics and Complex System Division Faculty of Mathematics and Natural Sciences Institut Teknologi Bandung (Indonesia)

    2015-09-30

    Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration is forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, important and difficult problems obviously remain to be solved concerning our ability to collect, process and interpret MT as well as CSAMT data in complex 3-D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, as well as some improvements in the estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite element numerical modeling of the impedances has been developed based on the edge element method. In the CSAMT case, the efforts were focused on addressing the non-plane-wave problem in the corresponding impedance functions. Concerning the estimation of MT and CSAMT impedance functions, research focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform, operating on the causal transfer functions, was used to deal with outliers (abnormal data) which are frequently superimposed on the normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models, while the full-solution modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied for all measurement zones, including near-, transition…

  6. [Hyperspectral estimation models of chlorophyll content in apple leaves].

    Science.gov (United States)

    Liang, Shuang; Zhao, Geng-xing; Zhu, Xi-cun

    2012-05-01

    The present study chose an apple orchard at Shandong Agricultural University as the study area to explore hyperspectral estimation of apple leaf chlorophyll content. The characteristics of the apple leaves' hyperspectral curves were analyzed, and the original spectra were transformed into first derivatives, the red edge position and the leaf chlorophyll index (LCI). Correlation and regression analyses of these variables against chlorophyll content were then used to establish estimation models, and the models with the highest fitting precision were selected by testing. Results showed that the fitting precision of the model based on LCI and of the model based on the first derivatives at 521 and 523 nm was the highest: the coefficients of determination (R2) were 0.845 and 0.839, the root mean square errors (RMSE) were 2.961 and 2.719, and the relative errors (RE%) were 4.71% and 4.70%, respectively. LCI and the first derivative are therefore important indices for estimating apple leaf chlorophyll content, and the models provide practical guidance for apple cultivation.

  7. Parameter estimation and investigation of a bolted joint model

    Science.gov (United States)

    Shiryayev, O. V.; Page, S. M.; Pettit, C. L.; Slater, J. C.

    2007-11-01

    Mechanical joints are a primary source of variability in the dynamics of built-up structures. Physical phenomena in the joint are quite complex and therefore too impractical to model at the micro-scale. This motivates the development of lumped parameter joint models with discrete interfaces so that they can be easily implemented in finite element codes. Among the most important considerations in choosing a model for dynamically excited systems is its ability to model energy dissipation. This translates into the need for accurate and reliable methods to measure model parameters and estimate their inherent variability from experiments. The adjusted Iwan model was identified as a promising candidate for representing joint dynamics. Recent research focused on this model has exclusively employed impulse excitation in conjunction with neural networks to identify the model parameters. This paper presents an investigation of an alternative parameter estimation approach for the adjusted Iwan model, which employs data from oscillatory forcing. This approach is shown to produce parameter estimates with precision similar to the impulse excitation method for a range of model parameters.

  8. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
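
    The model-averaging step can be sketched with information-criterion weights (Akaike-type weights stand in here for the report's posterior model probabilities), combining within-model variance and between-model spread in the style of Buckland et al. (1997); the numbers are invented.

    ```python
    import numpy as np

    # Alternative fitted models (stand-ins for the report's variogram
    # alternatives). Each entry: (IC value, prediction, prediction variance).
    models = [(210.3, 4.10, 0.25), (212.1, 3.70, 0.30), (215.8, 4.90, 0.20)]

    ic = np.array([m[0] for m in models])
    pred = np.array([m[1] for m in models])
    var = np.array([m[2] for m in models])

    w = np.exp(-0.5 * (ic - ic.min()))
    w /= w.sum()                                 # model weights/probabilities

    avg = w @ pred
    # Total variance: within-model + between-model spread
    tot_var = w @ (var + (pred - avg) ** 2)
    print(f"averaged prediction {avg:.2f} +/- {np.sqrt(tot_var):.2f}")
    ```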

  9. [Selection of biomass estimation models for Chinese fir plantation].

    Science.gov (United States)

    Li, Yan; Zhang, Jian-guo; Duan, Ai-guo; Xiang, Cong-wei

    2010-12-01

    A total of 11 kinds of biomass models were adopted to estimate the biomass of single trees and their organs in young (7-year-old), middle-aged (16-year-old), mature (28-year-old), and mixed-age Chinese fir plantations, with 308 biomass models fitted in total. Among the 11 kinds of biomass models, power function models fitted best, followed by exponential models and then polynomial models. Twenty-one optimal biomass models for individual organs and single trees were chosen, including 18 models for individual organs and 3 models for single trees. There were 7 optimal biomass models for single trees in the mixed-age plantation, containing 6 for individual organs and 1 for the whole tree, all in the form of power functions. The optimal biomass models for single trees in plantations of different ages had poor generality, but those for the mixed-age plantation had a certain generality with high accuracy, and could be used to estimate single-tree biomass in plantations of different ages. The optimal single-tree biomass models from Shaowu, Fujian Province were used to predict single-tree biomass in a mature (28-year-old) Chinese fir plantation in Jiangxi Province, and it was found that models based on a large sample of forest biomass had relatively high accuracy and could be applied over large areas, whereas regional models based on small samples were limited to small areas.

  10. Estimating degree day factors from MODIS for snowmelt runoff modeling

    Directory of Open Access Journals (Sweden)

    Z. H. He

    2014-07-01

    Degree-day factors are widely used to estimate snowmelt runoff in operational hydrological models. Usually, they are calibrated on observed runoff, and sometimes on satellite snow cover data. In this paper, we propose a new method for estimating the snowmelt degree-day factor (DDFS) directly from MODIS snow covered area (SCA) and ground-based snow depth data, without calibration. Subcatchment snow volume is estimated by combining SCA and snow depths. Snow density is estimated as the ratio of observed precipitation to changes in the snow volume for days with snow accumulation. Finally, DDFS values are estimated as the ratio of changes in the snow water equivalent to degree-day temperatures for days with snow melt. We compare simulations of basin runoff and snow cover patterns using spatially variable DDFS estimated from snow data with those using spatially uniform DDFS calibrated on runoff. The runoff performances using estimated DDFS are slightly improved, and the simulated snow cover patterns are significantly more plausible. The new method may help reduce some of the runoff model parameter uncertainty by reducing the total number of calibration parameters.
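
    The core estimator reduces to a ratio of snow water equivalent loss to positive degree-days on melt days, as in this Python sketch with invented daily series and an assumed 0 °C melt threshold.

    ```python
    import numpy as np

    # DDFS = -(change in snow water equivalent) / (positive degree-days)
    # on melt days, with SWE built upstream from SCA and snow depth data.
    def estimate_ddfs(swe, temp, t_melt=0.0):
        """swe, temp: daily series; returns DDFS in mm per degC per day."""
        d_swe = np.diff(swe)
        dd = np.maximum(temp[:-1] - t_melt, 0.0)          # degree-days
        melt = (d_swe < 0) & (dd > 0)                     # melt days only
        return np.sum(-d_swe[melt]) / np.sum(dd[melt])

    temp = np.array([-2.0, 1.0, 3.0, 5.0, 4.0, 6.0])      # degC
    swe = np.array([80.0, 80.0, 76.0, 64.0, 55.0, 40.0])  # mm
    print(f"DDFS = {estimate_ddfs(swe, temp):.1f} mm/degC/day")
    ```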

  11. Perspectives on Modelling BIM-enabled Estimating Practices

    Directory of Open Access Journals (Sweden)

    Willy Sher

    2014-12-01

    BIM-enabled estimating processes do not replace or provide a substitute for the traditional approaches used in the architecture, engineering and construction industries. This paper explores the impact of BIM on these traditional processes. It identifies differences between the approaches used with BIM and other conventional methods, and between the various construction professionals that prepare estimates. We interviewed 17 construction professionals from client organizations, contracting organizations, consulting practices and specialist-project firms. Our analyses highlight several logical relationships between estimating processes and BIM attributes. Estimators need to respond to the challenges BIM poses to traditional estimating practices. BIM-enabled estimating circumvents long-established conventions and traditional approaches, and focuses on data management. Consideration needs to be given to the model data required for estimating, to the means by which these data may be harnessed when exported, to the means by which the integrity of model data are protected, to the creation and management of tools that work effectively and efficiently in multi-disciplinary settings, and to approaches that narrow the gap between virtual reality and actual reality. Areas for future research are also identified in the paper.

  12. Parameter Estimation in Stochastic Grey-Box Models

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2004-01-01

    An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool…

  13. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
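
    A minimal sketch of the estimation idea, assuming a truncated-power spline basis (one simple polynomial spline choice) for the nonparametric component and joint least squares for both parts; the quasi-likelihood machinery and penalized variable selection of the paper are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Partial linear model y = X @ beta + f(z) + e, with f estimated by a
    # truncated-power spline basis and everything fitted jointly by OLS.
    n = 400
    X = rng.normal(size=(n, 2))
    z = rng.uniform(0, 1, n)
    f = np.sin(2 * np.pi * z)                      # unknown smooth component
    y = X @ np.array([1.5, -0.8]) + f + rng.normal(0, 0.2, n)

    knots = np.linspace(0.1, 0.9, 9)
    # cubic truncated power basis: 1, z, z^2, z^3, (z - k)_+^3
    B = np.column_stack([z**d for d in range(4)] +
                        [np.clip(z - k, 0, None) ** 3 for k in knots])

    design = np.hstack([X, B])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    print("linear part estimates:", np.round(coef[:2], 3))   # ~ [1.5, -0.8]
    ```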

  14. Model Year 2017 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  15. Model Year 2011 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2010-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  16. Model Year 2012 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2011-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  17. Parameter Estimation of the Extended Vasiček Model

    OpenAIRE

    Rujivan, Sanae

    2010-01-01

    In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function of a discretely sampled data of the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the appr...
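
    For the constant-coefficient Vasiček model the transition density is Gaussian in closed form, so exact maximum likelihood takes a few lines of Python, as sketched below with invented settings; the extended model's time-dependent coefficients require the paper's density expansion instead.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)

    # Constant-coefficient Vasicek model: dr = kappa*(theta - r) dt + sigma dW
    dt, n = 1 / 252, 2000
    kappa, theta, sigma = 2.0, 0.05, 0.02
    r = np.empty(n)
    r[0] = 0.04
    a = np.exp(-kappa * dt)
    sd = sigma * np.sqrt((1 - a**2) / (2 * kappa))
    for t in range(n - 1):                         # exact simulation
        r[t + 1] = theta + (r[t] - theta) * a + sd * rng.normal()

    def neg_loglik(p):
        k, th, s = p
        if k <= 0 or s <= 0:
            return np.inf
        a = np.exp(-k * dt)
        var = s**2 * (1 - a**2) / (2 * k)          # exact transition variance
        mu = th + (r[:-1] - th) * a                # exact transition mean
        resid = r[1:] - mu
        return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

    fit = minimize(neg_loglik, x0=[1.0, 0.03, 0.01], method="Nelder-Mead")
    print("kappa, theta, sigma:", np.round(fit.x, 4))
    ```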

  18. Model Year 2013 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2012-12-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  19. A Bayesian framework for parameter estimation in dynamical models.

    Directory of Open Access Journals (Sweden)

    Flávio Codeço Coelho

    Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results into agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
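
    A minimal sketch of such a fit, assuming a deterministic SIR model, a Poisson observation model for weekly incidence, flat priors, and random-walk Metropolis; all numbers are synthetic and the paper's actual framework is more general.

    ```python
    import numpy as np
    from scipy.integrate import odeint

    rng = np.random.default_rng(8)

    def sir_incidence(params, t, n_pop=1e6, i0=10.0):
        """Weekly new infections from a deterministic SIR model."""
        beta, gamma = params
        def rhs(y, _):
            s, i = y
            return [-beta * s * i / n_pop, beta * s * i / n_pop - gamma * i]
        s, i = odeint(rhs, [n_pop - i0, i0], t).T
        return -np.diff(s)                 # new infections per interval

    t = np.arange(0.0, 26.0)               # 25 weekly intervals
    true = np.array([1.4, 0.7])
    cases = rng.poisson(sir_incidence(true, t))

    def log_post(p):
        if np.any(p <= 0) or p[0] <= p[1]:            # require R0 > 1
            return -np.inf
        lam = np.maximum(sir_incidence(p, t), 1e-9)
        return np.sum(cases * np.log(lam) - lam)      # Poisson log-lik, flat prior

    # Random-walk Metropolis over (beta, gamma)
    p, lp = np.array([1.0, 0.5]), -np.inf
    draws = []
    for it in range(10000):
        prop = p + rng.normal(0, 0.02, 2)
        lpp = log_post(prop)
        if np.log(rng.uniform()) < lpp - lp:
            p, lp = prop, lpp
        if it > 2000:
            draws.append(p.copy())
    print("posterior mean (beta, gamma):", np.round(np.mean(draws, axis=0), 2))
    ```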

  20. The Optimal Selection for Restricted Linear Models with Average Estimator

    Directory of Open Access Journals (Sweden)

    Qichang Xie

    2014-01-01

    The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has comparable efficiency to some alternative model selection techniques.

  1. Estimation of exposure to toxic releases using spatial interaction modeling

    Directory of Open Access Journals (Sweden)

    Conley Jamison F

    2011-03-01

    Background: The United States Environmental Protection Agency's Toxic Release Inventory (TRI) data are frequently used to estimate a community's exposure to pollution. However, this estimation process often uses underdeveloped geographic theory. Spatial interaction modeling provides a more realistic approach to this estimation process. This paper uses four sets of data: lung cancer age-adjusted mortality rates from the years 1990 through 2006 inclusive from the National Cancer Institute's Surveillance Epidemiology and End Results (SEER) database, TRI releases of carcinogens from 1987 to 1996, covariates associated with lung cancer, and the EPA's Risk-Screening Environmental Indicators (RSEI) model. Results: The impact of the volume of carcinogenic TRI releases on each county's lung cancer mortality rates was calculated using six spatial interaction functions (containment, buffer, power decay, exponential decay, quadratic decay, and RSEI estimates) and evaluated with four multivariate regression methods (linear, generalized linear, spatial lag, and spatial error). Akaike Information Criterion values and P values of spatial interaction terms were computed. The impacts calculated from the interaction models were also mapped. Buffer and quadratic interaction functions had the lowest AIC values (22298 and 22525, respectively), although the gains from including the spatial interaction terms were diminished with spatial error and spatial lag regression. Conclusions: The use of different methods for estimating the spatial risk posed by pollution from TRI sites can give different results about the impact of those sites on health outcomes. The most reliable estimates did not always come from the most complex methods.
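
    The spatial-interaction idea can be sketched as release volumes weighted by a distance-decay function, here an exponential decay with an assumed 10 km range; the coordinates and release figures are invented.

    ```python
    import numpy as np

    # Exposure of county centroids to release sites under an exponential
    # distance-decay interaction function (one of the six compared above).
    def exposure(county_xy, site_xy, releases, decay_km=10.0):
        # pairwise distances (counties x sites), coordinates in km
        d = np.linalg.norm(county_xy[:, None, :] - site_xy[None, :, :], axis=2)
        return (releases * np.exp(-d / decay_km)).sum(axis=1)

    counties = np.array([[0.0, 0.0], [30.0, 5.0]])
    sites = np.array([[2.0, 1.0], [25.0, 0.0], [60.0, 60.0]])
    releases = np.array([1000.0, 400.0, 5000.0])     # released volumes

    print(np.round(exposure(counties, sites, releases), 1))
    ```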

  2. Models of economic geography: dynamics, estimation and policy evaluation

    OpenAIRE

    Knaap, Thijs

    2004-01-01

    In this thesis we look at economic geography models from a number of angles. We started by placing the theory in a context of preceding theories, both earlier work on spatial economics and other children of the monopolistic competition ‘revolution.’ Next, we looked at the theoretical properties of these models, especially when we allow firms to have different demand functions for intermediate goods. We estimated the model using a dataset on US states, and computed a number of counterfactuals....

  3. XLISP-Stat Tools for Building Generalised Estimating Equation Models

    Directory of Open Access Journals (Sweden)

    Thomas Lumley

    1996-12-01

    This paper describes a set of Lisp-Stat tools for building Generalised Estimating Equation models to analyse longitudinal or clustered measurements. The user interface is based on the built-in regression and generalised linear model prototypes, with the addition of object-based error functions, correlation structures and model formula tools. Residual and deletion diagnostic plots are available on the cluster and observation level and use the dynamic graphics capabilities of Lisp-Stat.

  4. Bayesian parameter estimation for nonlinear modelling of biological pathways

    Directory of Open Access Journals (Sweden)

    Ghasemi Omid

    2011-12-01

    Background: The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of its high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. Results: We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal to noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly…

  5. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

    A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
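
    One common EKF-based route to parameter estimation, sketched below, augments the state vector with the unknown parameter and filters both jointly; the logistic growth model and all settings are toy assumptions, not the biological pathway models of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    dt = 0.1

    # Augment the state of a logistic growth model x' = r*x*(1 - x/K)
    # with the unknown rate r, then run an EKF on [x, r].
    K = 10.0

    def f(s):                      # Euler-discretized augmented dynamics
        x, r = s
        return np.array([x + dt * r * x * (1 - x / K), r])

    def F_jac(s):                  # Jacobian of f
        x, r = s
        return np.array([[1 + dt * r * (1 - 2 * x / K), dt * x * (1 - x / K)],
                         [0.0, 1.0]])

    H = np.array([[1.0, 0.0]])     # we observe x only
    Q = np.diag([1e-4, 1e-6])
    R = np.array([[0.05**2]])

    # Simulate truth with r = 0.8 and noisy observations of x
    x_true, r_true = 0.5, 0.8
    zs = []
    for _ in range(200):
        x_true += dt * r_true * x_true * (1 - x_true / K)
        zs.append(x_true + rng.normal(0, 0.05))

    s = np.array([0.5, 0.3])       # poor initial guess for r
    P = np.diag([0.1, 0.5])
    for z in zs:
        Fk = F_jac(s)
        s = f(s)                                       # predict
        P = Fk @ P @ Fk.T + Q
        y = z - H @ s                                  # update
        S = H @ P @ H.T + R
        Kg = P @ H.T @ np.linalg.inv(S)
        s = s + (Kg @ y).ravel()
        P = (np.eye(2) - Kg @ H) @ P

    print(f"estimated growth rate r = {s[1]:.2f} (true 0.8)")
    ```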

  6. Impacts of Stochastic Modeling on GPS-derived ZTD Estimations

    CERN Document Server

    Jin, Shuanggen

    2010-01-01

    GPS-derived ZTD (Zenith Tropospheric Delay) plays a key role in near real-time weather forecasting, especially in improving the precision of Numerical Weather Prediction (NWP) models. The ZTD is usually estimated using the first-order Gauss-Markov process with a fairly large correlation, and under the assumption that all the GPS measurements, carrier phases or pseudo-ranges, have the same accuracy. However, these assumptions are unrealistic. This paper aims to investigate the impact of several stochastic modeling methods on GPS-derived ZTD estimations using Australian IGS data. The results show that the accuracy of GPS-derived ZTD can be improved using a suitable stochastic model for the GPS measurements. The stochastic model using a satellite elevation angle-based cosine function is better than the other investigated stochastic models. It is noted that, when different stochastic modeling strategies are used, the variations in estimated ZTD can reach as much as 1 cm. This improvement of ZTD estimation is certainly c...

  7. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a lack of power supply to large numbers of customers. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, a current base of parameters of the models of generating units, containing the models of synchronous generators, is necessary. In the paper, a method is presented for parameter estimation of a synchronous generator nonlinear model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing the objective function defined as the mean square error of deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. The calculation results for a model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  8. Missing data estimation in fMRI dynamic causal modeling.

    Science.gov (United States)

    Zaghlool, Shaza B; Wyatt, Christopher L

    2014-01-01

    Dynamic Causal Modeling (DCM) can be used to quantify cognitive function in individuals as effective connectivity. However, ambiguity among subjects in the number and location of discernible active regions prevents all candidate models from being compared in all subjects, precluding the use of DCM as an individual cognitive phenotyping tool. This paper proposes a solution to this problem by treating missing regions in the first-level analysis as missing data, and performing estimation of the time course associated with any missing region using one of four candidate methods: zero-filling, average-filling, noise-filling using a fixed stochastic process, or one estimated using expectation-maximization. The effect of this estimation scheme was analyzed by treating it as a preprocessing step to DCM and observing the resulting effects on model evidence. Simulation studies show that estimation using expectation-maximization yields the highest classification accuracy using a simple loss function and highest model evidence, relative to other methods. This result held for various dataset sizes and varying numbers of model choice. In real data, application to Go/No-Go and Simon tasks allowed computation of signals from the missing nodes and the consequent computation of model evidence in all subjects compared to 62 and 48 percent respectively if no preprocessing was performed. These results demonstrate the face validity of the preprocessing scheme and open the possibility of using single-subject DCM as an individual cognitive phenotyping tool.

  9. Near Shore Wave Modeling and applications to wave energy estimation

    Science.gov (United States)

    Zodiatis, G.; Galanis, G.; Hayes, D.; Nikolaidis, A.; Kalogeri, C.; Adam, A.; Kallos, G.; Georgiou, G.

    2012-04-01

    The estimation of the wave energy potential at the European coastline has been receiving increased attention in recent years as a result of the adoption of novel policies in the energy market, concerns about global warming and nuclear energy security problems. Within this framework, numerical wave modeling systems play a primary role in the accurate description of the wave climate and microclimate that is a prerequisite for any wave energy assessment study. In the present work two of the most popular wave models are used to estimate the wave parameters at the coastline of Cyprus: the latest parallel version of the wave model WAM (ECMWF version), which employs a new parameterization of shallow water effects, and the SWAN model, classically used for near-shore wave simulations. The results obtained from the wave models near the shore are studied from a wave energy estimation point of view: the wave parameters that mainly affect the temporal and spatial distribution of energy, that is, the significant wave height and the mean wave period, are statistically analyzed, focusing on possible differences captured by the two models. Moreover, the wave spectrum distributions prevailing in different areas are discussed, contributing in this way to the wave energy assessment in the area. This work is part of two European projects focusing on the estimation of the wave energy distribution around Europe: the MARINA platform (http://www.marina-platform.info/index.aspx) and the Ewave (http://www.oceanography.ucy.ac.cy/ewave/) projects.
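
    Once the significant wave height Hs and the energy period Te are extracted from the model output, the standard deep-water estimate of wave energy flux per metre of wave crest is P = rho * g^2 * Hs^2 * Te / (64 * pi), as in this small Python sketch with invented sea states.

    ```python
    import numpy as np

    # Deep-water wave energy flux per metre of wave crest [W/m],
    # from significant wave height Hs and energy period Te.
    def wave_power(hs_m, te_s, rho=1025.0, g=9.81):
        return rho * g**2 * hs_m**2 * te_s / (64 * np.pi)

    hs = np.array([0.8, 1.5, 2.4])     # significant wave height (m)
    te = np.array([4.5, 5.8, 7.0])     # energy period (s)
    print(np.round(wave_power(hs, te) / 1000, 2), "kW/m")
    ```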

  10. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    Science.gov (United States)

    Wicke, Jason; Dumas, Geneviève A

    2014-06-03

    Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices whose sections were ellipses if they did not adjoin another segment and sectioned ellipses if they did (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of the models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location.

  11. Coupling Hydrologic and Hydrodynamic Models to Estimate PMF

    Science.gov (United States)

    Felder, G.; Weingartner, R.

    2015-12-01

    Most sophisticated probable maximum flood (PMF) estimations derive the PMF from the probable maximum precipitation (PMP) by applying deterministic hydrologic models calibrated with observed data. This method is based on the assumption that the hydrological system is stationary, meaning that the system behaviour during the calibration period or the calibration event is presumed to be the same as it is during the PMF. However, as soon as a catchment-specific threshold is reached, the system is no longer stationary. At or beyond this threshold, retention areas, new flow paths, and changing runoff processes can strongly affect downstream peak discharge. These effects can be accounted for by coupling hydrologic and hydrodynamic models, a technique that is particularly promising when the expected peak discharge may considerably exceed the observed maximum discharge. In such cases, the coupling of hydrologic and hydraulic models has the potential to significantly increase the physical plausibility of PMF estimations. This procedure ensures both that the estimated extreme peak discharge does not exceed the physical limit based on riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered. Our study discusses the prospect of considering retention effects on PMF estimations by coupling hydrologic and hydrodynamic models. This method is tested by forcing PREVAH, a semi-distributed deterministic hydrological model, with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to externally force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). Finally, the PMF estimation results obtained using the coupled modelling approach are compared to the results obtained using ordinary hydrologic modelling.

  12. Parameter Estimation of Population Pharmacokinetic Models with Stochastic Differential Equations: Implementation of an Estimation Algorithm

    Directory of Open Access Journals (Sweden)

    Fang-Rong Yan

    2014-01-01

    Population pharmacokinetic (PPK) models play a pivotal role in quantitative pharmacology studies and are classically analyzed with nonlinear mixed-effects models based on ordinary differential equations. This paper describes the implementation of SDEs in population pharmacokinetic models, where parameters are estimated by a novel approximation of the likelihood function. This approximation is constructed by combining the MCMC method used in nonlinear mixed-effects modeling with the extended Kalman filter used in SDE models. The analysis and simulation results show that the approximated likelihood function performs reliably for mixed-effects SDE models and for the analysis of population pharmacokinetic data. The results suggest that the proposed method is feasible for the analysis of population pharmacokinetic data.

  13. Towards predictive food process models: A protocol for parameter estimation.

    Science.gov (United States)

    Vilas, Carlos; Arias-Méndez, Ana; Garcia, Miriam R; Alonso, Antonio A; Balsa-Canto, E

    2016-05-31

    Mathematical models, in particular, physics-based models, are essential tools to food product and process design, optimization and control. The success of mathematical models relies on their predictive capabilities. However, describing physical, chemical and biological changes in food processing requires the values of some, typically unknown, parameters. Therefore, parameter estimation from experimental data is critical to achieving desired model predictive properties. This work takes a new look into the parameter estimation (or identification) problem in food process modeling. First, we examine common pitfalls such as lack of identifiability and multimodality. Second, we present the theoretical background of a parameter identification protocol intended to deal with those challenges. And, to finish, we illustrate the performance of the proposed protocol with an example related to the thermal processing of packaged foods.

  14. Regionalized rainfall-runoff model to estimate low flow indices

    Science.gov (United States)

    Garcia, Florine; Folton, Nathalie; Oudin, Ludovic

    2016-04-01

    Estimating low flow indices is of paramount importance for water resources management and risk assessment. These indices are derived from river discharges, which are measured at gauged stations. The lack of observations at ungauged sites, however, makes it necessary to develop methods for estimating low flow indices from discharges observed in neighbouring catchments and from catchment characteristics. Different estimation methods exist: regression or geostatistical methods applied directly to the low flow indices are the most common; a less common alternative is to regionalize rainfall-runoff model parameters, from catchment characteristics or by spatial proximity, and to estimate low flow indices from the simulated hydrographs. Irstea developed GR2M-LoiEau, a conceptual monthly rainfall-runoff model combined with a regionalized model of snow storage and melt. GR2M-LoiEau relies on only two parameters, which are regionalized and mapped throughout France, and it allows monthly reference low flow indices to be mapped. The input data come from SAFRAN, the distributed mesoscale atmospheric analysis system, which provides daily solid and liquid precipitation and temperature data for the whole French territory. To exploit these data fully and to estimate daily low flow indices, a new version of GR-LoiEau has been developed at a daily time step. The aim of this work is to develop and regionalize a GR-LoiEau model that can provide daily, monthly, or annual estimates of low flow indices while keeping only a few parameters, which is a major advantage for regionalizing them. This work includes two parts. On the one hand, a daily conceptual rainfall-runoff model with only three parameters is developed to simulate daily and monthly low flow indices, mean annual runoff, and seasonality. On the other hand, different regionalization methods, based on spatial proximity and similarity, are tested to estimate the model parameters and to simulate low flow indices at ungauged sites.

  15. Genomic breeding value estimation using nonparametric additive regression models

    Directory of Open Access Journals (Sweden)

    Solberg Trygve

    2009-01-01

    Full Text Available Genomic selection refers to the use of genomewide dense markers for breeding value estimation and subsequently for selection. The main challenge of genomic breeding value estimation is the estimation of many effects from a limited number of observations. Bayesian methods have been proposed to successfully cope with these challenges. As an alternative class of models, non- and semiparametric models were recently introduced. The present study investigated the ability of nonparametric additive regression models to predict genomic breeding values. The genotypes were modelled for each marker or pair of flanking markers (i.e., the predictors) separately. The nonparametric functions for the predictors were estimated simultaneously using additive model theory, applying a binomial kernel. The optimal degree of smoothing was determined by bootstrapping. A mutation-drift-balance simulation was carried out. The breeding values of the last generation (genotyped only) were predicted using data from the next-to-last generation (genotyped and phenotyped). The results show moderate to high accuracies of the predicted breeding values. Determining a predictor-specific degree of smoothing increased the accuracy.

  16. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.

    2015-03-01

    Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
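
    Because the single diode equation is implicit in the current, even evaluating an I-V curve requires a numerical solve; parameter estimation then wraps a least-squares fit around solves like the Newton iteration below (generic parameter names, not the report's code):

      import numpy as np

      def sdm_current(v, il, i0, rs, rsh, n_vt, iters=50):
          # Solve i = il - i0*(exp((v + i*rs)/n_vt) - 1) - (v + i*rs)/rsh
          # for the terminal current i by Newton iteration.
          i = np.full_like(np.asarray(v, dtype=float), il)
          for _ in range(iters):
              e = np.exp((v + i * rs) / n_vt)
              f = il - i0 * (e - 1.0) - (v + i * rs) / rsh - i
              df = -i0 * e * rs / n_vt - rs / rsh - 1.0
              i = i - f / df
          return i

    Fitting then amounts to choosing (il, i0, rs, rsh, n_vt) so that curves computed this way match the measured I-V data across all recorded irradiance and temperature conditions.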

  17. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher-fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller, and actuator, and assumes perfect sensor and estimator models. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher-fidelity CAST software is estimation error. A signal generation model is developed to capture the effect of this estimation error on the overall spacecraft dynamics. This signal generation model is then included in the ADAMS spacecraft dynamics estimate so that the results resemble those of CAST. The signal generation model has the same characteristics (mean, variance, and power spectral density) as the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher-fidelity spacecraft dynamics modeling of the CAST software.
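
    A common way to build such a signal generation model is to shape white noise so that the synthetic error matches the target mean, variance and power spectral density; a minimal AR(1) sketch (illustrative, not the SMAP implementation):

      import numpy as np

      def gen_estimation_error(n, mean, var, rho, seed=0):
          # AR(1) noise x_t = rho * x_{t-1} + w_t with innovation variance
          # var * (1 - rho**2), so the stationary variance is exactly var
          # and rho shapes the spectrum (low-pass for rho > 0).
          rng = np.random.default_rng(seed)
          w = rng.normal(0.0, np.sqrt(var * (1 - rho**2)), n)
          x = np.empty(n)
          x[0] = rng.normal(0.0, np.sqrt(var))
          for k in range(1, n):
              x[k] = rho * x[k - 1] + w[k]
          return mean + x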

  18. Biomass models to estimate carbon stocks for hardwood tree species

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz-Peinado, R.; Montero, G.; Rio, M. del

    2012-11-01

    To estimate forest carbon pools from forest inventories it is necessary to have biomass models or biomass expansion factors. In this study, tree biomass models were developed for the main hardwood forest species in Spain: Alnus glutinosa, Castanea sativa, Ceratonia siliqua, Eucalyptus globulus, Fagus sylvatica, Fraxinus angustifolia, Olea europaea var. sylvestris, Populus x euramericana, Quercus canariensis, Quercus faginea, Quercus ilex, Quercus pyrenaica and Quercus suber. Different tree biomass components were considered: stem with bark, branches of different sizes, and above- and belowground biomass. For each species, a system of equations was fitted using seemingly unrelated regression, fulfilling the additivity property between biomass components. Diameter and total height were explored as independent variables. All models included tree diameter, whereas for the majority of species total height was only considered in the stem biomass models and in some of the branch models. The comparison of the new biomass models with previous models fitted separately for each tree component indicated an improvement in accuracy: a mean reduction of 20% in the root mean square error and a mean increase of 7% in model efficiency in comparison with recently published models. The fitted models therefore allow the biomass stock of hardwood species to be estimated more accurately from Spanish National Forest Inventory data. (Author) 45 refs.

  19. Groundwater Modelling For Recharge Estimation Using Satellite Based Evapotranspiration

    Science.gov (United States)

    Soheili, Mahmoud; (Tom) Rientjes, T. H. M.; (Christiaan) van der Tol, C.

    2017-04-01

    Groundwater movement is influenced by several factors and processes in the hydrological cycle, of which recharge is of high relevance. Since the amount of extractable aquifer water directly relates to the recharge amount, estimation of recharge is a prerequisite of groundwater resources management. Recharge is strongly affected by water loss mechanisms, the most important of which is actual evapotranspiration (ETa). It is, therefore, essential to assess in detail the impact of ETa on groundwater recharge. The objective of this study was to evaluate how recharge estimates were affected when satellite-based evapotranspiration was used instead of in-situ based ETa in the Salland area, the Netherlands. The Methodology for Interactive Planning for Water Management (MIPWA) model setup, which includes a groundwater model for the northern part of the Netherlands, was used for recharge estimation. Surface Energy Balance Algorithm for Land (SEBAL) based actual evapotranspiration maps from Waterschap Groot Salland were also used. Comparison of the SEBAL-based ETa estimates with in-situ based estimates in the Netherlands showed that the SEBAL estimates were not reliable, so they could not serve to calibrate root zone parameters in the CAPSIM model. The annual cumulative ETa map produced by the model showed that the maximum amount of evapotranspiration occurs in mixed forest areas in the northeast and parts of the centre of the region. Estimates ranged from 579 mm down to 0 mm in the most elevated areas with woody vegetation in the southeast of the region. Variations in mean seasonal hydraulic head and groundwater level for each layer showed that the hydraulic gradient follows elevation in the Salland area from southeast (maximum) to northwest (minimum), which indicates the groundwater flow direction. The mean seasonal water balance of the CAPSIM component was evaluated to represent recharge estimation in the first layer. The highest estimated recharge flux occurred in autumn

  20. Deconvolution Estimation in Measurement Error Models: The R Package decon

    Directory of Open Access Journals (Sweden)

    Xiao-Feng Wang

    2011-03-01

    Full Text Available Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors in variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples.
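
    In the homoscedastic Gaussian case the deconvoluting kernel has an explicit Fourier form, which the sketch below evaluates by direct quadrature (the package itself uses an FFT for speed; names and kernel choice here are illustrative):

      import numpy as np

      def decon_kde(x_grid, w, h, sigma):
          # Deconvoluting KDE of X from contaminated data W = X + U,
          # U ~ N(0, sigma^2).  A kernel with compactly supported
          # characteristic function phi_K(t) = (1 - t^2)**3 on [-1, 1]
          # keeps the inversion integral finite; the exp() factor grows
          # fast, so sigma/h must stay moderate.
          t = np.linspace(-1.0, 1.0, 401)
          phi = (1 - t**2)**3 * np.exp(sigma**2 * t**2 / (2 * h**2))
          est = np.empty(len(x_grid))
          for i, x in enumerate(x_grid):
              u = (x - w) / h
              ku = np.trapz(np.cos(np.outer(u, t)) * phi, t, axis=1) / (2 * np.pi)
              est[i] = ku.sum() / (len(w) * h)
          return est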

  1. Parameter Estimation of the Extended Vasiček Model

    Directory of Open Access Journals (Sweden)

    Sanae RUJIVAN

    2010-01-01

    Full Text Available In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function of discretely sampled data of the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the approximate log-likelihood function. The convergence of the AMLEs to the true maximum likelihood estimators is obtained by increasing the number of terms in the expansions while using a small time step size.
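
    For the constant-parameter Vasiček model the Gaussian transition density, and hence the exact log-likelihood, is available in closed form; the extended model replaces it with the paper's expansion. A sketch of the constant-parameter case (names assumed):

      import numpy as np
      from scipy.optimize import minimize

      def vasicek_negloglik(params, r, dt):
          # Exact transition of dr = kappa*(theta - r) dt + sigma dW:
          # r_{t+dt} | r_t ~ N(theta + (r_t - theta)*a,
          #                    sigma^2 * (1 - a^2) / (2 * kappa))
          kappa, theta, sigma = params
          a = np.exp(-kappa * dt)
          mean = theta + (r[:-1] - theta) * a
          var = sigma**2 * (1 - a**2) / (2 * kappa)
          z = r[1:] - mean
          return 0.5 * np.sum(np.log(2 * np.pi * var) + z**2 / var)

      # e.g. minimize(vasicek_negloglik, x0=[0.5, 0.03, 0.01],
      #               args=(r, dt), method="Nelder-Mead")  # r: sampled rates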

  2. Comparisons of Maximum Likelihood Estimates and Bayesian Estimates for the Discretized Discovery Process Model

    Institute of Scientific and Technical Information of China (English)

    Gao Chunwen; Xu Jingzhen; Richard Sinding-Larsen

    2005-01-01

    A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems involved in the maximum likelihood method by effectively making use of the information from the prior distribution and from the discovery sequence according to posterior probabilities. All statistical inferences about the parameters of the model and total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, statistical errors of the samples can be easily assessed and the convergence properties can be monitored during the sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify subjective estimates of the required parameters, and the degree of uncertainty about those estimates, in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same North Sea data on which Smith demonstrated his maximum likelihood method. In this case, the Bayesian approach improves on the overly pessimistic results and downward bias of the maximum likelihood procedure.

  3. Subdaily Earth Rotation Models Estimated From GPS and VLBI Data

    Science.gov (United States)

    Steigenberger, P.; Tesmer, V.; MacMillan, D.; Thaller, D.; Rothacher, M.; Fritsche, M.; Rülke, A.; Dietrich, R.

    2007-12-01

    Subdaily changes in Earth rotation at diurnal and semi-diurnal periods are mainly caused by ocean tides. Smaller effects are attributed to the interaction of the atmosphere with the solid Earth. As the tidal periods are well known, models for the ocean tidal contribution to high-frequency Earth rotation variations can be estimated from space-geodetic observations. The subdaily ERP model recommended by the latest IERS conventions was derived from an ocean tide model based on satellite altimetry. Another possibility is the determination of subdaily ERP models from GPS- and/or VLBI-derived Earth rotation parameter series with subdaily resolution. Homogeneously reprocessed long time series of subdaily ERPs computed by GFZ/TU Dresden (12 years of GPS data), DGFI and GSFC (both with 24 years of VLBI data) provide the basis for the estimation of single-technique and combined subdaily ERP models. The impact of different processing options (e.g., weighting) and different temporal resolutions (1 hour vs. 2 hours) will be evaluated by comparisons of the different models amongst each other and with the IERS model. The analysis of the GPS and VLBI residual signals after subtracting the estimated ocean tidal contribution may help to answer the question whether the remaining signals are technique-specific artifacts and systematic errors or true geophysical signals detected by both techniques.

  4. Conical-Domain Model for Estimating GPS Ionospheric Delays

    Science.gov (United States)

    Sparks, Lawrence; Komjathy, Attila; Mannucci, Anthony

    2009-01-01

    The conical-domain model is a computational model, now undergoing development, for estimating ionospheric delays of Global Positioning System (GPS) signals. Relative to the standard ionospheric delay model described below, the conical-domain model offers improved accuracy. In the absence of selective availability, the ionosphere is the largest source of error for single-frequency users of GPS. Because ionospheric signal delays contribute to errors in GPS position and time measurements, satellite-based augmentation systems (SBASs) have been designed to estimate these delays and broadcast corrections. Several national and international SBASs are currently in various stages of development to enhance the integrity and accuracy of GPS measurements for airline navigation. In the Wide Area Augmentation System (WAAS) of the United States, slant ionospheric delay errors and confidence bounds are derived from estimates of vertical ionospheric delay modeled on a grid at regularly spaced intervals of latitude and longitude. The estimate of vertical delay at each ionospheric grid point (IGP) is calculated from a planar fit of neighboring slant delay measurements, projected to vertical using a standard, thin-shell model of the ionosphere. Interpolation on the WAAS grid enables estimation of the vertical delay at the ionospheric pierce point (IPP) corresponding to any arbitrary measurement of a user. (The IPP of a given user's measurement is the point where the GPS signal ray path intersects a reference ionospheric height.) The product of the interpolated value and the user's thin-shell obliquity factor provides an estimate of the user's ionospheric slant delay. Two types of error that restrict the accuracy of the thin-shell model are absent in the conical domain model: (1) error due to the implicit assumption that the electron density is independent of the azimuthal angle at the IPP and (2) error arising from the slant-to-vertical conversion. At low latitudes or at mid
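
    The thin-shell obliquity factor mentioned above is a one-line geometric mapping from vertical to slant delay (typical constants assumed):

      import numpy as np

      def thin_shell_obliquity(elev_rad, shell_height_m=350e3, re_m=6371e3):
          # Ratio of slant to vertical delay for a ray at elevation angle
          # elev_rad piercing a thin shell at shell_height_m above a
          # spherical Earth of radius re_m.
          s = re_m * np.cos(elev_rad) / (re_m + shell_height_m)
          return 1.0 / np.sqrt(1.0 - s**2)

      # slant_delay ~ thin_shell_obliquity(elev) * vertical_delay_at_IPP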

  5. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    Science.gov (United States)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large

  6. Robust Head Pose Estimation Using a 3D Morphable Model

    Directory of Open Access Journals (Sweden)

    Ying Cai

    2015-01-01

    Full Text Available Head pose estimation from single 2D images has been considered an important and challenging research task in computer vision. This paper presents a novel head pose estimation method which utilizes the shape model of the Basel Face Model and five fiducial points in faces. It adjusts shape deformation according to a Laplace distribution to accommodate shape variation across different persons. A new matching method based on the PSO (particle swarm optimization) algorithm is applied both to reduce the time cost of shape reconstruction and to achieve higher accuracy than traditional optimization methods. In order to evaluate accuracy objectively, we propose a new way to compute pose estimation errors. Experiments on the BFM-synthetic database, the BU-3DFE database, the CUbiC FacePix database, the CMU PIE face database, and the CAS-PEAL-R1 database show that the proposed method is robust, accurate, and computationally efficient.
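
    A minimal particle swarm optimizer of the kind used for the shape matching (standard algorithm; all names and defaults are illustrative):

      import numpy as np

      def pso_minimize(f, bounds, n_particles=30, iters=200,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
          # Particles move under inertia (w) plus attraction to their own
          # best position (c1) and to the swarm's best position (c2).
          rng = np.random.default_rng(seed)
          lo = np.asarray(bounds[0], float)
          hi = np.asarray(bounds[1], float)
          x = lo + rng.random((n_particles, lo.size)) * (hi - lo)
          v = np.zeros_like(x)
          pbest = x.copy()
          pval = np.array([f(p) for p in x])
          g = pbest[pval.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              fx = np.array([f(p) for p in x])
              better = fx < pval
              pbest[better], pval[better] = x[better], fx[better]
              g = pbest[pval.argmin()].copy()
          return g, pval.min()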

  7. Reducing component estimation for varying coefficient models with longitudinal data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Varying-coefficient models with longitudinal observations are very useful in epidemiology and some other practical fields. In this paper, a reducing component procedure is proposed for estimating the unknown functions and their derivatives in very general models, in which the unknown coefficient functions admit different or the same degrees of smoothness and the covariates can be time-dependent. The asymptotic properties of the estimators, such as consistency, rate of convergence and asymptotic distribution, are derived. The asymptotic results show that the asymptotic variance of the reducing component estimators is smaller than that of the existing estimators when the coefficient functions admit different degrees of smoothness. Finite sample properties of our procedures are studied through Monte Carlo simulations.

  8. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    the parameters, including the noise terms. The parameter estimation method is a maximum likelihood (ML) method in which the likelihood function is evaluated using a Kalman filter technique. The ML method estimates the parameters in a prediction error setting, i.e. the sum of squared one-step prediction errors is minimized. For comparison, the parameters are also estimated with an output error method, in which the sum of squared simulation errors is minimized. The former methodology is optimal for short-term prediction whereas the latter is optimal for simulation. Hence, depending on the purpose, it is possible to select whether the parameter values are optimal for simulation or prediction. The data originate from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series
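
    The two objectives can be made concrete in pseudocode. Here `model` is a hypothetical object with init_state/observe/step methods standing in for the paper's Kalman-filter machinery; the only difference between the criteria is whether the state update sees the measurement.

      def prediction_error_ssq(model, theta, u, y):
          # One-step-ahead errors: the state is re-anchored on each observed
          # output, so the criterion is optimal for short-term prediction.
          x = model.init_state(theta)
          ssq = 0.0
          for ut, yt in zip(u, y):
              ssq += (yt - model.observe(theta, x)) ** 2
              x = model.step(theta, x, ut, yt)    # update uses the measurement
          return ssq

      def simulation_error_ssq(model, theta, u, y):
          # Pure simulation: the state never sees the measurements, so the
          # criterion is optimal when the model is used for simulation.
          x = model.init_state(theta)
          ssq = 0.0
          for ut, yt in zip(u, y):
              ssq += (yt - model.observe(theta, x)) ** 2
              x = model.step(theta, x, ut, None)  # no measurement update
          return ssq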

  9. Autoregressive model selection with simultaneous sparse coefficient estimation

    CERN Document Server

    Sang, Hailin

    2011-01-01

    In this paper we propose a sparse coefficient estimation procedure for autoregressive (AR) models based on penalized conditional maximum likelihood. The penalized conditional maximum likelihood estimator (PCMLE) thus developed has the advantage of performing simultaneous coefficient estimation and model selection. Mild conditions are given on the penalty function and the innovation process, under which the PCMLE satisfies strong consistency, local $N^{-1/2}$ consistency, and an oracle property, respectively, where N is the sample size. Two penalty functions, the least absolute shrinkage and selection operator (LASSO) and the smoothly clipped absolute deviation (SCAD), are considered as examples, and SCAD is shown to have better performance than LASSO. A simulation study confirms our theoretical results. Finally, we provide an application of our method to historical price data of the US Industrial Production Index for consumer goods, and the result is very promising.
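
    The LASSO variant is easy to reproduce with off-the-shelf tools (SCAD needs a dedicated solver): regress the series on its own lags under an L1 penalty so that zeroed coefficients drop their lags, performing estimation and order selection simultaneously. A sketch assuming scikit-learn is available:

      import numpy as np
      from sklearn.linear_model import Lasso

      def lasso_ar(x, max_lag, alpha):
          # Column k of X holds the lag-k values aligned with y = x[max_lag:].
          n = len(x)
          X = np.column_stack([x[max_lag - k:n - k]
                               for k in range(1, max_lag + 1)])
          y = x[max_lag:]
          fit = Lasso(alpha=alpha, fit_intercept=True).fit(X, y)
          return fit.intercept_, fit.coef_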

  10. Modified pendulum model for mean step length estimation.

    Science.gov (United States)

    González, Rafael C; Alvarez, Diego; López, Antonio M; Alvarez, Juan C

    2007-01-01

    Step length estimation is an important issue in areas such as gait analysis, sport training or pedestrian localization. It has been shown that the mean step length can be computed by means of a triaxial accelerometer placed near the center of gravity of the human body. Estimations based on the inverted pendulum model are prone to underestimate the step length, and must be corrected by calibration. In this paper we present a modified pendulum model in which all the parameters correspond to anthropometric data of the individual. The method has been tested with a set of volunteers, both males and females. Experimental results show that this method provides an unbiased estimation of the actual displacement with a standard deviation lower than 2.1%.
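
    For reference, the classic inverted-pendulum geometry that the paper modifies: a vertical excursion h of the centre of mass on a pendulum of leg length l spans 2*sqrt(2*l*h - h**2) horizontally. The calibration factor k below is what uncorrected pendulum models need to offset their systematic underestimation; parameter names are illustrative.

      import numpy as np

      def step_length_pendulum(h, leg_length, k=1.0):
          # h: vertical centre-of-mass excursion (from double-integrated
          # acceleration); leg_length: pendulum length; k: calibration factor.
          return k * 2.0 * np.sqrt(2.0 * leg_length * h - h ** 2)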

  11. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
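
    A simplified numerical sketch of the estimator's structure: a low-rank part from the leading principal components plus a thresholded residual covariance (a constant threshold tau here, whereas the paper's adaptive thresholds are entry-dependent):

      import numpy as np

      def factor_cov_threshold(X, n_factors, tau):
          # X is n_obs x p.  Keep the top principal components as the factor
          # part; threshold the off-diagonal residual (idiosyncratic) part.
          X = X - X.mean(axis=0)
          cov = X.T @ X / len(X)
          vals, vecs = np.linalg.eigh(cov)
          idx = np.argsort(vals)[::-1][:n_factors]
          B = vecs[:, idx] * np.sqrt(vals[idx])
          low_rank = B @ B.T
          resid = cov - low_rank
          off = resid - np.diag(np.diag(resid))
          off[np.abs(off) < tau] = 0.0          # hard-threshold off-diagonals
          return low_rank + np.diag(np.diag(resid)) + off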

  12. High-dimensional covariance matrix estimation in approximate factor models

    CERN Document Server

    Fan, Jianqing; Mincheva, Martina; 10.1214/11-AOS944

    2012-01-01

    The variance--covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu [J. Amer. Statist. Assoc. 106 (2011) 672--684], taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studi...

  13. Estimation of traffic accident costs: a prompted model.

    Science.gov (United States)

    Hejazi, Rokhshad; Shamsudin, Mad Nasir; Radam, Alias; Rahim, Khalid Abdul; Ibrahim, Zelina Zaitun; Yazdani, Saeed

    2013-01-01

    Traffic accidents account for 25% of unnatural deaths in Iran. The main objective of this study is to find a simple model for estimating the economic costs of traffic accidents in a straightforward manner, especially in Islamic countries such as Iran. The model can express the magnitude of traffic accident costs in monetary terms. Data were collected from different sources that included traffic police records, insurance companies and hospitals. The conceptual framework of our study was based on the method of Ayati, who used it to estimate economic costs in Iran. We refined his method to require a minimal set of variables. Our final model has only three variables, all readily available from insurance company and police records. Applying the model showed that traffic accident costs for our case study route were US$2.2 million in 2007.

  14. Estimating the ETAS model from an early aftershock sequence

    Science.gov (United States)

    Omi, Takahiro; Ogata, Yosihiko; Hirata, Yoshito; Aihara, Kazuyuki

    2014-02-01

    Forecasting aftershock probabilities, as early as possible after a main shock, is required to mitigate seismic risks in the disaster area. In general, aftershock activity can be complex, including secondary aftershocks or even the triggering of larger earthquakes. However, implementing such early forecasting has been difficult because numerous aftershocks are unobserved immediately after the main shock due to the dense overlapping of seismic waves. Here we propose a method for estimating the parameters of the epidemic-type aftershock sequence (ETAS) model from incompletely observed aftershocks shortly after the main shock by modeling an empirical feature of the data deficiency. The estimated ETAS model can then effectively forecast subsequent aftershock occurrences. For example, the ETAS model estimated from the first 24 h of data after the main shock forecasts well the secondary aftershocks that follow strong aftershocks. This method can be useful for early and unbiased assessment of the aftershock hazard.
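
    For reference, the conditional intensity of the ETAS model in its standard parameterisation (variable names assumed); maximum likelihood estimation maximizes the sum of log-intensities at the event times minus the integrated intensity, and the paper's contribution is to model the incomplete detection of events shortly after the main shock.

      import numpy as np

      def etas_intensity(t, times, mags, mu, K, alpha, c, p, m0):
          # lambda(t) = mu + sum over past events i of
          #             K * exp(alpha * (m_i - m0)) / (t - t_i + c)**p
          past = times < t
          contrib = (K * np.exp(alpha * (mags[past] - m0))
                     / (t - times[past] + c) ** p)
          return mu + contrib.sum()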

  15. The Impact of Statistical Leakage Models on Design Yield Estimation

    Directory of Open Access Journals (Sweden)

    Rouwaida Kanj

    2011-01-01

    Full Text Available Device mismatch and process variation models play a key role in determining the functionality and yield of sub-100 nm designs. Average characteristics are often of interest, such as the average leakage current or the average read delay. However, detecting rare functional fails is critical for memory design, and designers often seek techniques that enable such events to be modeled accurately. Extremely leaky devices can inflict functionality fails, and the plurality of leaky devices on a bitline increases the dimensionality of the yield estimation problem. Simplified models are possible by adopting approximations to the underlying sum of lognormals. The implications of such approximations for tail probabilities may in turn bias the yield estimate. We review different closed-form approximations and compare them against the CDF matching method, which is shown to be the most effective method for accurate statistical leakage modeling.
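
    A typical closed-form approximation of the kind reviewed is Fenton-Wilkinson moment matching, which replaces the sum of independent lognormals by a single lognormal with the same mean and variance. The first two moments are exact, but the tails are not, which is exactly how such simplifications can bias tail-probability and yield estimates. A sketch:

      import numpy as np

      def fenton_wilkinson(mus, sigmas):
          # Approximate sum of independent LN(mu_i, sigma_i^2) variables by a
          # single lognormal matching the exact mean and variance of the sum.
          m = np.exp(mus + sigmas**2 / 2)                        # means
          v = (np.exp(sigmas**2) - 1) * np.exp(2*mus + sigmas**2)  # variances
          mean, var = m.sum(), v.sum()
          sigma2 = np.log(1 + var / mean**2)
          mu = np.log(mean) - sigma2 / 2
          return mu, np.sqrt(sigma2)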

  16. Modelling and Estimation of Hammerstein System with Preload Nonlinearity

    Directory of Open Access Journals (Sweden)

    Khaled ELLEUCH

    2010-12-01

    Full Text Available This paper deals with modelling and parameter identification of nonlinear systems described by a Hammerstein model having an asymmetric static nonlinearity known as a preload nonlinearity characteristic. The simultaneous use of an easy decomposition technique and generalized orthonormal bases leads to a particular form of the Hammerstein model containing a minimal number of parameters. Using orthonormal bases to describe the linear dynamic block yields a linear regression model, so that least squares techniques can be used for parameter estimation. The Singular Value Decomposition (SVD) technique is applied to separate the coupled parameters. To demonstrate the feasibility of the identification method, an illustrative example is included.
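
    The SVD separation step can be sketched directly: least squares over the orthonormal-basis regressor returns the coupled products of the static-nonlinearity coefficients a and the linear-block coefficients b, and a rank-1 factorisation recovers both up to an exchangeable scale factor (shapes and names assumed):

      import numpy as np

      def separate_parameters(theta, n_a, n_b):
          # theta is the least-squares estimate of vec(a b^T); the best
          # rank-1 approximation of its matrix form yields a and b up to scale.
          M = theta.reshape(n_a, n_b)
          U, s, Vt = np.linalg.svd(M)
          a = U[:, 0] * np.sqrt(s[0])
          b = Vt[0] * np.sqrt(s[0])
          return a, b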

  17. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    Science.gov (United States)

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  18. Modeling, estimation and optimal filtration in signal processing

    CERN Document Server

    Najim, Mohamed

    2010-01-01

    The purpose of this book is to provide graduate students and practitioners with traditional methods and more recent results for model-based approaches in signal processing. Firstly, discrete-time linear models such as AR, MA and ARMA models, their properties and their limitations are introduced. In addition, sinusoidal models are addressed. Secondly, estimation approaches based on least squares methods and instrumental variable techniques are presented. Finally, the book deals with optimal filters, i.e. Wiener and Kalman filtering, and adaptive filters such as the RLS, the LMS and the
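
    As a taste of the adaptive-filter material, a minimal LMS filter (standard algorithm; variable names illustrative): the tap weights move along the negative instantaneous gradient of the squared error.

      import numpy as np

      def lms_filter(x, d, n_taps, mu):
          # x: input signal, d: desired signal, mu: step size.
          w = np.zeros(n_taps)
          y = np.zeros(len(x))
          e = np.zeros(len(x))
          for n in range(n_taps, len(x)):
              u = x[n - n_taps:n][::-1]   # most recent samples first
              y[n] = w @ u
              e[n] = d[n] - y[n]
              w = w + mu * e[n] * u       # stochastic-gradient update
          return y, e, w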

  19. Hybrid Simulation Modeling to Estimate U.S. Energy Elasticities

    Science.gov (United States)

    Baylin-Stern, Adam C.

    This paper demonstrates how a U.S. application of CIMS, a technologically explicit and behaviourally realistic energy-economy simulation model which includes macro-economic feedbacks, can be used to derive estimates of elasticity of substitution (ESUB) and autonomous energy efficiency index (AEEI) parameters. The ability of economies to reduce greenhouse gas emissions depends on the potential for households and industry to decrease overall energy usage and to move from higher- to lower-emissions fuels. Energy economists commonly refer to ESUB estimates to understand the degree of responsiveness of various sectors of an economy, and use such estimates to inform computable general equilibrium models used to study climate policies. Using CIMS, I have generated a set of future 'pseudo-data' based on a series of simulations in which I vary energy and capital input prices over a wide range. I then used this data set to estimate the parameters of transcendental logarithmic production functions using regression techniques. From the production function parameter estimates, I calculated an array of elasticity of substitution values between input pairs. Additionally, this paper demonstrates how CIMS can be used to calculate price-independent changes in energy efficiency in the form of the AEEI, by comparing energy consumption between technologically frozen and 'business as usual' simulations. The paper concludes with some ideas for model and methodological improvement, and how these might figure into future work on the estimation of ESUBs from CIMS. Keywords: Elasticity of substitution; hybrid energy-economy model; translog; autonomous energy efficiency index; rebound effect; fuel switching.

  20. Modeling hypoxia in the Chesapeake Bay: Ensemble estimation using a Bayesian hierarchical model

    Science.gov (United States)

    Stow, Craig A.; Scavia, Donald

    2009-02-01

    Quantifying parameter and prediction uncertainty in a rigorous framework can be an important component of model skill assessment. Generally, models with lower uncertainty will be more useful for prediction and inference than models with higher uncertainty. Ensemble estimation, an idea with deep roots in the Bayesian literature, can be useful to reduce model uncertainty. It is based on the idea that simultaneously estimating common or similar parameters among models can result in more precise estimates. We demonstrate this approach using the Streeter-Phelps dissolved oxygen sag model fit to 29 years of data from Chesapeake Bay. Chesapeake Bay has a long history of bottom water hypoxia and several models are being used to assist management decision-making in this system. The Bayesian framework is particularly useful in a decision context because it can combine both expert-judgment and rigorous parameter estimation to yield model forecasts and a probabilistic estimate of the forecast uncertainty.
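
    The deterministic core of the analysis is the classic Streeter-Phelps sag equation; a sketch of its standard form (kd != kr assumed). In the ensemble/hierarchical setting described above, parameters such as kd and kr would receive common priors across the 29 years of data, which is what sharpens the year-specific estimates.

      import numpy as np

      def streeter_phelps_deficit(t, L0, D0, kd, kr):
          # Dissolved-oxygen deficit at travel time t downstream of a load:
          # kd is the deoxygenation rate, kr the reaeration rate,
          # L0 the initial BOD, D0 the initial deficit.
          return ((kd * L0 / (kr - kd))
                  * (np.exp(-kd * t) - np.exp(-kr * t))
                  + D0 * np.exp(-kr * t))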

  1. An Instructional Cost Estimation Model for the XYZ Community College.

    Science.gov (United States)

    Edmonson, William F.

    This paper presents an enrollment-driven model for estimating instructional costs, developed by the Western Interstate Commission for Higher Education (WICHE). After stating the principles of the WICHE planning system (i.e., various categories of data are gathered, segmented, and then cross-tabulated against one another to yield certain…

  2. Remote sensing estimates of impervious surfaces for pluvial flood modelling

    DEFF Research Database (Denmark)

    Kaspersen, Per Skougaard; Drews, Martin

    This paper investigates the accuracy of medium resolution (MR) satellite imagery in estimating impervious surfaces for European cities at the detail required for pluvial flood modelling. Using remote sensing techniques enables precise and systematic quantification of the influence of the past 30...

  3. Linear Factor Models and the Estimation of Expected Returns

    NARCIS (Netherlands)

    Sarisoy, Cisil; de Goeij, Peter; Werker, Bas

    2016-01-01

    Linear factor models of asset pricing imply a linear relationship between the expected returns of assets and their exposures to one or more sources of risk. We show that exploiting this linear relationship leads to statistical gains of up to 31% in variances when estimating expected returns on individual assets

  4. Shell Model Estimate of Electric Dipole Moments for Xe Isotopes

    Science.gov (United States)

    Teruya, Eri; Yoshinaga, Naotaka; Higashiyama, Koji

    The nuclear Schiff moments of Xe isotopes, which induce electric dipole moments of neutral Xe atoms, are theoretically estimated. Parity- and time-reversal-violating two-body nuclear interactions are assumed. The nuclear wave functions are calculated in terms of the nuclear shell model. The influence of core excitations, in addition to over-shell excitations, on the Schiff moments is discussed.

  5. Marine boundary-layer height estimated from the HIRLAM model

    DEFF Research Database (Denmark)

    Gryning, Sven-Erik; Batchvarova, E.

    2002-01-01

    -number estimates based on output from the operational numerical weather prediction model HIRLAM (the SMHI version, with a grid resolution of 22.5 km x 22.5 km). For southwesterly winds it was found that a relatively large island (Bornholm) lying 20 km upwind of the measuring site influences the boundary

  6. Method of moments estimation of GO-GARCH models

    NARCIS (Netherlands)

    Boswijk, H.P.; van der Weide, R.

    2009-01-01

    We propose a new estimation method for the factor loading matrix in generalized orthogonal GARCH (GO-GARCH) models. The method is based on the eigenvectors of a suitably defined sample autocorrelation matrix of squares and cross-products of the process. The method can therefore be easily applied to

  7. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    . Second, it permits incorporation of prior information on parameter values. Third, it can be applied in the absence of copious data. Finally, it supplies measures of the capacity of the model to reproduce the historical record and the statistical significance of parameter estimates. The method is applied...

  8. Time-of-flight estimation based on covariance models

    NARCIS (Netherlands)

    van der Heijden, Ferdinand; Tuquerres, G.; Regtien, Paulus P.L.

    We address the problem of estimating the time-of-flight (ToF) of a waveform that is disturbed heavily by additional reflections from nearby objects. These additional reflections cause interference patterns that are difficult to predict. The introduction of a model for the reflection in terms of a

  9. Maximum Likelihood Estimation of Nonlinear Structural Equation Models.

    Science.gov (United States)

    Lee, Sik-Yum; Zhu, Hong-Tu

    2002-01-01

    Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)

  10. Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2011-01-01

    ’s optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all...

  11. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Lee, Jinhyuk; Rust, John;

    2016-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). Their implementation of the nested fixed point algorithm used successive approximations to solve t...

  12. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Rust, John; Schjerning, Bertel;

    2015-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). They used an inefficient version of the nested fixed point algorithm that relies on successive app...

  13. Effects of Sample Size, Estimation Methods, and Model Specification on Structural Equation Modeling Fit Indexes.

    Science.gov (United States)

    Fan, Xitao; Wang, Lin; Thompson, Bruce

    1999-01-01

    A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)

  14. Estimating Nonlinear Structural Models: EMM and the Kenny-Judd Model

    Science.gov (United States)

    Lyhagen, Johan

    2007-01-01

    The estimation of nonlinear structural models is not trivial. One reason for this is that a closed form solution of the likelihood may not be feasible or does not exist. We propose to estimate nonlinear structural models using the efficient method of moments, as generating data according to the models is often very easy. A simulation study of the…

  15. An integrated modelling approach to estimate urban traffic emissions

    Science.gov (United States)

    Misra, Aarshabh; Roorda, Matthew J.; MacLean, Heather L.

    2013-07-01

    An integrated modelling approach is adopted to estimate microscale urban traffic emissions. The modelling framework consists of a traffic microsimulation model developed in PARAMICS, a microscopic emissions model (the Comprehensive Modal Emissions Model), and two dispersion models, AERMOD and the Quick Urban and Industrial Complex (QUIC). This framework is applied to a traffic network in downtown Toronto, Canada to evaluate summertime morning-peak traffic emissions of carbon monoxide (CO) and nitrogen oxides (NOx) during five weekdays at a traffic intersection. The model-predicted results are validated against sensor observations, with 100% of the AERMOD-modelled CO concentrations and 97.5% of the QUIC-modelled NOx concentrations within a factor of two of the corresponding observed concentrations. Availability of local estimates of ambient concentration is useful for accurate comparisons of predicted with observed concentrations. Predicted and sensor-measured concentrations are significantly lower than the hourly-threshold Maximum Acceptable Levels for CO (31 ppm, ~90 times lower) and NO2 (0.4 mg/m3, ~12 times lower) within the National Ambient Air Quality Objectives established by Environment Canada.

  16. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    Science.gov (United States)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-05-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and the variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step for the field. First, such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  17. Tyre pressure monitoring using a dynamical model-based estimator

    Science.gov (United States)

    Reina, Giulio; Gentile, Angelo; Messina, Arcangelo

    2015-04-01

    In the last few years, various control systems have been investigated in the automotive field with the aim of increasing safety and stability, avoiding roll-over, and customising handling characteristics. One critical issue connected with their integration is the lack of state and parameter information. As an example, vehicle handling depends to a large extent on tyre inflation pressure. When inflation pressure drops, handling and comfort generally deteriorate; in addition, fuel consumption increases and tyre lifetime decreases. It is therefore important to keep tyres within the normal inflation pressure range. This paper introduces a model-based approach to estimate tyre inflation pressure online. First, basic vertical dynamic modelling of the vehicle is discussed. Then, a parameter estimation framework for dynamic analysis is presented. Several important vehicle parameters, including tyre inflation pressure, can be estimated using the estimated states. The method is designed to work during normal driving using information from standard sensors only. On the one hand, the driver is informed about the inflation pressure and warned of sudden changes; on the other hand, accurate estimates of the vehicle states are available as possible inputs to onboard control systems.

  18. Robust estimation of unbalanced mixture models on samples with outliers.

    Science.gov (United States)

    Galimzianova, Alfiia; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2015-11-01

    Mixture models are often used to compactly represent samples from heterogeneous sources. In the real world, however, samples generally contain an unknown fraction of outliers, and the sources generate different or unbalanced numbers of observations. Such unbalanced and contaminated samples may, for instance, be obtained by high-density data sensors such as imaging devices. Estimation of unbalanced mixture models from samples with outliers requires robust estimation methods. In this paper, we propose a novel robust mixture estimator incorporating trimming of the outliers based on component-wise confidence-level ordering of observations. The proposed method is validated and compared to the state-of-the-art FAST-TLE method on two data sets: one consisting of synthetic samples with a varying fraction of outliers and a varying balance between mixture weights, and one containing structural magnetic resonance images of the brain with tumors of varying volumes. The results on both data sets clearly indicate that the proposed method is capable of robustly estimating unbalanced mixtures over a broad range of outlier fractions. As such, it is applicable to real-world samples in which the outlier fraction cannot be estimated in advance.

  19. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-04-01

    Full Text Available A modelling experiment has been conceived to assess the impact of transport model errors on the methane emissions estimated by an atmospheric inversion system. Synthetic methane observations, given by 10 different model outputs from the international TransCom-CH4 model exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the PYVAR-LMDZ-SACS inverse system to produce 10 different methane emission estimates at the global scale for the year 2005. The same set-up has been used to produce the synthetic observations and to compute flux estimates by inverse modelling, which means that only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg CH4 per year at the global scale, representing 5% of the total methane emissions. At continental and yearly scales, transport model errors have bigger impacts depending on the region, ranging from 36 Tg CH4 in North America to 7 Tg CH4 in Boreal Eurasia (from 23% to 48%). At the model gridbox scale, the spread of inverse estimates can even reach 150% of the prior flux. Thus, transport model errors contribute significant uncertainties to methane estimates by inverse modelling, especially when small spatial scales are invoked. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher resolution models. The analysis of methane flux estimates in these different configurations questions the consistency of transport model errors in current inverse systems. For future methane inversions, an improvement in the modelling of the atmospheric transport would make the estimations more accurate. Likewise, errors of the observation covariance matrix should be more consistently prescribed in future inversions in order to limit the impact of transport model errors on estimated methane

  20. Efficient Estimation of Non-Linear Dynamic Panel Data Models with Application to Smooth Transition Models

    DEFF Research Database (Denmark)

    Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan

    This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte Carlo experiment. We find that estimation of the parameters in the transition function can be problematic but that there may be significant benefits in terms of forecast performance.

  1. Statistical models for estimating daily streamflow in Michigan

    Science.gov (United States)

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

    Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for small l. Composite estimates were formed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviations of composite errors were computed for intervals of length 1 to 40 days. The mean standard deviations of length-l composite errors were generally less than the standard deviation of the OLSR errors for small l. Error magnitudes were compared by computing ratios of the mean standard deviation

  2. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in the model input as well as from non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output, and evaluates between-model variances to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), an artificial neural network (ANN) and neurofuzzy (NF) techniques to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that would be ignored if a single AI model were used.
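
    The BIC-based weighting and the within/between variance decomposition can be written in a few lines (generic BMA formulas; not necessarily BAIMA's exact implementation):

      import numpy as np

      def bma_weights(bics):
          # Posterior model weights from BIC: w_i proportional to
          # exp(-BIC_i / 2), shifted by the minimum for numerical stability.
          b = np.asarray(bics, float)
          w = np.exp(-(b - b.min()) / 2.0)
          return w / w.sum()

      def bma_combine(preds, variances, bics):
          # BMA point estimate plus total variance: the weighted within-model
          # variance plus the between-model spread of the point predictions.
          w = bma_weights(bics)
          mean = w @ preds
          var = w @ variances + w @ (preds - mean) ** 2
          return mean, var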

  3. Evaluation of black carbon estimations in global aerosol models

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2009-11-01

    Full Text Available We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations, aerosol absorption optical depth (AAOD) retrievals from AERONET and the Ozone Monitoring Instrument (OMI), and BC column estimations based on AERONET. In regions other than Asia, most models are biased high compared to surface concentration measurements. However, compared with column AAOD or BC burden retrievals, the models are generally biased low. The average ratio of model to retrieved AAOD is less than 0.7 in South American and 0.6 in African biomass burning regions; both of these regions lack surface concentration measurements. In Asia the average model-to-observed ratio is 0.7 for AAOD and 0.5 for BC surface concentrations. Compared with aircraft measurements over the Americas at latitudes between 0 and 50N, the average model is a factor of 8 larger than observed, and most models exceed the measured BC standard deviation in the mid to upper troposphere. At higher latitudes the average model-to-aircraft BC ratio is 0.4 and models underestimate the observed BC loading in the lower and middle troposphere associated with springtime Arctic haze. Low model bias for AAOD but overestimation of surface and upper-atmospheric BC concentrations at lower latitudes suggests that most models underestimate BC absorption and should improve their estimates of refractive index, particle size, and the optical effects of BC coating. Retrieval uncertainties and/or differences in model diagnostic treatment may also contribute to the model-measurement disparity. The largest AeroCom model diversity occurred in northern Eurasia and the remote Arctic, regions influenced by anthropogenic sources. Changing emissions, aging, removal, or optical properties within a single model

  4. Empirical coverage of model-based variance estimators for remote sensing assisted estimation of stand-level timber volume.

    Science.gov (United States)

    Breidenbach, Johannes; McRoberts, Ronald E; Astrup, Rasmus

    2016-02-01

    Due to the availability of good and reasonably priced auxiliary data, the use of model-based regression-synthetic estimators for small area estimation is popular in operational settings. Examples are forest management inventories, where a linking model is used in combination with airborne laser scanning data to estimate stand-level forest parameters where no or too few observations are collected within the stand. This paper focuses on different approaches to estimating the variances of those estimates. We compared a variance estimator which is based on the estimation of superpopulation parameters with variance estimators which are based on predictions of finite population values; one of the latter variance estimators considered the spatial autocorrelation of the residuals whereas the other one did not. The estimators were applied using stand-level timber volume as the variable of interest and photogrammetric image matching data as auxiliary information. Norwegian National Forest Inventory (NFI) data were used for model calibration and independent data clustered within stands were used for validation. The empirical coverage proportion (ECP) of confidence intervals (CIs) based on the variance estimators which use predictions of finite population values was considerably higher than the ECP of the CI based on the variance estimator which uses the estimation of superpopulation parameters. The ECP further increased when the spatial autocorrelation of the residuals was considered. The study also explores the link between confidence intervals that are based on variance estimates and the well-known confidence and prediction intervals of regression models.
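    A minimal sketch of the coverage computation (names and the 1.96 normal quantile are assumptions; the paper's intervals may be built differently):

        import numpy as np

        def empirical_coverage(estimates, std_errors, true_values, z=1.96):
            # Share of validation stands whose nominal 95% CI covers the
            # stand-level validation value.
            estimates = np.asarray(estimates, dtype=float)
            half = z * np.asarray(std_errors, dtype=float)
            true_values = np.asarray(true_values, dtype=float)
            return np.mean(np.abs(true_values - estimates) <= half)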

  5. Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.

    Science.gov (United States)

    Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of the ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The results from the OLS method show statistically significant bias in the estimated parameters when irrigation pumping uncertainty is ignored. By accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
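    A minimal sketch of the iterative reweighting idea (an assumed linear-model form for illustration, not the authors' exact scheme): if the pumping column of the design matrix carries a known per-observation uncertainty, the propagated variance depends on the current coefficient estimate, so the generalized least-squares weights are re-derived at each iteration.

        import numpy as np

        def iuwls(X, y, sigma_q, j, sigma_obs=1.0, n_iter=20):
            # Start from ordinary least squares.
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            for _ in range(n_iter):
                # Propagate input uncertainty: if column j (pumping) has per-row
                # standard deviation sigma_q, its contribution scales with beta[j].
                var = sigma_obs ** 2 + (beta[j] * sigma_q) ** 2
                W = np.diag(1.0 / var)
                # Generalized least squares with the updated weights.
                beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
            return beta

    Because the weights depend on beta[j], ordinary least squares (equal weights) is recovered when sigma_q is zero.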

  6. Parameter estimation for groundwater models under uncertain irrigation data

    Science.gov (United States)

    Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of the ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The results from the OLS method show statistically significant bias in the estimated parameters when irrigation pumping uncertainty is ignored. By accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.

  7. K factor estimation in distribution transformers using linear regression models

    Directory of Open Access Journals (Sweden)

    Juan Miguel Astorga Gómez

    2016-06-01

    Full Text Available Background: Due to the massive incorporation of electronic equipment into distribution systems, distribution transformers are subject to operating conditions other than the design ones because of the circulation of harmonic currents. It is necessary to quantify the effect produced by these harmonic currents to determine the capacity of the transformer to withstand these new operating conditions. The K-factor is an indicator that estimates the ability of a transformer to withstand the thermal effects caused by harmonic currents. This article presents a linear regression model to estimate the value of the K-factor from the total current harmonic content obtained with low-cost equipment. Method: Two distribution transformers that feed different loads are studied; the variables current, total harmonic distortion (THDi) and K-factor are recorded, and the regression model that best fits the field data is determined. To select the regression model, the coefficient of determination (R2) and the Akaike Information Criterion (AIC) are used. With the selected model, the K-factor is estimated for actual operating conditions. Results: Once the model was determined, it was found that for both the agricultural and the industrial mining load, the present harmonic content (THDi) exceeds the values that these transformers can handle (average of 12.54% and minimum of 8.90% in the agricultural case; average of 18.53% and minimum of 6.80% in the industrial mining case). Conclusions: When estimating the K-factor using polynomial models, it was determined that the studied transformers cannot withstand the current total harmonic distortion of their loads. The appropriate K-factor for the studied transformers should be 4; this allows the transformers to support the total harmonic distortion of their respective loads.
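    A minimal sketch of the degree-selection step (the function name and the Gaussian-error AIC form are assumptions for illustration):

        import numpy as np

        def select_k_factor_model(thdi, k, max_degree=4):
            # Fit polynomials K = f(THDi) of increasing degree and keep the
            # degree with the lowest AIC (Gaussian-error form).
            thdi, k = np.asarray(thdi, float), np.asarray(k, float)
            n, best = len(k), None
            for d in range(1, max_degree + 1):
                coeffs = np.polyfit(thdi, k, d)
                rss = np.sum((k - np.polyval(coeffs, thdi)) ** 2)
                aic = n * np.log(rss / n) + 2 * (d + 1)
                if best is None or aic < best[0]:
                    best = (aic, d, coeffs)
            return best  # (aic, degree, coefficients)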

  8. Evaluation of Black Carbon Estimations in Global Aerosol Models

    Energy Technology Data Exchange (ETDEWEB)

    Koch, D.; Schulz, M.; Kinne, Stefan; McNaughton, C. S.; Spackman, J. R.; Balkanski, Y.; Bauer, S.; Berntsen, T.; Bond, Tami C.; Boucher, Olivier; Chin, M.; Clarke, A. D.; De Luca, N.; Dentener, F.; Diehl, T.; Dubovik, O.; Easter, Richard C.; Fahey, D. W.; Feichter, J.; Fillmore, D.; Freitag, S.; Ghan, Steven J.; Ginoux, P.; Gong, S.; Horowitz, L.; Iversen, T.; Kirkevag, A.; Klimont, Z.; Kondo, Yutaka; Krol, M.; Liu, Xiaohong; Miller, R.; Montanaro, V.; Moteki, N.; Myhre, G.; Penner, J.; Perlwitz, Ja; Pitari, G.; Reddy, S.; Sahu, L.; Sakamoto, H.; Schuster, G.; Schwarz, J. P.; Seland, O.; Stier, P.; Takegawa, Nobuyuki; Takemura, T.; Textor, C.; van Aardenne, John; Zhao, Y.

    2009-11-27

    We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations, aerosol absorption optical depth (AAOD) from AERONET and OMI retrievals, and BC column estimations based on AERONET. In regions other than Asia, most models are biased high compared to surface concentration measurements. However, compared with (column) AAOD or BC burden retrievals, the models are generally biased low. The average ratio of model to retrieved AAOD is less than 0.7 in South American and 0.6 in African biomass burning regions; both of these regions lack surface concentration measurements. In Asia the average model to observed ratio is 0.6 for AAOD and 0.5 for BC surface concentrations. Compared with aircraft measurements over the Americas at latitudes between 0 and 50N, the average model is a factor of 10 larger than observed, and most models exceed the measured BC standard deviation in the mid to upper troposphere. At higher latitudes the average model to aircraft BC ratio is 0.6, and models underestimate the observed BC loading in the lower and middle troposphere associated with springtime Arctic haze. Low model bias for AAOD but overestimation of surface and upper atmospheric BC concentrations at lower latitudes suggests that most models are underestimating BC absorption and should improve estimates for refractive index, particle size, and optical effects of BC coating. Retrieval uncertainties and/or differences with model diagnostic treatment may also contribute to the model-measurement disparity. The largest AeroCom model diversity occurred in northern Eurasia and the remote Arctic, regions influenced by anthropogenic sources. Changing emissions, aging, removal, or optical properties within a single model generated a smaller change in model predictions than the diversity across models.

  9. Evaluation of black carbon estimations in global aerosol models

    Directory of Open Access Journals (Sweden)

    D. Koch

    2009-07-01

    Full Text Available We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations, aerosol absorption optical depth (AAOD) from AERONET and Ozone Monitoring Instrument (OMI) retrievals, and BC column estimations based on AERONET. In regions other than Asia, most models are biased high compared to surface concentration measurements. However, compared with (column) AAOD or BC burden retrievals, the models are generally biased low. The average ratio of model to retrieved AAOD is less than 0.7 in South American and 0.6 in African biomass burning regions; both of these regions lack surface concentration measurements. In Asia the average model to observed ratio is 0.6 for AAOD and 0.5 for BC surface concentrations. Compared with aircraft measurements over the Americas at latitudes between 0 and 50 N, the average model is a factor of 10 larger than observed, and most models exceed the measured BC standard deviation in the mid to upper troposphere. At higher latitudes the average model to aircraft BC ratio is 0.6, and models underestimate the observed BC loading in the lower and middle troposphere associated with springtime Arctic haze. Low model bias for AAOD but overestimation of surface and upper atmospheric BC concentrations at lower latitudes suggests that most models are underestimating BC absorption and should improve estimates for refractive index, particle size, and optical effects of BC coating. Retrieval uncertainties and/or differences with model diagnostic treatment may also contribute to the model-measurement disparity. The largest AeroCom model diversity occurred in northern Eurasia and the remote Arctic, regions influenced by anthropogenic sources. Changing emissions, aging, removal, or optical properties within a single model generated a smaller change in model predictions than the diversity across models.

  10. Singularity of Some Software Reliability Models and Parameter Estimation Method

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    According to the principle, "The failure data is the basis of software reliability analysis", we built a software reliability expert system (SRES) by adopting artificial intelligence technology. By reasoning from the fitting results of the failure data of a software project, the SRES can recommend to users the most suitable software reliability measurement model. We believe that the SRES can well overcome the inconsistency in applications of software reliability models. We report investigation results on the singularity and the parameter estimation methods of the experimental models in the SRES.

  11. Hidden Markov Modeling for Weigh-In-Motion Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Abercrombie, Robert K [ORNL; Ferragut, Erik M [ORNL; Boone, Shane [ORNL

    2012-01-01

    This paper describes a hidden Markov model to reduce the weight measurement error that arises from the complex oscillations of a vehicle treated as a system of discrete masses. At present, oscillations are reduced by requiring a smooth, flat, level approach and a constant, slow speed in a straight line. The model instead uses this inherent variability to assist in determining the true total weight and the individual axle weights of a vehicle. The weight distribution dynamics of a generic moving vehicle were simulated. The model estimation converged to within 1% of the true mass for simulated data. The computational demands of this method, while much greater than those of simple averages, took only seconds to run on a desktop computer.
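    As a toy illustration of the estimation idea (entirely hypothetical and far simpler than the paper's oscillation model): treat candidate true masses as hidden states with Gaussian-distributed oscillating observations; with a static state and identity transitions, the HMM forward pass reduces to accumulating log-likelihoods.

        import numpy as np

        def map_mass(observations, candidate_masses, sigma):
            # Accumulate Gaussian log-likelihoods per candidate true mass.
            # With a static hidden state and identity transitions, the HMM
            # forward pass reduces to this running sum.
            candidate_masses = np.asarray(candidate_masses, dtype=float)
            logp = np.zeros(len(candidate_masses))
            for y in observations:
                logp += -0.5 * ((y - candidate_masses) / sigma) ** 2
            return candidate_masses[np.argmax(logp)]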

  12. Sensorless position estimator applied to nonlinear IPMC model

    Science.gov (United States)

    Bernat, Jakub; Kolota, Jakub

    2016-11-01

    This paper addresses the issue of estimating position for an ionic polymer metal composite (IPMC), a type of electroactive polymer (EAP). The key step is the construction of a sensorless mode that relies only on current feedback. This work takes into account nonlinearities caused by electrochemical effects in the material. Using a recent observer design technique, the authors obtained both a Lyapunov-function-based estimation law and a sliding mode observer. To accomplish the observer design, the IPMC model was identified through a series of experiments. The research comprises time domain measurements. The identification process was completed by means of geometric scaling of three test samples. In the proposed design, the estimated position accurately tracks the polymer position, which is illustrated by the experiments.

  13. Model Year 2016 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2015-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  14. Model Year 2005 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2004-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  15. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

  16. Model Year 2008 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2007-10-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  17. Model Year 2009 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2008-10-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  18. Model Year 2007 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2007-10-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  19. Model Year 2006 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2005-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  20. Model Year 2015 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2014-12-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  1. Model Year 2010 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2009-10-14

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  2. Model Year 2014 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2013-12-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been divided into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.

  3. Modeling of Closed-Die Forging for Estimating Forging Load

    Science.gov (United States)

    Sheth, Debashish; Das, Santanu; Chatterjee, Avik; Bhattacharya, Anirban

    2017-02-01

    Closed die forging is one common metal forming process used for making a range of products. Sufficient load must be exerted on the billet to deform the material. This forging load depends on the work material properties and the frictional characteristics of the work material with the punch and die. Several researchers have worked on the estimation of forging load for specific products under different process variables. Experimental data on deformation resistance and friction were used to calculate the load. In this work, a theoretical estimate of the forging load is made and compared with the value obtained through an LS-DYNA model facilitating finite element analysis. The theoretical work uses the slab method to assess the forging load for an axisymmetric upsetting job made of lead. The theoretical forging load estimate is slightly higher than the experimental one; however, the simulation matches the experimental forging load quite closely, indicating the possibility of wide use of this simulation software.
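    A minimal slab-method sketch for the axisymmetric upsetting case (the low-friction Coulomb form p_avg ≈ Y(1 + 2μR/(3h)) is assumed here; names and the formula choice are illustrative):

        import math

        def upsetting_force(Y, mu, R, h):
            # Slab-method estimate for upsetting a disc of radius R and
            # height h: mean die pressure exceeds the flow stress Y by a
            # friction term that grows with the aspect ratio R/h.
            p_avg = Y * (1.0 + 2.0 * mu * R / (3.0 * h))
            return p_avg * math.pi * R ** 2  # forging load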

  4. Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models

    Science.gov (United States)

    Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea

    2014-05-01

    Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represent a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.

  5. Estimation of Effective Connectivity via Data-Driven Neural Modeling

    Directory of Open Access Journals (Sweden)

    Dean Robert Freestone

    2014-11-01

    Full Text Available This research introduces a new method for functional brain imaging via a process of model inversion. By estimating parameters of a computational model, we are able to track effective connectivity and mean membrane potential dynamics that cannot be directly measured using electrophysiological measurements alone. The ability to track the hidden aspects of neurophysiology will have a profound impact on the way we understand and treat epilepsy. For example, under the assumption that the model captures the key features of the cortical circuits of interest, the framework will provide insights into seizure initiation and termination on a patient-specific basis. It will enable investigation into the effect a particular drug has on specific neural populations and connectivity structures using minimally invasive measurements. The method is based on approximating brain networks using an interconnected neural population model. The neural population model is based on a neural mass model that describes the functional activity of the brain, capturing the mesoscopic biophysics and anatomical structure. The model is made subject-specific by estimating the strength of intra-cortical connections within a region and inter-cortical connections between regions using a novel Kalman filtering method. We demonstrate through simulation how the framework can be used to track the mechanisms involved in seizure initiation and termination.
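    A minimal sketch of the parameter-tracking idea (a scalar toy, not the authors' neural-mass filter): treat a connectivity gain as a random walk and update it with a Kalman filter from an observed signal.

        import numpy as np

        def track_gain(y, u, q=1e-4, r=1e-2):
            # Scalar random-walk Kalman filter: track a drifting gain w in
            # y_t = w_t * u_t + noise, a stand-in for a connectivity strength.
            w, P, est = 0.0, 1.0, []
            for yt, ut in zip(y, u):
                P += q                           # predict: random-walk growth
                K = P * ut / (ut * P * ut + r)   # Kalman gain (H_t = u_t)
                w += K * (yt - w * ut)           # correct with the innovation
                P = (1.0 - K * ut) * P
                est.append(w)
            return np.array(est)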

  6. Spatially random models, estimation theory, and robot arm dynamics

    Science.gov (United States)

    Rodriguez, G.

    1987-01-01

    Spatially random models provide an alternative to the more traditional deterministic models used to describe robot arm dynamics. These alternative models can be used to establish a relationship between the methodologies of estimation theory and robot dynamics. A new class of algorithms for many of the fundamental robotics problems of inverse and forward dynamics, inverse kinematics, etc. can be developed that use computations typical in estimation theory. The algorithms make extensive use of the difference equations of Kalman filtering and Bryson-Frazier smoothing to conduct spatial recursions. The spatially random models are very easy to describe and are based on the assumption that all of the inertial (D'Alembert) forces in the system are represented by a spatially distributed white-noise model. The models can also be used to generate numerically the composite multibody system inertia matrix. This is done without resorting to the more common methods of deterministic modeling involving Lagrangian dynamics, Newton-Euler equations, etc. These methods make substantial use of human knowledge in the derivation and manipulation of equations of motion for complex mechanical systems.

  7. Remaining lifetime modeling using State-of-Health estimation

    Science.gov (United States)

    Beganovic, Nejra; Söffker, Dirk

    2017-08-01

    Technical systems and system components undergo gradual degradation over time. The continuous degradation occurring in a system is reflected in decreased reliability and unavoidably leads to system failure. Therefore, continuous evaluation of State-of-Health (SoH) is necessary to ensure at least the predefined lifetime of the system given by the manufacturer or, even better, to extend that lifetime. A precondition for lifetime extension, however, is accurate estimation of SoH as well as estimation and prediction of the Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. In this contribution, the modeling and selection of suitable lifetime models from a database based on current SoH conditions are discussed. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, related system degradation, and RUL. Two approaches, with accompanying advantages and disadvantages, are introduced and compared. Both approaches are capable of modeling stochastic aging processes of a system by simultaneous adaptation of RUL models to the current SoH. The first approach requires a priori knowledge about aging processes in the system and accurate estimation of SoH; estimation of SoH here is conditioned on tracking the damage actually accumulated in the system, so that particular model parameters are defined according to a priori known assumptions about the system's aging. Prediction accuracy in this case is highly dependent on accurate estimation of SoH but includes a high number of degrees of freedom. The second approach does not require a priori knowledge about the system's aging, as particular model parameters are defined in accordance with a multi-objective optimization procedure; the prediction accuracy of this model does not depend strongly on the estimated SoH.

  8. Forward models and state estimation in compensatory eye movements

    Directory of Open Access Journals (Sweden)

    Maarten A Frens

    2009-11-01

    Full Text Available The compensatory eye movement system maintains a stable retinal image, integrating information from different sensory modalities to compensate for head movements. Inspired by recent models of the physiology of limb movements, we suggest that compensatory eye movements (CEM) can be modeled as a control system with three essential building blocks: a forward model that predicts the effects of motor commands; a state estimator that integrates sensory feedback into this prediction; and a feedback controller that translates a state estimate into motor commands. We propose a specific mapping of nuclei within the CEM system onto these control functions. Specifically, we suggest that the Flocculus is responsible for generating the forward model prediction and that the Vestibular Nuclei integrate sensory feedback to generate an estimate of current state. Finally, the brainstem motor nuclei – in the case of horizontal compensation this means the Abducens Nucleus and the Nucleus Prepositus Hypoglossi – implement a feedback controller, translating state into motor commands. While these efforts to understand the physiological control system as a feedback control system are in their infancy, there is the intriguing possibility that compensatory eye movements and targeted voluntary movements use the same cerebellar circuitry in fundamentally different ways.

  9. Urban scale air quality modelling using detailed traffic emissions estimates

    Science.gov (United States)

    Borrego, C.; Amorim, J. H.; Tchepel, O.; Dias, D.; Rafael, S.; Sá, E.; Pimentel, C.; Fontes, T.; Fernandes, P.; Pereira, S. R.; Bandeira, J. M.; Coelho, M. C.

    2016-04-01

    The atmospheric dispersion of NOx and PM10 was simulated with a second generation Gaussian model over a medium-size south-European city. Microscopic traffic models calibrated with GPS data were used to derive typical driving cycles for each road link, while instantaneous emissions were estimated applying a combined Vehicle Specific Power/Co-operative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe (VSP/EMEP) methodology. Site-specific background concentrations were estimated using time series analysis and a low-pass filter applied to local observations. Air quality modelling results are compared against measurements at two locations for a 1 week period. 78% of the results are within a factor of two of the observations for 1-h average concentrations, increasing to 94% for daily averages. Correlation significantly improves when background is added, with an average of 0.89 for the 24 h record. The results highlight the potential of detailed traffic and instantaneous exhaust emissions estimates, together with filtered urban background, to provide accurate input data to Gaussian models applied at the urban scale.
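    For orientation, a minimal sketch of the kind of steady-state Gaussian point-source calculation such models build on (textbook form with ground reflection; the study's second-generation model, its inputs and its dispersion parameterization are considerably more elaborate):

        import numpy as np

        def gaussian_plume(q, u, y, z, h, sy, sz):
            # Steady-state point-source Gaussian plume with ground reflection.
            # q: emission rate, u: wind speed, h: effective source height,
            # sy/sz: dispersion parameters at the downwind distance of interest.
            return (q / (2.0 * np.pi * u * sy * sz)
                    * np.exp(-0.5 * (y / sy) ** 2)
                    * (np.exp(-0.5 * ((z - h) / sz) ** 2)
                       + np.exp(-0.5 * ((z + h) / sz) ** 2)))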

  10. Time-to-Compromise Model for Cyber Risk Reduction Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Miles A. McQueen; Wayne F. Boyer; Mark A. Flynn; George A. Beitel

    2005-09-01

    We propose a new model for estimating the time to compromise a system component that is visible to an attacker. The model provides an estimate of the expected value of the time-to-compromise as a function of known and visible vulnerabilities, and attacker skill level. The time-to-compromise random process model is a composite of three subprocesses associated with attacker actions aimed at the exploitation of vulnerabilities. In a case study, the model was used to aid in a risk reduction estimate between a baseline Supervisory Control and Data Acquisition (SCADA) system and the baseline system enhanced through a specific set of control system security remedial actions. For our case study, the total number of system vulnerabilities was reduced by 86% but the dominant attack path was through a component where the number of vulnerabilities was reduced by only 42% and the time-to-compromise of that component was increased by only 13% to 30% depending on attacker skill level.

  11. The Variance of Energy Estimates for the Product Model

    Directory of Open Access Journals (Sweden)

    David Smallwood

    2003-01-01

    A random process, {x(t)}, which is the product of a slowly varying random window, {w(t)}, and a stationary random process, {g(t)}, is defined. A single realization of the process will be defined as x(t). This is slightly different from the usual definition of the product model, where the window is typically defined as deterministic. An estimate of the energy (the zero-order temporal moment; only in special cases is this the physical energy) of the random process, {x(t)}, is defined as m0 = ∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{∞} |w(t)g(t)|² dt. Relationships for the mean and variance of the energy estimates, m0, are then developed. It is shown that for many cases the uncertainty (4π times the product of the rms duration, Dt, and the rms bandwidth, Df) is approximately the inverse of the normalized variance of the energy. The uncertainty is a quantitative measure of the expected error in the energy estimate. If a transient has a significant random component, a small uncertainty parameter implies a large error in the energy estimate. Attempts to resolve a time/frequency spectrum near the uncertainty limits of a transient with a significant random component will result in large errors in the spectral estimates.
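    As a minimal numerical sketch of these definitions (illustrative names; discrete sums approximate the integrals, and the rms bandwidth of the real-valued record is taken about zero frequency):

        import numpy as np

        def energy_and_uncertainty(x, dt):
            t = np.arange(len(x)) * dt
            e = np.abs(x) ** 2
            m0 = np.sum(e) * dt                     # zero-order temporal moment
            tbar = np.sum(t * e) * dt / m0          # energy centroid
            Dt = np.sqrt(np.sum((t - tbar) ** 2 * e) * dt / m0)  # rms duration
            X = np.fft.rfft(x) * dt
            f = np.fft.rfftfreq(len(x), dt)
            ef = np.abs(X) ** 2
            Df = np.sqrt(np.sum(f ** 2 * ef) / np.sum(ef))       # rms bandwidth
            return m0, 4.0 * np.pi * Dt * Df        # energy, uncertainty product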

  12. Functional response models to estimate feeding rates of wading birds

    Science.gov (United States)

    Collazo, J.A.; Gilliam, J.F.; Miranda-Castro, L.

    2010-01-01

    Forager (predator) abundance may mediate feeding rates in wading birds. Yet, when modeled, feeding rates are typically derived from the purely prey-dependent Holling Type II (HoII) functional response model. Estimates of feeding rates are necessary to evaluate wading bird foraging strategies and their role in food webs; thus, models that incorporate predator dependence warrant consideration. Here, data collected in a mangrove swamp in Puerto Rico in 1994 were reanalyzed, reporting feeding rates for mixed-species flocks after comparing fits of the HoII model, as used in the original work, to the Beddington-DeAngelis (BD) and Crowley-Martin (CM) predator-dependent models. Model CM received most support (AICc wi = 0.44), but models BD and HoII were plausible alternatives (ΔAICc ≤ 2). Results suggested that feeding rates were constrained by predator abundance. Reductions in rates were attributed to interference, which was consistent with the independently observed increase in aggression as flock size increased. However, inferences derived from the HoII model, as used in the original work, were sound. While Holling's Type II and other purely prey-dependent models have fostered advances in wading bird foraging ecology, evaluating models that incorporate predator dependence could lead to a more adequate description of data and processes of interest. The mechanistic bases used to derive the models used here lead to biologically interpretable results and advance understanding of wading bird foraging ecology.
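    For reference, minimal sketches of the three functional-response forms compared above, in their standard textbook parameterizations (assumed here: a is attack rate, h handling time, c an interference coefficient, N prey density, P predator density):

        def holling_type2(N, a, h):
            # Purely prey-dependent feeding rate.
            return a * N / (1.0 + a * h * N)

        def beddington_deangelis(N, P, a, h, c):
            # Interference enters additively in the denominator.
            return a * N / (1.0 + a * h * N + c * (P - 1.0))

        def crowley_martin(N, P, a, h, c):
            # Interference also operates while handling prey.
            return a * N / ((1.0 + a * h * N) * (1.0 + c * (P - 1.0)))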

  13. Neural Net Gains Estimation Based on an Equivalent Model

    Directory of Open Access Journals (Sweden)

    Karen Alicia Aguilar Cruz

    2016-01-01

    Full Text Available A model of an Equivalent Artificial Neural Net (EANN) describes the gains set, viewed as parameters in a layer, and this consideration is a reproducible process, applicable to a neuron in a neural net (NN). The EANN helps to estimate the NN gains or parameters, so we propose two methods to determine them. The first considers a fuzzy inference combined with the traditional Kalman filter, obtaining the equivalent model and estimating in a fuzzy sense the gains matrix A and the proper gain K into the traditional filter identification. The second develops a direct estimation in state space, describing an EANN using the expected value and the recursive description of the gains estimation. Finally, a comparison of both descriptions is performed, highlighting that the analytical method describes the neural net coefficients in a direct form, whereas the other technique requires selecting from the Knowledge Base (KB) the factors based on the functional error and the reference signal built with the past information of the system.

  14. Neural Net Gains Estimation Based on an Equivalent Model

    Science.gov (United States)

    Aguilar Cruz, Karen Alicia; Medel Juárez, José de Jesús; Fernández Muñoz, José Luis; Esmeralda Vigueras Velázquez, Midory

    2016-01-01

    A model of an Equivalent Artificial Neural Net (EANN) describes the gains set, viewed as parameters in a layer, and this consideration is a reproducible process, applicable to a neuron in a neural net (NN). The EANN helps to estimate the NN gains or parameters, so we propose two methods to determine them. The first considers a fuzzy inference combined with the traditional Kalman filter, obtaining the equivalent model and estimating in a fuzzy sense the gains matrix A and the proper gain K into the traditional filter identification. The second develops a direct estimation in state space, describing an EANN using the expected value and the recursive description of the gains estimation. Finally, a comparison of both descriptions is performed, highlighting that the analytical method describes the neural net coefficients in a direct form, whereas the other technique requires selecting from the Knowledge Base (KB) the factors based on the functional error and the reference signal built with the past information of the system. PMID:27366146

  15. Macroeconomic Forecasts in Models with Bayesian Averaging of Classical Estimates

    Directory of Open Access Journals (Sweden)

    Piotr Białowolski

    2012-03-01

    Full Text Available The aim of this paper is to construct a forecasting model oriented on predicting basic macroeconomic variables, namely: the GDP growth rate, the unemployment rate, and consumer price inflation. In order to select the set of the best regressors, Bayesian Averaging of Classical Estimators (BACE) is employed. The models are atheoretical (i.e., they do not reflect causal relationships postulated by macroeconomic theory) and the role of regressors is played by business and consumer tendency survey-based indicators. Additionally, the survey-based indicators are included with a lag that enables forecasting the variables of interest (GDP, unemployment, and inflation) for the four forthcoming quarters without the need to make any additional assumptions concerning the values of the predictor variables in the forecast period. Bayesian Averaging of Classical Estimators is a method allowing for a full and controlled overview of all econometric models which can be obtained out of a particular set of regressors. In this paper the authors describe the method of generating a family of econometric models and the procedure for selecting a final forecasting model. Verification of the procedure is performed by means of out-of-sample forecasts of the main economic variables for the quarters of 2011. The accuracy of the forecasts implies that there is still a need to search for new solutions in atheoretical modelling.

  16. Propagation channel characterization, parameter estimation, and modeling for wireless communications

    CERN Document Server

    Yin, Xuefeng

    2016-01-01

    Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...

  17. Modelling, Estimation and Control of Networked Complex Systems

    CERN Document Server

    Chiuso, Alessandro; Frasca, Mattia; Rizzo, Alessandro; Schenato, Luca; Zampieri, Sandro

    2009-01-01

    The paradigm of complexity is pervading both science and engineering, leading to the emergence of novel approaches oriented at the development of a systemic view of the phenomena under study; the definition of powerful tools for modelling, estimation, and control; and the cross-fertilization of different disciplines and approaches. This book is devoted to networked systems, which are one of the most promising paradigms of complexity. It is demonstrated that complex, dynamical networks are powerful tools to model, estimate, and control many interesting phenomena, like agent coordination, synchronization, social and economic events, networks of critical infrastructures, resource allocation, information processing, or control over communication networks. Moreover, it is shown how the recent technological advances in wireless communication and the decrease in cost and size of electronic devices are promoting the appearance of large inexpensive interconnected systems, each with computational, sensing and mobile capabilities.

  18. Robust Bayesian Regularized Estimation Based on t Regression Model

    Directory of Open Access Journals (Sweden)

    Zean Li

    2015-01-01

    Full Text Available The t distribution is a useful extension of the normal distribution, which can be used for statistical modeling of data sets with heavy tails, and provides robust estimation. In this paper, in view of the advantages of Bayesian analysis, we propose a new robust coefficient estimation and variable selection method based on Bayesian adaptive Lasso t regression. A Gibbs sampler is developed based on the Bayesian hierarchical model framework, where we treat the t distribution as a mixture of normal and gamma distributions and put different penalization parameters on different regression coefficients. We also consider the Bayesian t regression with adaptive group Lasso and obtain the Gibbs sampler from the posterior distributions. Both simulation studies and a real data example show that our method performs well compared with other existing methods when the error distribution has heavy tails and/or outliers.

  19. Parameter Estimation of Photovoltaic Models via Cuckoo Search

    Directory of Open Access Journals (Sweden)

    Jieming Ma

    2013-01-01

    Full Text Available Since conventional methods are incapable of estimating the parameters of Photovoltaic (PV) models with high accuracy, bioinspired algorithms have attracted significant attention in the last decade. Cuckoo Search (CS) is invented based on the inspiration of the brood parasitic behavior of some cuckoo species in combination with Lévy flight behavior. In this paper, a CS-based parameter estimation method is proposed to extract the parameters of single-diode models for commercial PV generators. Simulation results and experimental data show that the CS algorithm is capable of obtaining all the parameters with extremely high accuracy, depicted by a low Root-Mean-Squared-Error (RMSE) value. The proposed method outperforms other algorithms applied in this study.
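    A minimal sketch of the estimation target (the standard five-parameter single-diode equation; the damped fixed-point solver, the thermal voltage value and all names are illustrative assumptions, and well-behaved parameter values are assumed). CS, or any other optimizer, would search the parameter space to minimize the RMSE objective.

        import numpy as np

        def diode_current(V, Iph, I0, Rs, Rsh, a, Vt=0.0259, n_iter=200):
            # Damped fixed-point iteration on the implicit equation
            # I = Iph - I0*(exp((V + I*Rs)/(a*Vt)) - 1) - (V + I*Rs)/Rsh
            I = Iph
            for _ in range(n_iter):
                I_new = (Iph - I0 * np.expm1((V + I * Rs) / (a * Vt))
                         - (V + I * Rs) / Rsh)
                I = 0.5 * I + 0.5 * I_new   # damping for stability
            return I

        def rmse(params, V_data, I_data):
            # Objective a metaheuristic such as CS would minimize.
            I_model = np.array([diode_current(v, *params) for v in V_data])
            return np.sqrt(np.mean((I_model - np.asarray(I_data)) ** 2))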

  20. Estimation of the parameters of ETAS models by Simulated Annealing

    OpenAIRE

    Lombardi, Anna Maria

    2015-01-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is...

  1. Models of Labour Services and Estimates of Total Factor Productivity

    OpenAIRE

    Robert Dixon; David Shepherd

    2007-01-01

    This paper examines the manner in which labour services are modelled in the aggregate production function, concentrating on the relationship between numbers employed and average hours worked. It argues that numbers employed and hours worked are not perfect substitutes and that conventional estimates of total factor productivity which, by using total hours worked as the measure of labour services, assume they are perfect substitutes, will be biased when there are marked changes in average hours worked.

  2. CADLIVE optimizer: web-based parameter estimation for dynamic models

    Directory of Open Access Journals (Sweden)

    Inoue Kentaro

    2012-08-01

    Full Text Available Computer simulation has been an important technique to capture the dynamics of biochemical networks. In most networks, however, few kinetic parameters have been measured in vivo because of experimental complexity. We develop a kinetic parameter estimation system, named the CADLIVE Optimizer, which comprises genetic-algorithm-based solvers with a graphical user interface. This optimizer is integrated into the CADLIVE Dynamic Simulator to attain efficient simulation for dynamic models.

  3. Complex source rate estimation for atmospheric transport and dispersion models

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, L.L.

    1993-09-13

    The accuracy associated with assessing the environmental consequences of an accidental atmospheric release of radioactivity is highly dependent on our knowledge of the source release rate, which is generally poorly known. This paper reports on a technique that integrates radiological measurements with atmospheric dispersion modeling for more accurate source term estimation. We construct a minimum least-squares methodology for solving the inverse problem with no a priori information about the source rate.
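    A minimal sketch of the least-squares inversion (hypothetical names; the transfer matrix K would come from running the dispersion model for unit releases): measured concentrations y are modeled as linear in the unknown release rates q.

        import numpy as np

        def estimate_source_rate(K, y, lam=0.0):
            # y ~ K q : concentrations are linear in the release rates q via
            # dispersion-model transfer coefficients K. lam adds optional
            # Tikhonov damping; lam = 0 is plain least squares, matching the
            # "no a priori information" setting described above.
            n = K.shape[1]
            return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)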

  4. Comparison of Parameter Estimation Methods for Transformer Weibull Lifetime Modelling

    Institute of Scientific and Technical Information of China (English)

    ZHOU Dan; LI Chengrong; WANG Zhongdong

    2013-01-01

    The two-parameter Weibull distribution is the most widely adopted lifetime model for power transformers. An appropriate parameter estimation method is essential to guarantee the accuracy of a derived Weibull lifetime model. Six popular parameter estimation methods (i.e., the maximum likelihood estimation method, two median rank regression methods, one regressing X on Y and the other regressing Y on X, the Kaplan-Meier method, the method based on the cumulative hazard plot, and Li's method) are reviewed and compared in order to find the optimal one for transformer Weibull lifetime modelling. The comparison took several different scenarios into consideration: 10 000 sets of lifetime data, each with a sampling size of 40 to 1 000 and a censoring rate of 90%, were obtained by Monte-Carlo simulations for each scenario. Scale and shape parameters of the Weibull distribution estimated by the six methods, as well as their mean value, median value and 90% confidence band, are obtained. The cross comparison of these results reveals that, among the six methods, the maximum likelihood method is the best one, since it provides the most accurate Weibull parameters, i.e., parameters having the smallest bias in both mean and median values, as well as the shortest 90% confidence band. The maximum likelihood method is therefore recommended over the other methods for transformer Weibull lifetime modelling.
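    A minimal illustration of the recommended maximum likelihood fit (complete, synthetic data; the heavily censored samples studied above would require a likelihood with explicit censoring terms, which scipy's plain fit does not provide):

        from scipy.stats import weibull_min

        # Synthetic uncensored lifetimes with shape 1.8 and scale 25 (years).
        data = weibull_min.rvs(c=1.8, scale=25.0, size=200, random_state=1)
        shape, loc, scale = weibull_min.fit(data, floc=0)  # two-parameter MLE
        print(shape, scale)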

  5. MATHEMATICAL MODEL FOR ESTIMATION OF MECHANICAL SYSTEM CONDITION IN DYNAMICS

    Directory of Open Access Journals (Sweden)

    D. N. Mironov

    2011-01-01

    Full Text Available The paper considers the estimation of the condition of a complicated mechanical system in dynamics, with due account of material degradation and the accumulation of micro-damage. An element of a continuous medium has been simulated and described with the help of a discrete element. The paper contains a description of a model for determining mechanical system longevity in accordance with the number of cycles and the operational period.

  6. A new model for estimating boreal forest fPAR

    Science.gov (United States)

    Majasalmi, Titta; Rautiainen, Miina; Stenberg, Pauline

    2014-05-01

    Life on Earth is continuously sustained by the extraterrestrial flux of photosynthetically active radiation (PAR, 400-700 nm) from the sun. This flux is converted to biomass by chloroplasts in green vegetation. Thus, the fraction of absorbed PAR (fPAR) is a key parameter used in carbon balance studies, and is listed as one of the Essential Climate Variables (ECV). Temporal courses of fPAR for boreal forests are difficult to measure because of their complex 3D structures. Thus, they are most often estimated based on models which quantify the dependency of absorbed radiation on canopy structure. In this study, we adapted a physically-based canopy radiation model into an fPAR model, and compared modeled and measured fPAR in structurally different boreal forest stands. The model is based on the spectral invariants theory, and uses leaf area index (LAI), canopy gap fractions and spectra of foliage and understory as input data. The model differs from previously developed, more detailed fPAR models in that the complex 3D structure of coniferous forests is described using an aggregated canopy parameter - the photon recollision probability p. The strength of the model is that all model inputs are measurable or available through other simple models. First, the model was validated with measurements of instantaneous fPAR obtained with the TRAC instrument in nine Scots pine, Norway spruce and Silver birch stands in a boreal forest in southern Finland. Good agreement was found between modeled and measured fPAR. Next, we applied the model to predict temporal courses of fPAR using data on incoming radiation from a nearby flux tower and sky irradiance models. Application of the model to simulate diurnal and seasonal values of fPAR indicated that the ratio of direct-to-total incident radiation and leaf area index are the key factors behind the magnitude and variation of stand-level fPAR values.
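    One common spectral-invariants form of canopy PAR absorption (an assumption for illustration, not necessarily the authors' exact formulation) combines canopy interceptance, the recollision probability p, and the leaf single-scattering albedo in the PAR band:

        def fpar_spectral_invariants(i0, p, omega=0.15):
            # i0: canopy interceptance (derived from gap fractions / LAI),
            # p: photon recollision probability, omega: leaf single-scattering
            # albedo in the PAR band (0.15 is an illustrative value; leaves
            # absorb PAR strongly). Each interaction absorbs (1 - omega); the
            # geometric series over recollisions sums to the factor below.
            return i0 * (1.0 - omega) / (1.0 - p * omega)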

  7. Estimating the Multilevel Rasch Model: With the lme4 Package

    Directory of Open Access Journals (Sweden)

    Harold Doran

    2007-02-01

    Full Text Available Traditional Rasch estimation of the item and student parameters via marginal maximum likelihood, joint maximum likelihood, or conditional maximum likelihood assumes that individuals in clustered settings are uncorrelated and that items within a test that share a grouping structure are also uncorrelated. These assumptions are often violated, particularly in educational testing situations in which students are grouped into classrooms and many test items share a common grouping structure, such as a content strand or a reading passage. Consequently, one possible approach is to explicitly recognize the clustered nature of the data and directly incorporate random effects to account for the various dependencies. This article demonstrates how the multilevel Rasch model can be estimated using the functions in R for mixed-effects models with crossed or partially crossed random effects. We demonstrate how to model the following hierarchical data structures: (a) individuals clustered in similar settings (e.g., classrooms, schools), (b) items nested within a particular group (such as a content strand or a reading passage), and (c) a teacher × content strand interaction.

  8. The MSFC Solar Activity Future Estimation (MSAFE) Model

    Science.gov (United States)

    Suggs, Ron

    2017-01-01

    The Natural Environments Branch of the Engineering Directorate at Marshall Space Flight Center (MSFC) provides solar cycle forecasts for NASA space flight programs and the aerospace community. These forecasts provide future statistical estimates of sunspot number, solar radio 10.7 cm flux (F10.7), and the geomagnetic planetary index, Ap, for input to various space environment models. For example, many thermosphere density computer models used in spacecraft operations, orbital lifetime analysis, and the planning of future spacecraft missions require F10.7 and Ap as inputs. The solar forecast is updated each month by executing MSAFE using historical and the latest month's observed solar indices to provide estimates for the balance of the current solar cycle. The forecasted solar indices represent 13-month smoothed values, consisting of a best estimate stated as the 50th percentile along with approximate +/- 2 sigma bounds stated as the 95th and 5th percentile values. This presentation will give an overview of the MSAFE model and the forecast for the current solar cycle.

  9. Simple models for estimating dementia severity using machine learning.

    Science.gov (United States)

    Shankle, W R; Mania, S; Dick, M B; Pazzani, M J

    1998-01-01

    Estimating dementia severity using the Clinical Dementia Rating (CDR) Scale is a two-stage process that currently is costly and impractical in community settings, and at best has an interrater reliability of 80%. Because staging of dementia severity is economically and clinically important, we used Machine Learning (ML) algorithms with an Electronic Medical Record (EMR) to identify simpler models for estimating total CDR scores. Compared to a gold standard, which required 34 attributes to derive total CDR scores, ML algorithms identified models with as few as seven attributes. The classification accuracy varied with the algorithm used, with naïve Bayes giving the highest (76%). The mildly demented severity class was the only one with significantly reduced accuracy (59%). If one groups the severity classes into normal, very mild-to-mildly demented, and moderate-to-severely demented, then classification accuracies are clinically acceptable (85%). These simple models can be used in community settings where it is currently not possible to estimate dementia severity due to time and cost constraints.
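
    A hedged sketch of the classification step with the e1071 naive Bayes implementation; the seven attributes and the severity label below are synthetic stand-ins, not the authors' code or data:

    ```r
    library(e1071)
    set.seed(1)
    ## synthetic stand-in for seven EMR attributes and a CDR severity label
    n <- 300
    x <- matrix(rnorm(n * 7), n, 7)
    sev <- cut(x[, 1] + x[, 2] + rnorm(n), 3,
               labels = c("normal", "mild", "moderate"))
    d <- data.frame(x, cdr = sev)
    fit <- naiveBayes(cdr ~ ., data = d)
    mean(predict(fit, d) == d$cdr)   # resubstitution accuracy on toy data
    ```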

  10. Simulation model accurately estimates total dietary iodine intake.

    Science.gov (United States)

    Verkaik-Kloosterman, Janneke; van 't Veer, Pieter; Ocké, Marga C

    2009-07-01

    One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and the iodization of industrially processed foods. To take these uncertainties into account when estimating iodine intake, a simulation model combining deterministic and probabilistic techniques was developed. Data from the Dutch National Food Consumption Survey (1997-1998) and an update of the Food Composition database were used to simulate 3 different scenarios: Dutch iodine legislation until July 2008, Dutch iodine legislation after July 2008, and a potential future situation. Results from studies measuring iodine excretion during the former legislation are comparable with the iodine intakes estimated with our model. For both the former and the current legislation, iodine intake was adequate for a large part of the Dutch population but too low for some young children. In the potential future scenario with reduced salt iodine levels, the percentage of the Dutch population with intakes that were too low increased (almost 10% of young children). To keep iodine intakes adequate, salt iodine levels should not be decreased unless many more foods contain iodized salt. Our model should be useful in predicting the effects of food reformulation or fortification on habitual nutrient intakes.
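
    The deterministic/probabilistic mix can be sketched with a small Monte Carlo simulation; every number below is an invented placeholder, not a value from the Dutch survey:

    ```r
    set.seed(42)
    n <- 1e5
    food_iodine <- 120                        # ug/day from foods (assumed fixed)
    uses_salt   <- rbinom(n, 1, 0.6)          # iodized-salt users (assumed share)
    salt_g      <- pmax(rnorm(n, 2.5, 1), 0)  # discretionary salt, g/day (assumed)
    salt_ppm    <- runif(n, 50, 65)           # ug iodine per g salt (assumed range)
    intake <- food_iodine + uses_salt * salt_g * salt_ppm   # ug iodine per day
    quantile(intake, c(0.05, 0.5, 0.95))      # habitual-intake percentiles
    ```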

  11. Learning curve estimation in medical devices and procedures: hierarchical modeling.

    Science.gov (United States)

    Govindarajulu, Usha S; Stillo, Marco; Goldfarb, David; Matheny, Michael E; Resnic, Frederic S

    2017-07-30

    In the use of medical device procedures, learning effects have been shown to be a critical component of medical device safety surveillance. To support the estimation of these effects, we evaluated multiple methods for modeling learning rates within a complex simulated dataset representing patients treated by physicians clustered within institutions. We employed unique modeling for the learning curves to incorporate the learning hierarchy between institutions and physicians, and then modeled them within established methods that work with hierarchical data, such as generalized estimating equations (GEE) and generalized linear mixed effect models. We found that both methods performed well, but that the GEE may have some advantages over the generalized linear mixed effect models for ease of modeling and a substantially lower rate of model convergence failures. We then focused on the GEE and performed a separate simulation to vary the shape of the learning curve, as well as employing various smoothing methods for the plots. We concluded that while both hierarchical methods can be used with our mathematical modeling of the learning curve, the GEE tended to perform better across multiple simulated scenarios for accurately modeling the learning effect as a function of physician and hospital hierarchical data in the use of a novel medical device. We found that the choice of shape used to produce the 'learning-free' dataset would be dataset specific, while the choices of smoothing method were negligibly different from one another. This was an important application for understanding how best to fit this unique learning curve function for hierarchical physician and hospital data. Copyright © 2017 John Wiley & Sons, Ltd.
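
    A sketch of the GEE side of the comparison with geepack; the variable names and the logarithmic curve shape are illustrative assumptions, not the authors' simulation design:

    ```r
    library(geepack)
    set.seed(7)
    ## toy data: 20 physicians, 30 consecutive cases each
    cases <- data.frame(physician = rep(1:20, each = 30),
                        case_number = rep(1:30, times = 20))
    ## adverse-event risk declines with experience (assumed log shape)
    cases$adverse <- rbinom(nrow(cases), 1,
                            plogis(-1 - 0.4 * log(cases$case_number)))
    ## GEE with exchangeable correlation within physician
    fit <- geeglm(adverse ~ log(case_number), id = physician, data = cases,
                  family = binomial, corstr = "exchangeable")
    summary(fit)$coefficients
    ```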

  12. Estimation of the parameters of ETAS models by Simulated Annealing

    Science.gov (United States)

    Lombardi, Anna Maria

    2015-02-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.
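
    The full ETAS likelihood is beyond a short example, but the optimization pattern is easy to show with base R's optim(method = "SANN"). Below, a Hawkes-type rate with an exponential kernel stands in for the ETAS intensity, and all data and starting values are synthetic:

    ```r
    set.seed(3)
    t_obs <- sort(runif(200, 0, 100))            # toy event times
    negloglik <- function(par) {                 # par = log(mu, K, c)
      mu <- exp(par[1]); K <- exp(par[2]); cc <- exp(par[3])
      lam <- vapply(t_obs, function(t) {         # conditional intensity at t
        past <- t_obs[t_obs < t]
        mu + K * sum(exp(-(t - past) / cc))
      }, numeric(1))
      Tend <- max(t_obs)
      ## negative log-likelihood of a self-exciting point process
      -(sum(log(lam)) - mu * Tend - K * cc * sum(1 - exp(-(Tend - t_obs) / cc)))
    }
    fit <- optim(c(0, -1, 0), negloglik, method = "SANN",
                 control = list(maxit = 2000))
    exp(fit$par)   # estimated (mu, K, c)
    ```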

  13. J-A Hysteresis Model Parameters Estimation using GA

    Directory of Open Access Journals (Sweden)

    Bogomir Zidaric

    2005-01-01

    This paper presents Jiles and Atherton (J-A) hysteresis model parameter estimation for a soft magnetic composite (SMC) material. The calculation of the Jiles and Atherton hysteresis model parameters is based on experimental data and genetic algorithms (GA). Genetic algorithms operate in a given region of possible solutions, and finding the best solution of a problem in a wide region of possible solutions is uncertain. A new approach to the use of genetic algorithms is proposed to overcome this uncertainty: a genetic algorithm built into another genetic algorithm.
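
    The outer shape of a GA parameter fit is easy to show with the GA package; here a simple saturating curve stands in for the J-A forward model (which in reality requires solving a nonlinear ODE for the magnetization), and the nested-GA refinement is omitted:

    ```r
    library(GA)
    set.seed(5)
    H <- seq(0, 5, by = 0.1)                              # applied field (a.u.)
    model <- function(p, H) p[1] * tanh(H / p[2])         # stand-in forward model
    M_obs <- model(c(1.6, 1.2), H) + rnorm(length(H), 0, 0.02)
    ## GA searches the parameter box, maximizing goodness of fit
    fit <- ga(type = "real-valued",
              fitness = function(p) -sum((M_obs - model(p, H))^2),
              lower = c(0.1, 0.1), upper = c(5, 5),
              maxiter = 200, monitor = FALSE)
    fit@solution                                          # estimated parameters
    ```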

  14. Precise Asymptotics of Error Variance Estimator in Partially Linear Models

    Institute of Scientific and Technical Information of China (English)

    Shao-jun Guo; Min Chen; Feng Liu

    2008-01-01

    In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, y_i = x_i^T β + g(t_i) + ε_i, 1 ≤ i ≤ n, where {ε_i, i = 1, ..., n} are i.i.d. random errors with mean 0 and positive finite variance σ². Following the ideas of Allan Gut and Aurel Spataru [7, 8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and on precise rates in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results in partially linear regression models.

  15. GARCH modelling of covariance in dynamical estimation of inverse solutions

    Energy Technology Data Exchange (ETDEWEB)

    Galka, Andreas [Institute of Experimental and Applied Physics, University of Kiel, 24098 Kiel (Germany) and Institute of Statistical Mathematics (ISM), Minami-Azabu 4-6-7, Tokyo 106-8569 (Japan)]. E-mail: galka@physik.uni-kiel.de; Yamashita, Okito [ATR Computational Neuroscience Laboratories, Hikaridai 2-2-2, Kyoto 619-0288 (Japan); Ozaki, Tohru [Institute of Statistical Mathematics (ISM), Minami-Azabu 4-6-7, Tokyo 106-8569 (Japan)

    2004-12-06

    The problem of estimating unobserved states of spatially extended dynamical systems poses an inverse problem, which can be solved approximately by a recently developed variant of Kalman filtering; in order to provide the model of the dynamics with more flexibility with respect to space and time, we suggest combining the concept of GARCH modelling of covariance, well known in econometrics, with Kalman filtering. We formulate this algorithm for spatiotemporal systems governed by stochastic diffusion equations and demonstrate its feasibility by presenting a numerical simulation designed to imitate the generation of electroencephalographic recordings by the human cortex.
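
    The core idea can be sketched in a few lines: a scalar local-level Kalman filter whose state-noise variance is driven by a GARCH(1,1) recursion on the filter innovations. All constants are invented for illustration; the paper works with spatiotemporal diffusion models, not this toy case:

    ```r
    set.seed(11)
    y <- cumsum(rnorm(300, 0, 0.3)) + rnorm(300, 0, 0.5)  # toy observations
    omega <- 0.01; alpha <- 0.1; beta <- 0.85             # GARCH(1,1) parameters
    R <- 0.25                                             # obs-noise variance
    x <- 0; P <- 1; Q <- 0.1                              # state, covariance, Q0
    xf <- numeric(length(y))
    for (t in seq_along(y)) {
      Pp <- P + Q                    # predict (state is a random walk)
      S  <- Pp + R                   # innovation variance
      v  <- y[t] - x                 # innovation
      K  <- Pp / S                   # Kalman gain
      x  <- x + K * v
      P  <- (1 - K) * Pp
      xf[t] <- x
      Q <- omega + alpha * v^2 + beta * Q  # GARCH update of state-noise variance
    }
    ```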

  16. MAXIMUM LIKELIHOOD ESTIMATION IN GENERALIZED GAMMA TYPE MODEL

    Directory of Open Access Journals (Sweden)

    Vinod Kumar

    2010-01-01

    In the present paper, the maximum likelihood estimates of the two parameters of a generalized gamma type model have been obtained directly by solving the likelihood equations, as well as by first reparametrizing the model and then solving the likelihood equations (as done by Prentice, 1974) for fixed values of the third parameter. It is found that reparametrization neither reduces the bulk nor the complexity of the calculations, as claimed by Prentice (1974). The procedure has been illustrated with the help of an example. The distribution of the MLE of q along with its properties has also been obtained.
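
    A hedged sketch of the direct route, maximizing a generalized gamma log-likelihood numerically in R; this uses Stacy's parameterization with density p x^(d-1) exp(-(x/a)^p) / (a^d Γ(d/p)), which is one common form of the "generalized gamma type" family, and the data and starting values are synthetic:

    ```r
    set.seed(9)
    x <- rgamma(500, shape = 2, scale = 1.5)   # toy data (p = 1 special case)
    negll <- function(th) {                    # th = log(a, d, p) for positivity
      a <- exp(th[1]); d <- exp(th[2]); p <- exp(th[3])
      -sum(log(p) - d * log(a) - lgamma(d / p) +
           (d - 1) * log(x) - (x / a)^p)
    }
    fit <- optim(c(0, 0, 0), negll, method = "BFGS")
    exp(fit$par)   # MLEs of (a, d, p)
    ```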

  17. In-phase and quadrature imbalance modeling, estimation, and compensation

    CERN Document Server

    Li, Yabo

    2013-01-01

    This book provides a unified IQ imbalance model and systematically reviews the existing estimation and compensation schemes. It covers the different assumptions and approaches that lead to many models of IQ imbalance. In wireless communication systems, the In-phase and Quadrature (IQ) modulator and demodulator are usually used as transmitter (TX) and receiver (RX), respectively. For Digital-to-Analog Converter (DAC) and Analog-to-Digital Converter (ADC) limited systems, such as multi-giga-hertz bandwidth millimeter-wave systems, using analog modulator and demodulator is still a low power and l

  18. The Software Costs Estimation Based on UML Model

    Institute of Scientific and Technical Information of China (English)

    Xiaoping Yang; Lu Jun; Yuefeng Zhao

    2004-01-01

    UML is a standard modeling language used in object-oriented analysis and design. Function point analysis is a method used to measure the size of an application; it is independent of the implementation programming language, and its results can be compared across different development processes. This paper presents a method that uses the UML requirements analysis model to analyze an application's function points, so software developers can use it to estimate a project's size and cost. An improved method is given at the end of the paper.

  19. Naive Probability: Model-Based Estimates of Unique Events.

    Science.gov (United States)

    Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N

    2015-08-01

    We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning.

  20. Allometric models for estimating biomass and carbon in Alnus acuminata

    Directory of Open Access Journals (Sweden)

    William Fonseca

    2013-12-01

    In order to quantify the climate change mitigation potential of forest plantations, information on total biomass and its growth rate is required. Depending on the method used, studying biomass behavior can be a complex and expensive activity. The main objective of this research was to develop allometric models to estimate biomass for different tree components (leaves, branches, stem and root) and total tree biomass in Alnus acuminata (Kunth) in Costa Rica. Additionally, models were developed to estimate biomass and carbon in trees per hectare and for total plant biomass per hectare (trees + herbaceous vegetation + necromass). To construct the tree models, 41 sampling plots were evaluated in seven sites, from which 47 trees with diameters from 4.5 to 44.5 cm were selected to be harvested. The selected models for stem, root and total tree biomass achieved r² > 93.87%, while the adjusted r² for leaves and branches was 88%. For the biomass and carbon models for total trees and total plant biomass per hectare, r² was > 99%. The average biomass expansion factor was 1.22 for aboveground biomass and 1.43 for total biomass (when the root was included). The carbon fraction in plant biomass varied between 32.9 and 46.7%, and the percentage of soil carbon was 3%.
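
    The customary backbone of such allometric models is a log-log regression of biomass on diameter, with a bias correction when back-transforming; a toy illustration (coefficients invented, not the paper's fitted values):

    ```r
    set.seed(10)
    dbh <- runif(47, 4.5, 44.5)                         # diameters, cm (toy)
    biomass <- exp(-2 + 2.4 * log(dbh) + rnorm(47, 0, 0.3))
    fit <- lm(log(biomass) ~ log(dbh))                  # allometric power law
    cf  <- exp(summary(fit)$sigma^2 / 2)                # Baskerville correction
    cf * exp(predict(fit, newdata = data.frame(dbh = 25)))  # biomass at 25 cm
    ```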

  1. Bolasso: model consistent Lasso estimation through the bootstrap

    CERN Document Server

    Bach, Francis

    2008-01-01

    We consider the least-square linear regression problem with regularization by the l1-norm, a problem usually referred to as the Lasso. In this paper, we present a detailed asymptotic analysis of model consistency of the Lasso. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection (i.e., variable selection). For a specific rate decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection algorithm, referred to as the Bolasso, is compared favorably to other linear regression methods on synthetic data and datasets from the UCI machine learning rep...
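
    The Bolasso recipe itself is compact. A sketch with glmnet, using cross-validated lambda as a stand-in for the paper's theoretical decay schedule:

    ```r
    library(glmnet)
    set.seed(2)
    n <- 200; p <- 20
    X <- matrix(rnorm(n * p), n, p)
    y <- as.numeric(X[, 1:3] %*% c(2, -1.5, 1) + rnorm(n))  # 3 true variables
    support <- rep(TRUE, p)
    for (b in 1:32) {                       # bootstrap replications
      idx   <- sample(n, n, replace = TRUE)
      cvfit <- cv.glmnet(X[idx, ], y[idx])
      beta  <- as.numeric(coef(cvfit, s = "lambda.1se"))[-1]  # drop intercept
      support <- support & (beta != 0)      # intersect the selected supports
    }
    which(support)   # variables kept by every bootstrap Lasso
    ```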

  2. Model-Consistent Sparse Estimation through the Bootstrap

    CERN Document Server

    Bach, Francis

    2009-01-01

    We consider the least-square linear regression problem with regularization by the $\\ell^1$-norm, a problem usually referred to as the Lasso. In this paper, we first present a detailed asymptotic analysis of model consistency of the Lasso in low-dimensional settings. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection. For a specific rate decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection procedure, referred to as the Bolasso, is extended to high-dimensional settings by a provably consistent two-step procedure.

  3. Generalized linear model for estimation of missing daily rainfall data

    Science.gov (United States)

    Rahman, Nurul Aishah; Deni, Sayang Mohd; Ramli, Norazan Mohamed

    2017-04-01

    The analysis of rainfall data without missing values is vital in various applications, including climatological, hydrological and meteorological studies. The issue of missing data is a serious concern, since it can introduce bias and lead to misleading conclusions. In past work, five imputation methods (simple arithmetic averaging, the normal ratio method, inverse distance weighting, correlation coefficient weighting and geographical coordinates) have been used to estimate missing data. However, these imputation methods ignore the seasonality in rainfall datasets, which, if accounted for, could give more reliable estimates. This study therefore estimates missing daily rainfall data using a generalized linear model with a gamma distribution and a Fourier series as the smoothing technique for the seasonal component. Forty years of daily rainfall data for the period from 1975 until 2014, consisting of seven stations in the Kelantan region, were selected for the analysis. The findings indicated that the imputation methods provide more accurate estimates, as judged by the smallest mean absolute error, root mean squared error and coefficient of variation of the root mean squared error, when seasonality in the dataset is considered.
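
    The modelling idea can be illustrated directly: a Gamma GLM with a log link and one annual Fourier harmonic. The single harmonic and all numbers are illustrative choices, not the study's fitted model:

    ```r
    set.seed(4)
    doy  <- sample(1:365, 1000, replace = TRUE)        # day of year
    mu   <- exp(1 + 0.6 * sin(2 * pi * doy / 365) +
                    0.3 * cos(2 * pi * doy / 365))     # seasonal mean rainfall
    rain <- rgamma(1000, shape = 2, scale = mu / 2)    # toy positive rainfall
    fit <- glm(rain ~ sin(2 * pi * doy / 365) + cos(2 * pi * doy / 365),
               family = Gamma(link = "log"))
    ## impute a missing value for day 200 from the fitted seasonal model
    predict(fit, newdata = data.frame(doy = 200), type = "response")
    ```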

  4. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.

  5. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    NARCIS (Netherlands)

    Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M.P.; Gloor, E.; Houweling, S.; Kawa, S.R.; Krol, M.C.; Patra, P.K.; Prinn, R.G.; Rigby, M.; Saito, R.; Wilson, C.

    2013-01-01

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks and integrated into an inversion system to produce 10 different methane emission estimates at the global scale for the year 2005.

  6. Apply a hydrological model to estimate local temperature trends

    Science.gov (United States)

    Igarashi, Masao; Shinozawa, Tatsuya

    2014-03-01

    Continuous time series {f(x)}, such as a depth of water, are written f(x) = T(x) + P(x) + S(x) + C(x) in hydrological science, where T(x), P(x), S(x) and C(x) are called the trend, periodic, stochastic and catastrophic components, respectively. We simplify this model and apply it to local temperature data such as those given by E. Halley (1693), the UK (1853-2010), Germany (1880-2010) and Japan (1876-2010). We also apply the model to CO2 data. The model coefficients are evaluated by symbolic computation on a standard personal computer. The accuracy of the obtained nonlinear curve is evaluated by the arithmetic mean of the relative errors between the data and the estimates. E. Halley measured the temperature of Gresham College from 11/1692 to 11/1693. The simplified model shows that the temperature at that time was rather cold compared with recent London temperatures. The UK and Germany data sets show that the maximum and minimum temperatures increased slowly from the 1890s to the 1940s, increased rapidly from the 1940s to the 1980s, and have been decreasing since the 1980s, with the exception of a few local stations. The trend for Japan is similar to these results.
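
    A minimal version of the decomposition, fitting T(x) as a low-order polynomial and P(x) as one annual harmonic by least squares, scored by the arithmetic mean of relative errors as the abstract describes (toy monthly series, all coefficients invented):

    ```r
    set.seed(8)
    t <- seq(1880, 2010, by = 1 / 12)                  # monthly time axis
    temp <- 9 + 0.006 * (t - 1880) + 4 * sin(2 * pi * t) +
            rnorm(length(t), 0, 0.8)                   # toy temperature series
    fit <- lm(temp ~ poly(t - 1880, 3) +               # T(x): cubic trend
                sin(2 * pi * t) + cos(2 * pi * t))     # P(x): annual harmonic
    mean(abs(residuals(fit) / temp))                   # mean relative error
    ```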

  7. The complex model of risk and progression of AMD estimation

    Directory of Open Access Journals (Sweden)

    V. S. Akopyan

    2012-01-01

    Purpose: to develop a method and a statistical model to estimate the individual risk of AMD and the risk of progression to advanced AMD using clinical and genetic risk factors. Methods: A statistical risk assessment model was developed using stepwise binary logistic regression analysis. To estimate the population differences in the prevalence of allelic variants of genes and to develop models adapted to the population of the Moscow region, genotyping and assessment of the influence of other risk factors were performed in two groups: patients with different stages of AMD (n = 74) and a control group (n = 116). Genetic risk factors included in the study: polymorphisms in the complement system genes (C3 and CFH), genes at the 10q26 locus (ARMS2 and HtRA1), and a polymorphism in the mitochondrial gene Mt-ND2. Clinical risk factors included in the study: age, gender, high body mass index, and smoking history. Results: A comprehensive analysis of genetic and clinical risk factors for AMD in the study group was performed. A statistical model for the assessment of individual AMD risk was compiled (sensitivity 66.7%, specificity 78.5%, AUC = 0.76), along with a statistical model describing the probability of late AMD (sensitivity 66.7%, specificity 78.3%, AUC = 0.73). The developed system allows determining the most likely form of late AMD: dry or wet. Conclusion: the developed test system and the mathematical algorithm for determining the risk of AMD and of progression to advanced AMD have fair diagnostic informativeness and are promising for use in clinical practice.
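
    The general shape of such a model is reproducible with stepwise logistic regression and an ROC summary (pROC assumed); the data below are simulated placeholders, not the study's:

    ```r
    library(pROC)
    set.seed(12)
    ## invented clinical and genetic predictors for 190 subjects
    d <- data.frame(age      = rnorm(190, 70, 8),
                    smoker   = rbinom(190, 1, 0.3),
                    cfh_risk = rbinom(190, 2, 0.4))   # risk-allele count
    d$amd <- rbinom(190, 1, plogis(-6 + 0.06 * d$age +
                                   0.8 * d$smoker + 0.7 * d$cfh_risk))
    ## stepwise binary logistic regression, summarized by AUC
    fit <- step(glm(amd ~ ., data = d, family = binomial), trace = 0)
    auc(roc(d$amd, fitted(fit)))
    ```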

  9. Structure Refinement for Vulnerability Estimation Models using Genetic Algorithm Based Model Generators

    Directory of Open Access Journals (Sweden)

    2009-01-01

    In this paper, a method for model structure refinement is proposed and applied to the estimation of the cumulative number of vulnerabilities over time. Security as a quality characteristic is presented and defined. Vulnerabilities are defined and their importance is assessed. Existing models used for estimating the number of vulnerabilities are enumerated and their structures inspected. The principles of genetic model generators are examined. Model structure refinement is defined in comparison with model refinement, and a method for model structure refinement is proposed. A case study shows how the method is applied and the results obtained.

  10. A software for parameter estimation in dynamic models

    Directory of Open Access Journals (Sweden)

    M. Yuceer

    2008-12-01

    A common problem in dynamic systems is to determine parameters in an equation used to represent experimental data. The goal is to determine the values of model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software lacks generality, while other software does not provide ease of use. A user-interactive parameter estimation software package was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software package (PARES) has been developed in the MATLAB environment. When tested with extensive example problems from the literature, the suggested approach is shown to provide good agreement between predicted and observed data with relatively little computing time and few iterations.
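
    The integration-based estimation loop can be sketched in R (deSolve assumed; PARES itself is a MATLAB tool whose internals are not reproduced here): integrate a candidate kinetic model, compare with data, and let an optimizer adjust the rate constant.

    ```r
    library(deSolve)
    rhs <- function(t, y, p) list(-p[["k"]] * y)   # toy first-order kinetics
    t_obs <- seq(0, 5, 0.5)
    y_obs <- 10 * exp(-0.7 * t_obs) + rnorm(length(t_obs), 0, 0.1)  # toy data
    sse <- function(k) {   # integrate model, score fit against the data
      sim <- ode(y = c(y = 10), times = t_obs, func = rhs, parms = c(k = k))
      sum((y_obs - sim[, "y"])^2)
    }
    optimize(sse, c(0.01, 5))$minimum   # estimated rate constant
    ```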

  11. Sea State Estimation Using Model-scale DP Measurements

    DEFF Research Database (Denmark)

    H. Brodtkorb, Astrid; Nielsen, Ulrik D.; J. Sørensen, Asgeir

    2015-01-01

    Complex marine operations are moving further from shore, into deeper waters, and harsher environments. The operating hours of a vessel are weather dependent, and good knowledge of the prevailing weather conditions may ensure cost-efficient and safe operations. This paper considers the estimation of the peak wave frequency of the on-site sea state based on the vessel's motion in waves. A sea state can be described by significant wave height, peak wave frequency and wave direction, and often wind speed and direction are added as well. The signal-based algorithm presented in this paper is based on Fourier transforms of the vessel response in heave, roll and pitch. The measurements are used directly to obtain an estimate of the peak frequency of the waves. Experimental results from model-scale offshore ship runs at the Marine Cybernetics Laboratory (MCLab) at NTNU demonstrate the performance of the proposed...
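
    A miniature of the signal-based idea: estimate the peak wave frequency as the location of the spectral peak of a (here synthetic) heave record.

    ```r
    set.seed(6)
    fs <- 10                                   # sampling rate, Hz
    t  <- seq(0, 600, by = 1 / fs)             # 10-minute record
    heave <- sin(2 * pi * 0.1 * t) + 0.5 * rnorm(length(t))  # 0.1 Hz swell + noise
    ## smoothed periodogram; the spectral peak estimates the wave peak frequency
    sp <- spec.pgram(ts(heave, frequency = fs), spans = c(11, 11), plot = FALSE)
    sp$freq[which.max(sp$spec)]                # estimated peak frequency (Hz)
    ```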

  12. Integrated traffic conflict model for estimating crash modification factors.

    Science.gov (United States)

    Shahdah, Usama; Saccomanno, Frank; Persaud, Bhagwant

    2014-10-01

    Crash modification factors (CMFs) for road safety treatments are usually obtained through observational models based on reported crashes. Observational Bayesian before-and-after methods have been applied to obtain more precise estimates of CMFs by accounting for the regression-to-the-mean bias inherent in naive methods. However, sufficient crash data reported over an extended period of time are needed to provide reliable estimates of treatment effects, a requirement that can be a challenge for certain types of treatment. In addition, these studies require that sites analyzed actually receive the treatment to which the CMF pertains. Another key issue with observational approaches is that they are not causal in nature, and as such, cannot provide a sound "behavioral" rationale for the treatment effect. Surrogate safety measures based on high risk vehicle interactions and traffic conflicts have been proposed to address this issue by providing a more "causal perspective" on lack of safety for different road and traffic conditions. The traffic conflict approach has been criticized, however, for lacking a formal link to observed and verified crashes, a difficulty that this paper attempts to resolve by presenting and investigating an alternative approach for estimating CMFs using simulated conflicts that are linked formally to observed crashes. The integrated CMF estimates are compared to estimates from an empirical Bayes (EB) crash-based before-and-after analysis for the same sample of treatment sites. The treatment considered involves changing left turn signal priority at Toronto signalized intersections from permissive to protected-permissive. The results are promising in that the proposed integrated method yields CMFs that closely match those obtained from the crash-based EB before-and-after analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Simplified Atmospheric Dispersion Model and Model-Based Real Field Estimation System of Air Pollution

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

    The atmospheric dispersion model has been well developed and applied in pollution emergencies and prediction. Based on a sophisticated air diffusion model, this paper proposes a simplified model with some optimizations regarding meteorological and geological conditions. The model is suitable for what is proposed as a Real Field Monitor and Estimation system. The principle of the simplified diffusion model and its optimization is studied. The design of the Real Field Monitor system based on this model and its fundamental implementation are introduced.
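
    Simplified dispersion models of this kind typically start from the textbook Gaussian plume kernel; a sketch with placeholder dispersion coefficients (values roughly appropriate for a few hundred metres downwind under neutral stability, not taken from the paper):

    ```r
    ## Gaussian plume concentration with ground reflection; sy and sz normally
    ## depend on downwind distance and stability class (fixed here for brevity)
    plume <- function(Q, u, y, z, H, sy, sz) {
      (Q / (2 * pi * u * sy * sz)) * exp(-y^2 / (2 * sy^2)) *
        (exp(-(z - H)^2 / (2 * sz^2)) + exp(-(z + H)^2 / (2 * sz^2)))
    }
    ## ground-level, centreline concentration for a 30 m stack (toy numbers)
    plume(Q = 50, u = 4, y = 0, z = 0, H = 30, sy = 36, sz = 19)
    ```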

  14. On the Relationships between Jeffreys Modal and Weighted Likelihood Estimation of Ability under Logistic IRT Models

    Science.gov (United States)

    Magis, David; Raiche, Gilles

    2012-01-01

    This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…

  15. Moderate Deviations for M-estimators in Linear Models with φ-mixing Errors

    Institute of Scientific and Technical Information of China (English)

    Jun FAN

    2012-01-01

    In this paper, the moderate deviations for the M-estimators of the regression parameter in a linear model are obtained when the errors form a strictly stationary φ-mixing sequence. The results are applied to study many different types of M-estimators such as Huber's estimator, the Lp-regression estimator, the least squares estimator and the least absolute deviation estimator.

  16. Modeling the uncertainty of estimating forest carbon stocks in China

    Directory of Open Access Journals (Sweden)

    T. X. Yue

    2015-12-01

    Earth surface systems are controlled by a combination of global and local factors, which cannot be understood without accounting for both the local and global components. The system dynamics cannot be recovered from the global or local controls alone. Ground forest inventory is able to accurately estimate forest carbon stocks at sample plots, but these sample plots are too sparse to support the spatial simulation of carbon stocks with the required accuracy. Satellite observation is an important source of global information for the simulation of carbon stocks. Satellite remote sensing can supply spatially continuous information about the surface of forest carbon stocks, which is impossible from ground-based investigations, but its description has considerable uncertainty. In this paper, we validated the Lund-Potsdam-Jena dynamic global vegetation model (LPJ), the Kriging method for spatial interpolation of ground sample plots, and a satellite-observation-based approach, as well as an approach for fusing the ground sample plots with satellite observations and an assimilation method for incorporating the ground sample plots into LPJ. The validation results indicated that both the data fusion and data assimilation approaches reduced the uncertainty of estimating carbon stocks. The data fusion had the lowest uncertainty, using an existing method for high accuracy surface modeling to fuse the ground sample plots with the satellite observations (HASM-SOA). The estimates produced with HASM-SOA were 26.1 and 28.4% more accurate than the satellite-based approach and spatial interpolation of the sample plots, respectively. Forest carbon stocks of 7.08 Pg were estimated for China during the period from 2004 to 2008, an increase of 2.24 Pg from 1984 to 2008, using the preferred HASM-SOA method.

  17. Estimation of Schiff moments using the nuclear shell model

    Science.gov (United States)

    Teruya, Eri; Yoshinaga, Naotaka; Arai, Ryoichi; Higashiyama, Koji

    2014-09-01

    The existence of a finite permanent electric dipole moment (EDM) of an elementary particle or an atom indicates violation of time-reversal symmetry, which in turn implies violation of charge-parity symmetry through the CPT theorem. The fundamental particle EDMs predicted by the Standard Model are too small to be observed; however, some models beyond the Standard Model produce much larger EDMs, which may be observed in the future. Thus, if a finite EDM is observed, we can conclude that an extension of the Standard Model is needed, and the specific value of the EDM gives a constraint on constructing a new model. Experimental searches for atomic EDMs are now in progress. The EDM of a neutral atom is mainly induced by the nuclear Schiff moment, since the electron EDM is very small and the nuclear EDM is shielded by the outer electrons owing to the Schiff theorem. In this work we estimate the Schiff moments for the lowest 1/2+ states of Xe isotopes around mass 130. The nuclear wave functions, beyond mean-field theories, are calculated in terms of the nuclear shell model. We discuss the influence of core excitations and over-shell excitations on the Schiff moments.

  18. Error estimates for the Skyrme-Hartree-Fock model

    CERN Document Server

    Erler, J

    2014-01-01

    There are many complementary strategies for estimating the extrapolation errors of a model that was calibrated in least-squares fits. We consider the Skyrme-Hartree-Fock model for nuclear structure and dynamics and exemplify the following five strategies: uncertainties from statistical analysis, covariances between observables, trends of residuals, variation of fit data, and dedicated variation of model parameters. This gives useful insight into the impact of the key fit data, which are binding energies, charge r.m.s. radii, and the charge form factor. Amongst others, we check in particular the predictive value for observables in the stable nucleus $^{208}$Pb, the super-heavy element $^{266}$Hs, $r$-process nuclei, and neutron stars.

  19. Estimation in the polynomial errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    ZHANG Sanguo

    2002-01-01

  20. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  1. Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling

    Science.gov (United States)

    Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon

    2010-01-01

    We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the carbon and water cycles modeled under both equilibrium (balance between vegetation and climate) and non-equilibrium (water added through irrigation) conditions. We postulate that the degree to which irrigated dry lands deviate from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is considered the irrigation requirement. For July, results show that spray irrigation resulted in an additional amount of water of 1.3 mm per occurrence, with a frequency of 24.6 hours. In contrast, drip irrigation required only 0.6 mm every 45.6 hours, or 46% of that simulated for spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use where soil salinity is not important, and 66% in saline lands.

  2. Model approach for estimating potato pesticide bioconcentration factor.

    Science.gov (United States)

    Paraíba, Lourival Costa; Kataguiri, Karen

    2008-11-01

    We present a model that estimates the bioconcentration factor (BCF) of pesticides in potatoes, supposing that the pesticide in the soil solution is absorbed by the potato by passive diffusion, following Fick's second law. The pesticides in the model are nonionic organic substances, traditionally used in potato crops, that degrade in the soil according to first-order kinetics. The model gives an expression that relates the BCF to the pesticide elimination rate by the potato, the pesticide accumulation rate within the potato, the growth rate of the potato, and the pesticide degradation rate in the soil. The BCF was estimated supposing steady-state equilibrium of the quotient between the pesticide concentration in the potato and the pesticide concentration in the soil solution. It is suggested that a negative correlation exists between the pesticide BCF and the soil sorption partition coefficient. The model was built on the work of Trapp et al. [Trapp, S., Cammarano, A., Capri, E., Reichenberg, F., Mayer, P., 2007. Diffusion of PAH in potato and carrot slices and application for a potato model. Environ. Sci. Technol. 41 (9), 3103-3108], in which an expression to calculate the diffusivity of persistent organic substances in potatoes is presented. The model consists in adding to that expression the hypothesis that the pesticide degrades in the soil. The value of the BCF suggests which pesticides should be monitored in potatoes.

  3. Modeling And Simulation of Speed and flux Estimator Based on Current & voltage Model

    Directory of Open Access Journals (Sweden)

    Dinesh Chandra Jain

    2011-10-01

    This paper introduces an estimator based on current and voltage models for induction motor (IM) drives. The rotor speed estimation is based on the model reference adaptive system (MRAS) approach. The closed-loop control mechanism is based on the voltage and current models. The control and estimation algorithms use synchronous coordinates as the frame of reference. A speed-sensorless induction motor (IM) drive with robust control characteristics is introduced, beginning with a speed observation system that is insensitive to variations in the motor parameters.

  4. Estimating a marriage matching model with spillover effects.

    Science.gov (United States)

    Choo, Eugene; Siow, Aloysius

    2006-08-01

    We use marriage matching functions to study how marital patterns change when population supplies change. Specifically, we use a behavioral marriage matching function with spillover effects to rationalize marriage and cohabitation behavior in contemporary Canada. The model can estimate a couple's systematic gains to marriage and cohabitation relative to remaining single. These gains are invariant to changes in population supplies. Instead, changes in population supplies redistribute these gains between a couple. Although the model is behavioral, it is nonparametric. It can fit any observed cross-sectional marriage matching distribution. We use the estimated model to quantify the impacts of gender differences in mortality rates and the baby boom on observed marital behavior in Canada. The higher mortality rate of men makes men scarcer than women. We show that the scarceness of men modestly reduced the welfare of women and increased the welfare of men in the marriage market. On the other hand, the baby boom increased older men's net gains to entering the marriage market and lowered middle-aged women's net gains.

  5. Micro, nanosystems and systems on chips modeling, control, and estimation

    CERN Document Server

    Voda, Alina

    2013-01-01

    Micro and nanosystems represent a major scientific and technological challenge, with actual and potential applications in almost all fields of the human activity. The aim of the present book is to present how concepts from dynamical control systems (modeling, estimation, observation, identification, feedback control) can be adapted and applied to the development of original very small-scale systems and of their human interfaces. The application fields presented here come from micro and nanorobotics, biochips, near-field microscopy (AFM and STM) and nanosystems networks. Alina Voda has drawn co

  6. Bootstrapping Nonlinear Least Squares Estimates in the Kalman Filter Model.

    Science.gov (United States)

    1986-01-01

    [The record's abstract is garbled; the recoverable text is a fragment of a results table comparing bias estimates from the bootstrap, Newton-Raphson and empirical methods.] In most cases, parameter estimation for the KF model has been accomplished by maximum likelihood techniques involving the use of scoring or Newton-Raphson methods; when the likelihood is well behaved, the Newton-Raphson and scoring procedures enjoy quadratic convergence in the neighborhood of the maximum and one has a ready-made...

  7. Estimating Population Abundance Using Sightability Models: R SightabilityModel Package

    Directory of Open Access Journals (Sweden)

    John R. Fieberg

    2012-11-01

    Sightability models are binary logistic-regression models used to estimate and adjust for visibility bias in wildlife-population surveys (Steinhorst and Samuel 1989). Estimation proceeds in 2 stages: (1) sightability trials are conducted with marked individuals, and logistic regression is used to estimate the probability of detection as a function of available covariates (e.g., visual obstruction, group size); (2) the fitted model is used to adjust counts (from future surveys) for animals that were not observed. A modified Horvitz-Thompson estimator is used to estimate abundance: counts of observed animal groups are divided by their inclusion probabilities (determined by plot-level sampling probabilities and the detection probabilities estimated in stage 1). We provide a brief historical account of the approach, clarifying and documenting suggested modifications to the variance estimators originally proposed by Steinhorst and Samuel (1989). We then introduce a new R package, SightabilityModel, for estimating abundance using this technique. Lastly, we illustrate the software with a series of examples using data collected from moose (Alces alces) in northeastern Minnesota and mountain goats (Oreamnos americanus) in Washington State.
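
    Stage 2 of the approach fits in a few lines; a toy version of the modified Horvitz-Thompson correction (all numbers, including the stage-1 logit coefficients, are invented, and the SightabilityModel package itself also handles the variance estimators):

    ```r
    group_size <- c(2, 5, 1, 3)              # observed animal groups
    vo         <- c(60, 20, 85, 40)          # visual obstruction covariate (%)
    b          <- c(1.8, -0.035)             # assumed stage-1 logit coefficients
    p_detect   <- plogis(b[1] + b[2] * vo)   # detection probability per group
    p_sample   <- 0.4                        # plot-level inclusion probability
    sum(group_size / (p_detect * p_sample))  # modified Horvitz-Thompson estimate
    ```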

  8. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, has the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data can mirror aspects of well-being and other socioeconomic phenomena.

  9. Empirical Bayes Estimation in the Rasch Model: A Simulation.

    Science.gov (United States)

    de Gruijter, Dato N. M.

    In a situation where the population distribution of latent trait scores can be estimated, the ordinary maximum likelihood estimator of latent trait scores may be improved upon by taking the estimated population distribution into account. In this paper empirical Bayes estimators are compared with the likelihood estimator for three samples of 300…

  10. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Directory of Open Access Journals (Sweden)

    Göran Ståhl

    2016-02-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where models play a core role: model-assisted, model-based, and hybrid estimation. The first two are well known, whereas the third has only recently been introduced in forest surveys. Hybrid inference mixes design-based and model-based inference, since it relies on a probability sample of auxiliary data and a model predicting the target variable from the auxiliary data. We review studies on large-area forest surveys based on model-assisted, model-based, and hybrid estimation, and discuss advantages and disadvantages of the approaches. We conclude that no general recommendations can be made about whether model-assisted, model-based, or hybrid estimation should be preferred. The choice depends on the objective of the survey and the possibilities to acquire appropriate field and remotely sensed data. We also conclude that modelling approaches can only be successfully applied for estimating target variables such as growing stock volume or biomass, which are adequately related to commonly available remotely sensed data; thus, purely field-based surveys remain important for several important forest parameters. Keywords: Design-based inference, Model-assisted estimation, Model-based inference, Hybrid inference, National forest inventory, Remote sensing, Sampling

  11. Cointegration between trends and their estimators in state space models and CVAR models

    DEFF Research Database (Denmark)

    Johansen, Søren; Tabor, Morten Nyboe

    2017-01-01

    In a linear state space model Y(t) = B T(t) + e(t), we investigate whether the unobserved trend, T(t), cointegrates with the predicted trend, E(t), and with the estimated predicted trend, in the sense that the spreads are stationary. We find that this result holds for the spread B(T(t)-E(t)) and the estimated spread. For the spread between the trend and the estimated trend, T(t)-E(t), however, cointegration depends on the identification of B. The same results are found if the observations Y(t) from the state space model are analysed using a cointegrated vector autoregressive model, where the trend is defined as the common trend. Finally, we investigate cointegration between the spreads between trends and their estimators based on the two models, and find the same results. We illustrate with two examples and confirm the results by a small simulation study.

  12. Similarity-based semi-local estimation of EMOS models

    CERN Document Server

    Lerch, Sebastian

    2015-01-01

    Weather forecasts are typically given in the form of forecast ensembles obtained from multiple runs of numerical weather prediction models with varying initial conditions and physics parameterizations. Such ensemble predictions tend to be biased and underdispersive and thus require statistical postprocessing. In the ensemble model output statistics (EMOS) approach, a probabilistic forecast is given by a single parametric distribution with parameters depending on the ensemble members. This article proposes two semi-local methods for estimating the EMOS coefficients where the training data for a specific observation station are augmented with corresponding forecast cases from stations with similar characteristics. Similarities between stations are determined using either distance functions or clustering based on various features of the climatology, forecast errors, ensemble predictions and locations of the observation stations. In a case study on wind speed over Europe with forecasts from the Grand Limited Area...

  13. Static models, recursive estimators and the zero-variance approach

    KAUST Repository

    Rubino, Gerardo

    2016-01-07

    When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state space. In the Monte Carlo domain, one of the most significant difficulties is the rare event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which we combined ideas that lead to very fast estimation procedures with another approach called zero-variance approximation. Together these ideas produced a very efficient method that has the right theoretical property concerning robustness, namely the Bounded Relative Error property. Some examples illustrate the results.

  14. ESTIMATION FOR THE ASYMPTOTIC BEHAVIOR OF THE DELAYED COMPETITION MODEL

    Institute of Scientific and Technical Information of China (English)

    Li Huifeng; Wang Jinliang

    2008-01-01

    In ecological dynamic systems, competition between species is a very universal phenomenon, which can be described by the well-known Volterra-Lotka model in a diffusion form. Noticing that the living space usually changes in a seasonal manner and that the population development of the species may also undergo time-delay impacts, a developed form of this model is investigated in this article. The main approaches employed here are the upper-lower solution method and the energy-estimate technique. The results show that whether the species may sustain survival or not depends on the relations among the birth rate, the death rate, the competition rate, the diffusivity and the time delay. For the survival case, the population evolutions of the two species may exhibit asymptotic periodicity with a distinct upper bound, and this bound depends heavily on the time delay. These results can also be checked by intuitive numerical simulations.

  15. Coastal groundwater table estimation by an elevation fluctuation neural model

    Institute of Scientific and Technical Information of China (English)

    HE Bin; WANG Yi

    2007-01-01

    Restrictions on groundwater management often derive from an insufficient or incomplete groundwater database. A suitable and complete groundwater database allows sound engineering plans for sustainable water usage, including the drilling of wells, rates of water withdrawal, and eventually artificial recharge of the aquifer. The spatial-temporal variations of groundwater monitoring data are frequently influenced by human factors, monitoring equipment malfunctions, natural phenomena, etc. Thus, it is necessary for researchers to check and infill the groundwater database before running a numerical groundwater model. In this paper, an artificial neural network (ANN)-based model is formulated using hydrological and meteorological data to infill the inadequate data in the groundwater database. Prediction results suggest that the ANN method is a desirable choice for estimating missing groundwater data.

  16. Tracer kinetic modelling in MRI: estimating perfusion and capillary permeability

    Science.gov (United States)

    Sourbron, S. P.; Buckley, D. L.

    2012-01-01

    The tracer-kinetic models developed in the early 1990s for dynamic contrast-enhanced MRI (DCE-MRI) have since become a standard in numerous applications. At the same time, the development of MRI hardware has led to increases in image quality and temporal resolution that reveal the limitations of the early models. This in turn has stimulated an interest in the development and application of a second generation of modelling approaches. They are designed to overcome these limitations and produce additional and more accurate information on tissue status. In particular, models of the second generation enable separate estimates of perfusion and capillary permeability rather than a single parameter Ktrans that represents a combination of the two. A variety of such models has been proposed in the literature, and development in the field has been constrained by a lack of transparency regarding terminology, notations and physiological assumptions. In this review, we provide an overview of these models in a manner that is both physically intuitive and mathematically rigorous. All are derived from common first principles, using concepts and notations from general tracer-kinetic theory. Explicit links to their historical origins are included to allow for a transfer of experience obtained in other fields (PET, SPECT, CT). A classification is presented that reveals the links between all models, and with the models of the first generation. Detailed formulae for all solutions are provided to facilitate implementation. Our aim is to encourage the application of these tools to DCE-MRI by offering researchers a clearer understanding of their assumptions and requirements.

  17. Principles of parametric estimation in modeling language competition.

    Science.gov (United States)

    Zhang, Menghan; Gong, Tao

    2013-06-11

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.
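
    The underlying Lotka-Volterra competition skeleton, before the paper's sociolinguistic parameterization of impacts and inheritance rates, looks like this (deSolve assumed; all rates invented):

    ```r
    library(deSolve)
    ## two languages with speaker fractions x and y competing for speakers
    rhs <- function(t, s, p) with(as.list(c(s, p)), {
      dx <- r1 * x * (1 - (x + a12 * y) / K1)
      dy <- r2 * y * (1 - (y + a21 * x) / K2)
      list(c(dx, dy))
    })
    out <- ode(y = c(x = 0.6, y = 0.4), times = 0:100, func = rhs,
               parms = c(r1 = 0.3, r2 = 0.25, a12 = 1.1, a21 = 0.9,
                         K1 = 1, K2 = 1))
    tail(out, 3)   # long-run speaker fractions under the assumed rates
    ```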

  18. Model calibration criteria for estimating ecological flow characteristics

    Science.gov (United States)

    Vis, Marc; Knight, Rodney; Poole, Sandra; Wolfe, William; Seibert, Jan; Breuer, Lutz; Kraft, Philipp

    2016-01-01

    Quantification of streamflow characteristics in ungauged catchments remains a challenge. Hydrological modeling is often used to derive flow time series and to calculate streamflow characteristics for subsequent applications that may differ from those envisioned by the modelers. While the estimation of model parameters for ungauged catchments is a challenging research task in itself, it is important to evaluate whether simulated time series preserve critical aspects of the streamflow hydrograph. To address this question, seven calibration objective functions were evaluated for their ability to preserve ecologically relevant streamflow characteristics of the average annual hydrograph using a runoff model, HBV-light, at 27 catchments in the southeastern United States. Calibration trials were repeated 100 times to reduce parameter uncertainty effects on the results, and 12 ecological flow characteristics were computed for comparison. Our results showed that the most suitable calibration strategy varied according to streamflow characteristic. Combined objective functions generally gave the best results, though a clear underprediction bias was observed. The occurrence of low prediction errors for certain combinations of objective function and flow characteristic suggests that (1) incorporating multiple ecological flow characteristics into a single objective function would increase model accuracy, potentially benefitting decision-making processes; and (2) there may be a need to have different objective functions available to address specific applications of the predicted time series.
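
    For a flavour of what a combined objective function can look like, the sketch below averages Nash-Sutcliffe efficiency on flows and on log-flows, weighting high-flow and low-flow skill equally; this mirrors the general idea of combining criteria, not the paper's exact seven formulations.

        # Illustrative combined calibration objective: equally weighted
        # Nash-Sutcliffe efficiency on flows and on log-flows.
        import numpy as np

        def nse(sim, obs):
            return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def combined_objective(sim, obs, eps=1e-6):
            return 0.5 * nse(sim, obs) + 0.5 * nse(np.log(sim + eps), np.log(obs + eps))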

  19. Quantitative Model for Estimating Soil Erosion Rates Using 137Cs

    Institute of Scientific and Technical Information of China (English)

    YANG Hao; CHANG Qing; et al.

    1998-01-01

    A quantitative model was developed to relate the amount of 137Cs loss from the soil profile to the rate of soil erosion. Following the mass balance model, the depth distribution pattern of 137Cs in the soil profile, the radioactive decay of 137Cs, the sampling year, and the difference in 137Cs fallout amount among years were taken into consideration. By introducing typical depth distribution functions of 137Cs into the model, detailed equations were obtained for different soils. The model shows that the rate of soil erosion is mainly controlled by the depth distribution pattern of 137Cs, the year of sampling, and the percentage reduction in total 137Cs. The relationship between the rate of soil loss and 137Cs depletion is neither linear nor logarithmic. The depth distribution pattern of 137Cs is a major factor for estimating the rate of soil loss, and the soil erosion rate is directly related to the fraction of 137Cs content near the soil surface. The influences of the radioactive decay of 137Cs, the sampling year and the 137Cs input fraction are comparatively small.
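
    One way to see the role of the depth distribution: assuming an exponential 137Cs profile with relaxation depth h0 (an assumption for illustration, not the paper's calibrated functions), the measured percentage depletion X maps directly to an eroded depth and hence a mean erosion rate.

        # Assumed exponential profile: the fraction of the 137Cs inventory above
        # depth d is F(d) = 1 - exp(-d / h0), so a measured depletion of X percent
        # corresponds to an eroded depth d = -h0 * ln(1 - X/100).
        import numpy as np

        def erosion_rate(X_percent, h0_cm, t_sample, t_ref=1963):
            d = -h0_cm * np.log(1 - X_percent / 100.0)  # eroded depth, cm
            return d / (t_sample - t_ref)               # mean rate, cm per year

        print(erosion_rate(X_percent=20, h0_cm=4.0, t_sample=1998))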

  20. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-10-01

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated by an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr−1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr−1 in North America to 7 Tg yr−1 in Boreal Eurasia (from 23% to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly…

  1. Modeling reactive transport with particle tracking and kernel estimators

    Science.gov (United States)

    Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-04-01

    Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth systems. The Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species. For a well-mixed system, the chemical reactions are controlled by the classical thermodynamic rate coefficient. Each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate such a system, an infinite number of particles would be required, which is computationally unfeasible. On the other hand, a finite number of particles leads to a poorly mixed system that is limited by diffusion. Recent works have used this effect to model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect should in most cases be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle mass that expand its region of influence, hence providing a wider region for chemical reactions with time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in diluted systems should be modeled with alternative conceptual models and not with a limited number of particles.
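
    A minimal sketch of the KDE step, assuming a 1-D particle plume and scipy's Gaussian KDE with its default bandwidth rule:

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(0)
        particles = rng.normal(loc=0.0, scale=1.0, size=5000)  # 1-D plume positions
        total_mass = 1.0                                       # mass carried by all particles

        kde = gaussian_kde(particles)        # bandwidth set by Scott's rule by default
        x = np.linspace(-4, 4, 200)
        concentration = total_mass * kde(x)  # smooth mass-per-length estimate at x
        # A box-count estimate with few particles looks artificially "unmixed";
        # the KDE widens each particle's region of influence instead.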

  2. A modified EM algorithm for estimation in generalized mixed models.

    Science.gov (United States)

    Steele, B M

    1996-12-01

    Application of the EM algorithm for estimation in the generalized mixed model has been largely unsuccessful because the E-step cannot be determined in most instances. The E-step computes the conditional expectation of the complete data log-likelihood and when the random effect distribution is normal, this expectation remains an intractable integral. The problem can be approached by numerical or analytic approximations; however, the computational burden imposed by numerical integration methods and the absence of an accurate analytic approximation have limited the use of the EM algorithm. In this paper, Laplace's method is adapted for analytic approximation within the E-step. The proposed algorithm is computationally straightforward and retains much of the conceptual simplicity of the conventional EM algorithm, although the usual convergence properties are not guaranteed. The proposed algorithm accommodates multiple random factors and random effect distributions besides the normal, e.g., the log-gamma distribution. Parameter estimates obtained for several data sets and through simulation show that this modified EM algorithm compares favorably with other generalized mixed model methods.

  3. CONVERGENCE OF ESTIMATORS IN MIXTURE MODELS BASED ON MISSING DATA

    Directory of Open Access Journals (Sweden)

    N Dwidayati

    2014-11-01

    Mixture models can estimate the proportion of patients who are cured and the survival function of patients who are not cured. In this study, a mixture model was developed for cure-rate analysis based on missing data. Several methods can be used to analyze missing data; one of them is the EM algorithm, which is based on two steps: (1) the Expectation step and (2) the Maximization step. The EM algorithm is an iterative approach for learning a model from data with missing values in four steps: (1) choose an initial set of parameters for the model, (2) determine the expected values for the missing data, (3) induce new model parameters from the combination of the expected values and the original data, and (4) if the parameters have not converged, repeat step 2 using the new model. This study shows that in the EM algorithm the log-likelihood for the missing data increases after each iteration; consequently, the likelihood sequence converges whenever the likelihood is bounded from below.
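
    The generic E-step/M-step loop, and the monotone log-likelihood increase the convergence argument rests on, can be seen in a minimal two-component Gaussian mixture (a simpler setting than the cure-rate mixture described above):

        # Generic EM skeleton for a two-component Gaussian mixture. The observed-
        # data log-likelihood is checked to be non-decreasing across iterations.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 200)])

        pi, mu1, mu2, s1, s2 = 0.5, -1.0, 5.0, 1.0, 1.0
        prev_ll = -np.inf
        for _ in range(200):
            p1 = pi * norm.pdf(x, mu1, s1)
            p2 = (1 - pi) * norm.pdf(x, mu2, s2)
            ll = np.sum(np.log(p1 + p2))      # observed-data log-likelihood
            assert ll >= prev_ll - 1e-8       # EM never decreases it
            prev_ll = ll
            r = p1 / (p1 + p2)                # E-step: responsibilities
            pi = r.mean()                     # M-step: weighted updates
            mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1 - r)
            s1 = np.sqrt(np.average((x - mu1) ** 2, weights=r))
            s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - r))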

  4. An empirical model to estimate ultraviolet erythemal transmissivity

    Science.gov (United States)

    Antón, M.; Serrano, A.; Cancillo, M. L.; García, J. A.

    2009-04-01

    An empirical model to estimate the solar ultraviolet erythemal irradiance (UVER) for all-weather conditions is presented. This model proposes a power expression with the UV transmissivity as the dependent variable, and the slant ozone column and the clearness index as independent variables. UVER was measured at three stations in South-Western Spain during a five-year period (2001-2005). A dataset corresponding to the period 2001-2004 was used to develop the model and an independent dataset (year 2005) was used for validation. For all three locations, the empirical model explains more than 95% of UV transmissivity variability due to changes in the two independent variables. In addition, the coefficients of the models show that when the slant ozone amount decreases by 1%, UV transmissivity and, therefore, UVER values increase by approximately 1.33%-1.35%. The coefficients also show that when the clearness index decreases by 1%, UV transmissivity increases by 0.75%-0.78%. The validation of the model provided satisfactory results, with a low mean absolute bias error (MABE) of about 7%-8% for all stations. Finally, a one-day-ahead forecast of the UV Index for cloud-free cases is presented, assuming persistence of the total ozone column. The percentage of days with differences between forecast and experimental UVI lower than ±0.5 unit and ±1 unit is within the range of 28% to 37%, and 60% to 75%, respectively. Therefore, the empirical model proposed in this work provides reliable forecasts of cloud-free UVI in order to inform the public about the possible harmful effects of UV radiation over-exposure.
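
    Since the model is a power expression, its coefficients can be recovered by ordinary least squares after taking logarithms. A sketch under that assumption (data arrays are placeholders); the reported sensitivities imply an ozone exponent near -1.33 and a clearness-index exponent near 0.75-0.78.

        # Log-linear least squares for T_UV = a * SO**b * kt**c.
        import numpy as np

        def fit_power_model(T_uv, slant_ozone, clearness):
            X = np.column_stack([np.ones_like(T_uv),
                                 np.log(slant_ozone), np.log(clearness)])
            coef, *_ = np.linalg.lstsq(X, np.log(T_uv), rcond=None)
            return np.exp(coef[0]), coef[1], coef[2]   # a, b, c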

  6. Variational methods to estimate terrestrial ecosystem model parameters

    Science.gov (United States)

    Delahaies, Sylvain; Roulstone, Ian

    2016-04-01

    Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Then soil chemistry and a non-negligible amount of time transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4DVar) to estimate model parameters and initial carbon stocks for DALEC and to quantify the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
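
    The variational idea in miniature: fit the parameters of a toy one-pool carbon model by minimizing a quadratic model-observation misfit. This keeps only the cost-function structure; DALEC itself has several pools, and the adjoint approach supplies gradients instead of relying on a generic optimizer.

        # Toy one-pool balance C[i] = C[i-1] + frac*GPP[i-1] - k*C[i-1],
        # fitted by minimizing a 4DVar-style quadratic misfit.
        import numpy as np
        from scipy.optimize import minimize

        t = np.arange(100)
        gpp = 5 + 2 * np.sin(2 * np.pi * t / 50)        # synthetic forcing

        def run(params, c0=100.0):
            frac, k = params
            c = np.empty(len(t))
            c[0] = c0
            for i in range(1, len(t)):
                c[i] = c[i - 1] + frac * gpp[i - 1] - k * c[i - 1]
            return c

        obs = run([0.45, 0.02]) + np.random.default_rng(2).normal(0, 1.0, len(t))
        cost = lambda p: np.sum((run(p) - obs) ** 2)    # model-observation misfit
        est = minimize(cost, x0=[0.3, 0.05], bounds=[(0, 1), (1e-4, 0.5)])
        print(est.x)                                    # close to (0.45, 0.02)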

  7. Modeling, State Estimation and Control of Unmanned Helicopters

    Science.gov (United States)

    Lau, Tak Kit

    Unmanned helicopters hold both tremendous potential and challenges. Without risking the lives of human pilots, these vehicles exhibit agile movement and the ability to hover, and hence open up a wide range of applications in hazardous situations. Sparing human lives, however, comes at a stiff price for technology. Some of the key difficulties that arise in these challenges are: (i) there are unexplained cross-coupled responses between the control axes on hingeless helicopters that have puzzled researchers for years; (ii) most, if not all, navigation on unmanned helicopters relies on Global Navigation Satellite Systems (GNSSs), which are susceptible to jamming; (iii) it is often necessary to accommodate re-configurations of the payload or the actuators on the helicopters by repeatedly tuning an autopilot, which requires intensive human supervision and/or system identification. For the dynamics modeling and analysis, we present a comprehensive review of helicopter actuation and dynamics and contribute toward a more complete understanding of the on-axis and off-axis dynamical responses of the helicopter. We focus on a commonly used modeling technique, namely the phase-lag treatment, and employ a first-principles modeling method to establish (i) why the phase-lag technique is inaccurate and (ii) how the helicopter actuation and dynamics can be analyzed more accurately. Moreover, this modeling and analysis reveals the hard-to-measure but crucial parameters of a helicopter model that require constant identification, and hence motivates a model-implicit method for solving the state estimation and control problems on unmanned helicopters. For the state estimation, we present a robust localization method for the unmanned helicopter against GNSS outage. This method infers position from the acceleration measurement of an inertial measurement unit (IMU). At the core of our method are techniques of sensor…

  8. Re-evaluating neonatal-age models for ungulates: Does model choice affect survival estimates?

    Science.gov (United States)

    Grovenburg, Troy W.; Monteith, Kevin L.; Jacques, Christopher N.; Klaver, Robert W.; DePerno, Christopher S.; Brinkman, Todd J.; Monteith, Kyle B.; Gilbert, Sophie L.; Smith, Joshua B.; Bleich, Vernon C.; Swanson, Christopher C.; Jenks, Jonathan A.

    2014-01-01

    New-hoof growth is regarded as the most reliable metric for predicting age of newborn ungulates, but variation in estimated age among hoof-growth equations that have been developed may affect estimates of survival in staggered-entry models. We used known-age newborns to evaluate variation in age estimates among existing hoof-growth equations and to determine the consequences of that variation on survival estimates. During 2001–2009, we captured and radiocollared 174 newborn (≤24-hrs old) ungulates: 76 white-tailed deer (Odocoileus virginianus) in Minnesota and South Dakota, 61 mule deer (O. hemionus) in California, and 37 pronghorn (Antilocapra americana) in South Dakota. Estimated age of known-age newborns differed among hoof-growth models and varied by >15 days for white-tailed deer, >20 days for mule deer, and >10 days for pronghorn. Accuracy (i.e., the proportion of neonates assigned to the correct age) in aging newborns using published equations ranged from 0.0% to 39.4% in white-tailed deer, 0.0% to 3.3% in mule deer, and was 0.0% for pronghorns. Results of survival modeling indicated that variability in estimates of age-at-capture affected short-term estimates of survival (i.e., 30 days) for white-tailed deer and mule deer, and survival estimates over a longer time frame (i.e., 120 days) for mule deer. Conversely, survival estimates for pronghorn were not affected by estimates of age. Our analyses indicate that modeling survival in daily intervals is too fine a temporal scale when age-at-capture is unknown given the potential inaccuracies among equations used to estimate age of neonates. Instead, weekly survival intervals are more appropriate because most models accurately predicted ages within 1 week of the known age. Variation among results of neonatal-age models on short- and long-term estimates of survival for known-age young emphasizes the importance of selecting an appropriate hoof-growth equation and appropriately defining intervals (i.e., weekly).

  10. New aerial survey and hierarchical model to estimate manatee abundance

    Science.gov (United States)

    Langtimm, Catherine A.; Dorazio, Robert M.; Stith, Bradley M.; Doyle, Terry J.

    2011-01-01

    Monitoring the response of endangered and protected species to hydrological restoration is a major component of the adaptive management framework of the Comprehensive Everglades Restoration Plan. The endangered Florida manatee (Trichechus manatus latirostris) lives at the marine-freshwater interface in southwest Florida and is likely to be affected by hydrologic restoration. To provide managers with prerestoration information on distribution and abundance for postrestoration comparison, we developed and implemented a new aerial survey design and hierarchical statistical model to estimate and map abundance of manatees as a function of patch-specific habitat characteristics, indicative of manatee requirements for offshore forage (seagrass), inland fresh drinking water, and warm-water winter refuge. We estimated the number of groups of manatees from dual-observer counts and estimated the number of individuals within groups by removal sampling. Our model is unique in that we jointly analyzed group and individual counts using assumptions that allow probabilities of group detection to depend on group size. Ours is the first analysis of manatee aerial surveys to model spatial and temporal abundance of manatees in association with habitat type while accounting for imperfect detection. We conducted the study in the Ten Thousand Islands area of southwestern Florida, USA, which was expected to be affected by the Picayune Strand Restoration Project to restore hydrology altered for a failed real-estate development. We conducted 11 surveys in 2006, spanning the cold, dry season and warm, wet season. To examine short-term and seasonal changes in distribution we flew paired surveys 1–2 days apart within a given month during the year. Manatees were sparsely distributed across the landscape in small groups. Probability of detection of a group increased with group size; the magnitude of the relationship between group size and detection probability varied among surveys. Probability…

  11. Why Does a Kronecker Model Result in Misleading Capacity Estimates?

    CERN Document Server

    Raghavan, Vasanthan; Sayeed, Akbar M

    2008-01-01

    Many recent works that study the performance of multi-input multi-output (MIMO) systems in practice assume a Kronecker model where the variances of the channel entries, upon decomposition on to the transmit and the receive eigen-bases, admit a separable form. Measurement campaigns, however, show that the Kronecker model results in poor estimates for capacity. Motivated by these observations, a channel model that does not impose a separable structure has been recently proposed and shown to fit the capacity of measured channels better. In this work, we show that this recently proposed modeling framework can be viewed as a natural consequence of channel decomposition on to its canonical coordinates, the transmit and/or the receive eigen-bases. Using tools from random matrix theory, we then establish the theoretical basis behind the Kronecker mismatch at the low- and the high-SNR extremes: 1) Sparsity of the dominant statistical degrees of freedom (DoF) in the true channel at the low-SNR extreme, and 2) Non-regul...
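
    A sketch of the Kronecker construction itself, drawing channels H = Rr^(1/2) Hw Rt^(1/2) with assumed exponential correlation profiles and estimating ergodic capacity by Monte Carlo; comparing such capacities against those of measured or non-separable channels is where the mismatch shows up.

        import numpy as np

        rng = np.random.default_rng(3)
        n, rho_t, rho_r, snr = 4, 0.7, 0.5, 10.0
        idx = np.arange(n)
        Rt = rho_t ** np.abs(np.subtract.outer(idx, idx))   # transmit correlation
        Rr = rho_r ** np.abs(np.subtract.outer(idx, idx))   # receive correlation

        def sqrtm_psd(R):                    # symmetric square root via eigh
            w, V = np.linalg.eigh(R)
            return (V * np.sqrt(np.maximum(w, 0))) @ V.T

        Rr_h, Rt_h = sqrtm_psd(Rr), sqrtm_psd(Rt)
        cap, trials = 0.0, 2000
        for _ in range(trials):
            Hw = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
            H = Rr_h @ Hw @ Rt_h
            cap += np.log2(np.linalg.det(np.eye(n) + (snr / n) * H @ H.conj().T).real)
        print(cap / trials, "bits/s/Hz (ergodic capacity estimate)")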

  12. Estimating joint kinematics from skin motion observation: modelling and validation.

    Science.gov (United States)

    Wolf, Alon; Senesh, Merav

    2011-11-01

    Modelling of soft tissue motion is required in many areas, such as computer animation, surgical simulation, 3D motion analysis and gait analysis. In this paper, we focus on the modelling of skin deformation during 3D motion analysis. The most frequently used method in 3D human motion analysis involves placing markers on the skin of the analysed segment, which is composed of the rigid bone and the surrounding soft tissues. Skin and soft tissue deformations introduce a significant artefact which strongly influences the resulting bone position, orientation and joint kinematics. For this study, we used a statistical solid dynamics approach which combines several previously reported tools: the point cluster technique (PCT) and a Kalman filter added to the PCT. The methods were tested and evaluated on controlled human-arm motions, using an optical motion capture system (Vicon™). The addition of a Kalman filter to the PCT for rigid body motion estimation results in a smoother signal that better represents the joint motion. Calculations indicate less signal distortion than when using a digital low-pass filter. Furthermore, adding a Kalman filter to the PCT substantially reduces the dispersion of the maximal and minimal instantaneous frequencies. For controlled human movements, the results indicated that adding a Kalman filter to the PCT produced a more accurate signal. However, it could not be concluded that the proposed Kalman filter is better than a low-pass filter for estimation of the motion. We suggest that implementing a Kalman filter with a better biomechanical motion model would be more likely to improve the results.
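
    As a sketch of the added filtering stage, a constant-velocity Kalman filter smoothing one marker coordinate is shown below; the dynamics model and noise variances are illustrative choices, not the paper's tuning.

        import numpy as np

        def kalman_cv(z, dt=0.01, q=1e-2, r=1e-4):
            F = np.array([[1.0, dt], [0.0, 1.0]])     # position-velocity dynamics
            H = np.array([[1.0, 0.0]])                # only position is observed
            Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
            x, P = np.array([z[0], 0.0]), np.eye(2)
            out = []
            for zk in z:
                x, P = F @ x, F @ P @ F.T + Q         # predict
                S = H @ P @ H.T + r
                K = (P @ H.T) / S                     # Kalman gain
                x = x + (K * (zk - H @ x)).ravel()    # update state
                P = (np.eye(2) - K @ H) @ P           # update covariance
                out.append(x[0])
            return np.asarray(out)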

  13. Hierarchical set of models to estimate soil thermal diffusivity

    Science.gov (United States)

    Arkhangelskaya, Tatiana; Lukyashchenko, Ksenia

    2016-04-01

    Soil thermal properties significantly affect land-atmosphere heat exchange rates. Intra-soil heat fluxes depend both on temperature gradients and on soil thermal conductivity. Soil temperature changes due to energy fluxes are determined by the soil specific heat. Thermal diffusivity is equal to thermal conductivity divided by volumetric specific heat and reflects both the soil's ability to transfer heat and its ability to change temperature when heat is supplied or withdrawn. The higher the soil thermal diffusivity, the thicker the soil/ground layer in which diurnal and seasonal temperature fluctuations are registered, and the smaller the temperature fluctuations at the soil surface. Thermal diffusivity vs. moisture dependencies for loams, sands and clays of the East European Plain were obtained using the unsteady-state method. Thermal diffusivity differed greatly among soils, and for a given soil it could vary by 2, 3 or even 5 times depending on soil moisture. The shapes of the thermal diffusivity vs. moisture dependencies also differed: peak curves were typical for sandy soils and sigmoid curves for loamy and especially for compacted soils. The lowest thermal diffusivities and the smallest range of their variability with soil moisture were obtained for clays with high humus content. A hierarchical set of models is presented, allowing an estimate of soil thermal diffusivity from available data on soil texture, moisture, bulk density and organic carbon. In developing these models, the first step was to parameterize the experimental thermal diffusivity vs. moisture dependencies with a 4-parameter function; the next step was to obtain regression formulas to estimate the function parameters from available data on basic soil properties; the last step was to evaluate the accuracy of the suggested models using independent data on soil thermal diffusivity. The simplest models were based on soil bulk density and organic carbon data and provided different…

  14. Propagating Uncertainties from Source Model Estimations to Coulomb Stress Changes

    Science.gov (United States)

    Baumann, C.; Jonsson, S.; Woessner, J.

    2009-12-01

    Multiple studies have shown that static stress changes due to permanent fault displacement trigger earthquakes on the causative and on nearby faults. Calculations of static stress changes in previous studies have been based on fault parameters without considering any source-model uncertainties, or with crude assumptions about fault-model errors based on the available source models. In this study, we investigate the influence of fault-model parameter uncertainties on Coulomb Failure Stress change (ΔCFS) calculations by propagating the uncertainties from the fault estimation process to the Coulomb failure stress changes. We use 2500 sets of correlated model parameters determined for the June 2000 Mw = 5.8 Kleifarvatn earthquake, southwest Iceland, which were estimated by a repeated optimization procedure and multiple data sets that had been modified by synthetic noise. The model parameters show that the event was predominantly a right-lateral strike-slip earthquake on a north-south striking fault. The variability of the sets of models represents the posterior probability density distribution for the Kleifarvatn source model. First, we investigate the influence of individual source-model parameters on the ΔCFS calculations. We show through a correlation analysis that, for this event, changes in dip, east location, strike, width and in part north location have a stronger impact on the Coulomb failure stress changes than changes in fault length, depth, dip-slip and strike-slip. Second, we find that the accuracy of the Coulomb failure stress changes appears to increase with increasing distance from the fault. The absolute value of the standard deviation decays rapidly with distance within about 5-6 km around the fault, from about 3-3.5 MPa down to a few Pa, implying that the influence of parameter changes decreases with increasing distance. This is underlined by the coefficient of variation CV, defined as the ratio of the standard deviation of the Coulomb stress…
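
    The propagation scheme is generic Monte Carlo: draw correlated parameter sets and push each through the stress calculation. In the sketch below, coulomb_stress is a hypothetical stand-in for a real ΔCFS computation (e.g., an Okada dislocation code), and the means and covariances are made up.

        import numpy as np

        rng = np.random.default_rng(4)
        mean = np.array([2.0, 90.0])                   # e.g., slip (m), dip (deg)
        cov = np.array([[0.04, 0.1], [0.1, 25.0]])     # correlated uncertainties
        samples = rng.multivariate_normal(mean, cov, size=2500)

        def coulomb_stress(slip, dip, r_km=5.0):       # placeholder for a real ΔCFS code
            return 3.0e6 * slip * np.cos(np.radians(dip - 90.0)) / r_km**3

        dcfs = coulomb_stress(samples[:, 0], samples[:, 1])
        print(dcfs.std(), dcfs.std() / abs(dcfs.mean()))  # spread and CV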

  15. USER STORY SOFTWARE ESTIMATION: A SIMPLIFICATION OF SOFTWARE ESTIMATION MODEL WITH DISTRIBUTED EXTREME PROGRAMMING ESTIMATION TECHNIQUE

    National Research Council Canada - National Science Library

    Ridi Ferdiana; Paulus Insap Santoso; Lukito Edi Nugroho; Ahmad Ashari

    2011-01-01

    Software estimation is an area of software engineering concerned with the identification, classification and measurement of features of software that affect the cost of developing and sustaining computer programs [19...

  16. Bayesian model comparison and model averaging for small-area estimation

    OpenAIRE

    Aitkin, Murray; Liu, Charles C.; Chadwick, Tom

    2009-01-01

    This paper considers small-area estimation with lung cancer mortality data, and discusses the choice of upper-level model for the variation over areas. Inference about the random effects for the areas may depend strongly on the choice of this model, but this choice is not a straightforward matter. We give a general methodology for both evaluating the data evidence for different models and averaging over plausible models to give robust area effect distributions. We reanalyze the data of Tsutak...

  17. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    Science.gov (United States)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

    Labor force surveys based on a rotating panel design have been conducted over time in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik, BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey designed to estimate parameters only at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. This study estimates the quarterly unemployment rate at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focuses on the application of, and comparison between, the Rao-Yu model and the dynamic model in estimating the unemployment rate based on a rotating panel survey. The goodness of fit of the two models was almost identical. Both models produced similar estimates that were better than the direct estimates, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this advantage diminished over time.

  18. Post-stratification sampling in small area estimation (SAE) model for unemployment rate estimation by Bayes approach

    Science.gov (United States)

    Hanike, Yusrianti; Sadik, Kusman; Kurnia, Anang

    2016-02-01

    This research estimates the unemployment rate in Indonesia based on a Poisson distribution, using post-stratification combined with a small area estimation (SAE) model. Post-stratification is a sampling technique in which strata are formed after the survey data are collected; it is used when the survey was not designed to produce estimates for the domains of interest. The domains of interest here are seven education categories of the unemployed. The data were obtained from the National Labor Force Survey (Sakernas) conducted by Statistics Indonesia (BPS); this national survey provides samples that are too small at the district level, and an SAE model is one alternative for addressing this. Accordingly, we combined post-stratification sampling with an SAE model. This research considers two main post-stratification models: in model I the education category enters as a dummy variable, and in model II the education category enters as an area random effect. Both models initially violated the Poisson assumption: using a Poisson-Gamma model, the overdispersion in model I (chi-square/df of 1.23) was reduced to 0.91, and the underdispersion in model II (0.35) was corrected to 0.94. Empirical Bayes was applied to estimate the proportion of unemployment in each education category. Based on the Bayesian Information Criterion (BIC), model I has a smaller mean square error (MSE) than model II.

  19. Aqueous and Tissue Residue-Based Interspecies Correlation Estimation Models Provide Conservative Hazard Estimates for Aromatic Compounds

    Science.gov (United States)

    Interspecies correlation estimation (ICE) models were developed for 30 nonpolar aromatic compounds to allow comparison of prediction accuracy between 2 data compilation approaches. Type 1 models used data combined across studies, and type 2 models used data combined only within s...

  20. Estimating Structural Models of Corporate Bond Prices in Indonesian Corporations

    Directory of Open Access Journals (Sweden)

    Lenny Suardi

    2014-08-01

    This paper applies maximum likelihood (ML) approaches to implementing the structural model of corporate bonds, as suggested by Li and Wong (2008), in Indonesian corporations. Two structural models, the extended Merton and the Longstaff & Schwartz (LS) models, are used in determining these prices, yields, yield spreads and probabilities of default. ML estimation is used to determine the volatility of firm value. Since firm value is an unobserved variable, Duan (1994) suggested that the first step of ML estimation is to derive the likelihood function for equity as the option on the firm value. The second step is to find the parameters, such as the drift and volatility of firm value, that maximize this function. The firm value itself is extracted by equating the pricing formula to the observed equity prices. Equity, total liabilities, bond prices data and the firm's parameters (firm value, volatility of firm value, and default barrier) are substituted into the extended Merton and LS bond pricing formulas in order to value the corporate bonds. These models are applied to a sample of 24 bond prices in Indonesian corporations during the period 2001-2005, based on the criteria of Eom, Helwege and Huang (2004). The equity and bond prices data were obtained from the Indonesia Stock Exchange for firms that issued equity and provided regular financial statements within this period. The results show that both models, on average, underestimate the bond prices and overestimate the yields and yield spreads.

  1. Data Sources for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Science.gov (United States)

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  2. Data Sources for the Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  3. Estimating model parameters in nonautonomous chaotic systems using synchronization

    Science.gov (United States)

    Yang, Xiaoli; Xu, Wei; Sun, Zhongkui

    2007-05-01

    In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems, and then some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are analytically derived to ensure precise evaluation of the unknown parameters and identical synchronization between the concerned experimental system and its corresponding receiver. Examples are presented employing a parametrically excited new 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of the noise strength in simulation.
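
    A minimal sketch of the scheme on a forced Ueda-type oscillator: a receiver copy with linear feedback coupling plus an adaptive update for the unknown forcing amplitude. The gains, step size, and the specific adaptive law are illustrative choices rather than the paper's construction; convergence relies on the forcing being persistently exciting.

        # Drive: Ueda-type oscillator x'' + d*x' + x**3 = A*cos(t), A unknown.
        # Receiver: coupled copy with estimate A_hat and adaptive law
        # A_hat' = g * e2 * cos(t). Euler integration.
        import numpy as np

        d, A_true = 0.05, 7.5
        k1, k2, g = 2.0, 2.0, 5.0
        dt, steps = 1e-3, 200_000
        x1, x2 = 1.0, 0.0                  # the "experimental" system
        y1, y2, A_hat = 0.0, 0.0, 0.0      # receiver and parameter estimate

        for i in range(steps):
            f = np.cos(i * dt)
            e1, e2 = x1 - y1, x2 - y2
            dx1, dx2 = x2, -d * x2 - x1**3 + A_true * f
            dy1 = y2 + k1 * e1
            dy2 = -d * y2 - y1**3 + A_hat * f + k2 * e2
            x1, x2 = x1 + dt * dx1, x2 + dt * dx2
            y1, y2 = y1 + dt * dy1, y2 + dt * dy2
            A_hat += dt * g * e2 * f       # adaptive parameter update

        print(A_hat)                       # should approach A_true = 7.5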

  5. Parameter estimation and hypothesis testing in linear models

    CERN Document Server

    Koch, Karl-Rudolf

    1999-01-01

    The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have also been added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility for errors remains with the author. I also want to express my thanks…

  6. Accelerated gravitational wave parameter estimation with reduced order modeling.

    Science.gov (United States)

    Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2015-02-20

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.

  7. Comparison of ET estimations by the three-temperature model, SEBAL model and eddy covariance observations

    Science.gov (United States)

    Zhou, Xinyao; Bi, Shaojie; Yang, Yonghui; Tian, Fei; Ren, Dandan

    2014-11-01

    The three-temperature (3T) model is a simple model which estimates plant transpiration from only temperature data. In-situ field experiments have shown that 3T is a reliable evapotranspiration (ET) estimation model. Despite encouraging results from recent efforts extending the 3T model to remote sensing applications, the literature shows limited comparisons of the 3T model with other remote-sensing-driven ET models. This research used ET obtained from eddy covariance to evaluate the 3T model and in turn compared the model-simulated ET with that of the more traditional SEBAL (Surface Energy Balance Algorithm for Land) model. A field experiment was conducted in the cotton fields of the Taklamakan desert oasis in Xinjiang, Northwest China. Radiation and surface temperature were obtained from hyperspectral and thermal infrared images for clear days in 2013. The images covered the period 0900-1800 h at four different phenological stages of cotton. Meteorological data were automatically recorded at a station located at the center of the cotton field. Results showed that the 3T model accurately captured daily and seasonal variations in ET. As low dry-soil surface temperatures induced significant errors in the 3T model, it was unsuitable for estimating ET in the early morning and late afternoon. The model-simulated ET was relatively more accurate for the squaring, bolling and boll-opening stages than for the seedling stage of cotton, when ET was generally low. Wind speed was apparently not a limiting factor of ET in the 3T model. This was attributed to the fact that surface temperature, a vital input of the model, indirectly accounted for the effect of wind speed on ET. Although the 3T model slightly overestimated ET compared with SEBAL and eddy covariance, it was generally reliable for estimating daytime ET during 0900-1600 h.

  8. Simple parameter estimation for complex models — Testing evolutionary techniques on 3-dimensional biogeochemical ocean models

    Science.gov (United States)

    Mattern, Jann Paul; Edwards, Christopher A.

    2017-01-01

    Parameter estimation is an important part of numerical modeling and is often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive, and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy-to-implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than a random, uninformed parameter search, but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
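
    One of the simplest techniques in this family is a (mu, lambda) evolution strategy; a sketch with a stand-in cost function follows, where in practice each evaluation would be a full 3-D model run scored against multiple data types.

        import numpy as np

        rng = np.random.default_rng(5)
        true_p = np.array([0.6, 1.5, 0.2])
        cost = lambda p: np.sum((p - true_p) ** 2)     # stand-in misfit

        mu, lam, sigma = 5, 20, 0.3
        parents = rng.uniform(0, 2, size=(mu, 3))
        for gen in range(50):
            children = np.repeat(parents, lam // mu, axis=0)
            children += rng.normal(0, sigma, children.shape)  # mutate
            scores = np.array([cost(c) for c in children])
            parents = children[np.argsort(scores)[:mu]]       # select best mu
            sigma *= 0.95                                     # anneal step size
        print(parents[0])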

  9. Estimating Agricultural Losses using Flood Modeling for Rural Area

    Directory of Open Access Journals (Sweden)

    Muhadi Nur Atirah

    2017-01-01

    Flooding is the most significant natural hazard in Malaysia in terms of population affected, frequency, flood extent, flood duration and socio-economic damage. Flooding causes loss of lives, injuries and property damage, and leaves economic damage to the country, especially when it occurs in a rural area where the main income depends on agriculture. This study focused on flooding in oil palm plantations, rubber plantations, and fruit and vegetable areas. InfoWorks ICM was used to develop a flood model to study the impact of flooding and to mitigate floods using a retention pond. A Geographical Information System (GIS) together with the flood model was then used for the analysis of flood damage assessment and the management of flood risk. The estimated total damage for three different flood events (10, 50 and 100 ARI) amounted to millions of ringgits. To reduce the flood impact along the Selangor River, a retention pond was suggested, modeled and tested. By constructing the retention pond, flood extents in the agricultural area were reduced significantly: by 60.49% for 10 ARI, 45.39% for 50 ARI and 46.54% for 100 ARI.

  10. Degeneracy estimation in interference models on wireless networks

    Science.gov (United States)

    McBride, Neal; Bulava, John; Galiotto, Carlo; Marchetti, Nicola; Macaluso, Irene; Doyle, Linda

    2017-03-01

    We present a Monte Carlo study of interference in real-world wireless networks using the Potts model. Our approach maps the Potts energy to discrete interference levels. These levels depend on the configurations of radio frequency allocation in the network. For the first time, we estimate the degeneracy of these interference levels using the Wang-Landau algorithm. The cumulative distribution function of the resulting density of states is found to increase rapidly at a critical interference value. We compare these critical values for several different real-world interference networks and Potts models. Our results show that models with a greater number of available frequency channels and less dense interference networks result in the majority of configurations having lower interference levels. Consequently, their critical interference levels occur at lower values. Furthermore, the area under the density of states increases and shifts to lower interference values. Therefore, the probability of randomly sampling low interference configurations is higher under these conditions. This result can be used to consider dynamic and distributed spectrum allocation in future wireless networks.
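
    A compact Wang-Landau sketch in this spirit: the "energy" of a channel assignment counts interfering edges on a toy ring graph, and the log-degeneracy ln g(E) is built up by penalizing visited levels. The flatness check is simplified to a fixed number of sweeps per modification factor, and the graph is a toy, not a measured network.

        import numpy as np

        rng = np.random.default_rng(6)
        n, q = 12, 3
        edges = [(i, (i + 1) % n) for i in range(n)]   # toy interference graph

        def energy(state):                             # number of interfering edges
            return sum(int(state[i] == state[j]) for i, j in edges)

        log_g = np.zeros(len(edges) + 1)               # ln g(E) for E = 0..|edges|
        f = 1.0                                        # modification factor
        state = rng.integers(0, q, n)
        E = energy(state)
        while f > 1e-4:
            for _ in range(20_000):
                i, new = rng.integers(n), rng.integers(q)
                old = state[i]
                state[i] = new
                E_new = energy(state)
                if np.log(rng.random()) < log_g[E] - log_g[E_new]:
                    E = E_new                          # accept the move
                else:
                    state[i] = old                     # reject, restore
                log_g[E] += f
            f /= 2                                     # tighten the estimate
        print(log_g - log_g.min())                     # relative ln degeneracies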

  11. Porosity estimation of aged mortar using a micromechanical model.

    Science.gov (United States)

    Hernández, M G; Anaya, J J; Sanchez, T; Segura, I

    2006-12-22

    Degradation of concrete structures located in high-humidity atmospheres or under flowing water is a very important problem. In this study, a method for ultrasonic non-destructive characterization of aged mortar is presented. The proposed method predicts the behaviour of aged mortar with a three-phase micromechanical model fed by ultrasonic measurements. Aging of the mortar was accelerated by immersing the samples in an ammonium nitrate solution. Both destructive and non-destructive characterization of the mortar was performed. Destructive tests of porosity were performed using a vacuum saturation method, and non-destructive characterization was carried out using ultrasonic velocities. The aging experiments show that mortar degradation involves not only a porosity increase but also microstructural changes in the cement matrix. Experimental results show that the porosity estimated using the proposed non-destructive methodology performed comparably to classical destructive techniques.

  12. FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance

    Energy Technology Data Exchange (ETDEWEB)

    Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.

    2015-05-04

    The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over batches of real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).

  13. An improved model for estimating pesticide emissions for agricultural LCA

    DEFF Research Database (Denmark)

    Dijkman, Teunis Johannes; Birkved, Morten; Hauschild, Michael Zwicky

    2011-01-01

    Credible quantification of chemical emissions in the inventory phase of Life Cycle Assessment (LCA) is crucial, since chemicals are the dominating cause of the human- and ecotoxicity-related environmental impacts in Life Cycle Impact Assessment (LCIA). When applying LCA to the assessment of agricultural products, off-target pesticide emissions need to be quantified as accurately as possible because of the considerable toxicity effects associated with chemicals designed to have a high impact on biological organisms such as insects or weed plants. PestLCI was developed to estimate the fractions of the applied pesticide that are emitted from a field to the surrounding environmental compartments: air, surface water, and ground water. However, the applicability of the model has been limited to one typical Danish soil type and one climatic profile obtained from the national Danish meteorological station…

  14. Parameter estimation in a spatial unit root autoregressive model

    CERN Document Server

    Baran, Sándor

    2011-01-01

    The spatial autoregressive model $X_{k,\ell}=\alpha X_{k-1,\ell}+\beta X_{k,\ell-1}+\gamma X_{k-1,\ell-1}+\epsilon_{k,\ell}$ is investigated in the unit root case, that is, when the parameters are on the boundary of the domain of stability, which forms a tetrahedron with vertices $(1,1,-1)$, $(1,-1,1)$, $(-1,1,1)$ and $(-1,-1,-1)$. It is shown that the limiting distribution of the least squares estimator of the parameters is normal and the rate of convergence is $n$ when the parameters are in the faces or on the edges of the tetrahedron, while on the vertices the rate is $n^{3/2}$.
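
    The least squares estimator here is a plain multivariate regression of each interior value on its three neighbours; a simulation sketch on the face $\alpha+\beta-\gamma=1$ (taking $\alpha=\beta=0.5$, $\gamma=0$, with zero initial edges):

        import numpy as np

        rng = np.random.default_rng(7)
        a, b, c = 0.5, 0.5, 0.0          # on the face alpha + beta - gamma = 1
        n = 200
        X = np.zeros((n + 1, n + 1))
        for k in range(1, n + 1):
            for l in range(1, n + 1):
                X[k, l] = (a * X[k-1, l] + b * X[k, l-1]
                           + c * X[k-1, l-1] + rng.standard_normal())

        Y = X[1:, 1:].ravel()
        D = np.column_stack([X[:-1, 1:].ravel(), X[1:, :-1].ravel(), X[:-1, :-1].ravel()])
        est, *_ = np.linalg.lstsq(D, Y, rcond=None)
        print(est)                       # close to (0.5, 0.5, 0.0)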

  15. A financial planning model for estimating hospital debt capacity.

    Science.gov (United States)

    Hopkins, D S; Heath, D; Levin, P J

    1982-01-01

    A computer-based financial planning model was formulated to measure the impact of a major capital improvement project on the fiscal health of Stanford University Hospital. The model had to be responsive to many variables and easy to use, so as to allow for the testing of numerous alternatives. Special efforts were made to identify the key variables that needed to be presented in the model and to include all known links between capital investment, debt, and hospital operating expenses. Growth in the number of patient days of care was singled out as a major source of uncertainty that would have profound effects on the hospital's finances. Therefore this variable was subjected to special scrutiny in terms of efforts to gauge expected demographic trends and market forces. In addition, alternative base runs of the model were made under three distinct patient-demand assumptions. Use of the model enabled planners at the Stanford University Hospital (a) to determine that a proposed modernization plan was financially feasible under a reasonable (that is, not unduly optimistic) set of assumptions and (b) to examine the major sources of risk. Other than patient demand, these sources were found to be gross revenues per patient, operating costs, and future limitations on government reimbursement programs. When the likely financial consequences of these risks were estimated, both separately and in combination, it was determined that even if two or more assumptions took a somewhat more negative turn than was expected, the hospital would be able to offset adverse consequences by a relatively minor reduction in operating costs. PMID:7111658

  17. State-space models' dirty little secrets: even simple linear Gaussian models can have estimation problems.

    Science.gov (United States)

    Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M; Derocher, Andrew E; Lewis, Mark A; Jonsen, Ian D; Mills Flemming, Joanna

    2016-05-25

    State-space models (SSMs) are increasingly used in ecology to model time series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible: they can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the very condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of an SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results, and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.
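
    A minimal sketch of the phenomenon for the local-level model, estimating the two variances by maximising the Kalman-filter likelihood (illustrative only; not the authors' simulation design):

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(log_sd, y):
    """Kalman-filter negative log-likelihood of a local-level SSM:
    x_t = x_{t-1} + eta_t,  y_t = x_t + eps_t."""
    q, r = np.exp(2*np.asarray(log_sd))  # process and measurement variances
    a, p, ll = y[0], 1e6, 0.0            # diffuse-ish initialisation
    for yt in y[1:]:
        p = p + q                        # predict
        f = p + r                        # innovation variance
        v = yt - a                       # innovation
        ll += -0.5*(np.log(2*np.pi*f) + v*v/f)
        k = p / f                        # Kalman gain
        a, p = a + k*v, (1 - k)*p        # update
    return -ll

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(0, 0.1, 500))   # small "biological" stochasticity
y = x + rng.normal(0, 1.0, 500)          # large measurement error
fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
print(np.exp(fit.x))  # estimated (sigma_eta, sigma_eps); often unstable in this regime
```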

  18. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...

  19. House thermal model parameter estimation method for Model Predictive Control applications

    NARCIS (Netherlands)

    van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria

    2015-01-01

    In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results
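
    As a minimal illustration of such parameter estimation (a first-order RC network fitted to simulated stand-in data rather than TRNSYS output; all parameter values are assumptions):

```python
import numpy as np

# First-order thermal model: C*dT/dt = (T_out - T_in)/R + Q.
# Both theta1 = 1/(R*C) and theta2 = 1/C enter the discretised equation linearly,
# so a naive derivative-based least squares fit recovers R and C.
rng = np.random.default_rng(2)
dt, n = 600.0, 1000                        # 10-minute samples
R_true, C_true = 5e-3, 1e7                 # K/W and J/K (illustrative values)
T_out = 5 + 3*np.sin(np.arange(n)*2*np.pi/144)
Q = 2000.0*(rng.random(n) < 0.5)           # heater switching on/off
T = np.empty(n); T[0] = 20.0
for t in range(n - 1):
    T[t+1] = T[t] + dt*((T_out[t] - T[t])/R_true + Q[t])/C_true
T_meas = T + rng.normal(0, 0.01, n)        # sensor noise

dTdt = np.diff(T_meas)/dt
A = np.column_stack([T_out[:-1] - T_meas[:-1], Q[:-1]])
th1, th2 = np.linalg.lstsq(A, dTdt, rcond=None)[0]
C_hat = 1.0/th2; R_hat = 1.0/(th1*C_hat)
print(R_hat, C_hat)                        # approximately (5e-3, 1e7)
```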

  1. Constrained model predictive control, state estimation and coordination

    Science.gov (United States)

    Yan, Jun

    In this dissertation, we study the interaction between control performance and the quality of state estimation in a constrained Model Predictive Control (MPC) framework for systems with stochastic disturbances. This consists of three parts: (i) the development of a constrained MPC formulation that adapts to the quality of the state estimation via constraints; (ii) the application of such a control law in a multi-vehicle formation coordinated control problem, in which each vehicle operates subject to a no-collision constraint posed by the others' imperfect predictions computed from finite bit-rate communicated data; (iii) the design of the predictors and the communication resource assignment problem that satisfy the performance requirements of Part (ii). Model Predictive Control (MPC) is of interest because it is one of the few control design methods that preserves standard design variables and yet handles constraints. MPC is normally posed as a full-state feedback control and is implemented in a certainty-equivalence fashion, with best estimates of the states being used in place of the exact state. However, if the state constraints were handled in the same certainty-equivalence fashion, the resulting control law could drive the real state to violate the constraints frequently. Part (i) therefore explores the inclusion of state estimates in the constraints, by applying constrained MPC to a system with stochastic disturbances. The stochastic nature of the problem requires re-posing the constraints in a probabilistic form. In Part (ii), we consider applying constrained MPC as a local control law in a coordinated control problem for a group of distributed autonomous systems, where interactions between the systems are captured via constraints. First, we inspect the application of constrained MPC to a completely deterministic case. Formation stability theorems are derived for the subsystems and conditions on the local constraint set are derived in order to
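
    The probabilistic re-posing of state constraints in Part (i) can be illustrated by the standard Gaussian chance-constraint tightening (a generic sketch, not the dissertation's exact formulation):

```python
import numpy as np
from scipy.stats import norm

def tighten_constraint(c, b, P, eps):
    """Re-pose a state constraint c'x <= b probabilistically when only an
    estimate x_hat with error covariance P is available:
    Pr(c'x <= b) >= 1 - eps  <=>  c'x_hat <= b - z_{1-eps}*sqrt(c'Pc).
    The tightening margin grows as the state estimate gets worse."""
    margin = norm.ppf(1.0 - eps)*np.sqrt(c @ P @ c)
    return b - margin                      # tightened bound to impose on c'x_hat

P = np.diag([0.04, 0.09])                  # estimator error covariance (illustrative)
print(tighten_constraint(np.array([1.0, 0.0]), b=1.0, P=P, eps=0.05))
```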

  2. Parameter estimation for models of ligninolytic and cellulolytic enzyme kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Gangsheng [ORNL]; Post, Wilfred M [ORNL]; Mayes, Melanie [ORNL]; Frerichs, Joshua T [ORNL]; Jagadamma, Sindhu [ORNL]

    2012-01-01

    While soil enzymes have been explicitly included in soil organic carbon (SOC) decomposition models, there is a serious lack of suitable data for model parameterization. This study provides well-documented enzymatic parameters for application in enzyme-driven SOC decomposition models from a compilation and analysis of published measurements. In particular, we developed appropriate kinetic parameters for five typical ligninolytic and cellulolytic enzymes (β-glucosidase, cellobiohydrolase, endo-glucanase, peroxidase, and phenol oxidase). The kinetic parameters included the maximum specific enzyme activity (Vmax) and the half-saturation constant (Km) in the Michaelis-Menten equation. The activation energy (Ea) and the pH optimum and sensitivity (pHopt and pHsen) were also analyzed; pHsen was estimated by fitting an exponential-quadratic function. The Vmax values, often presented in different units under various measurement conditions, were converted into the same units at a reference temperature (20 °C) and pHopt. Major conclusions are: (i) Both Vmax and Km were log-normally distributed, with no significant difference in Vmax between enzymes originating from bacteria or fungi. (ii) No significant difference in Vmax was found between cellulases and ligninases; however, there was a significant difference in Km between them. (iii) Ligninases had higher Ea values and lower pHopt than cellulases; the average ratio of pHsen to pHopt ranged from 0.3 to 0.4 for the five enzymes, which means that an increase or decrease of 1.1–1.7 pH units from pHopt would reduce Vmax by 50%. (iv) Our analysis indicated that the Vmax values from lab measurements with purified enzymes were 1–2 orders of magnitude higher than those for use in SOC decomposition models under field conditions.
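
    Combining these ingredients multiplicatively gives a usable rate law. The sketch below assumes Arrhenius temperature scaling and an exponential-quadratic (Gaussian-like) pH modifier; the functional forms and all parameter values are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def enzyme_rate(S, T, pH, Vmax_ref, Km, Ea, pH_opt, pH_sen, T_ref=293.15):
    """Michaelis-Menten rate with Arrhenius temperature scaling and an
    exponential-quadratic pH modifier. Vmax_ref is referenced to T_ref (20 C)
    and pH_opt, as in the compiled parameter set."""
    R = 8.314                                       # gas constant, J mol-1 K-1
    arrhenius = np.exp(-Ea/R*(1.0/T - 1.0/T_ref))   # temperature correction
    ph_factor = np.exp(-((pH - pH_opt)/pH_sen)**2)  # = 0.5 at |pH - pH_opt| ~ 0.83*pH_sen
    Vmax = Vmax_ref*arrhenius*ph_factor
    return Vmax*S/(Km + S)

# e.g. a cellulase-like enzyme at 25 C and pH 5.5 (hypothetical parameters)
print(enzyme_rate(S=2.0, T=298.15, pH=5.5, Vmax_ref=10.0, Km=1.5,
                  Ea=40e3, pH_opt=5.0, pH_sen=1.6))
```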

  3. Performance Analysis of Software Effort Estimation Models Using Neural Networks

    Directory of Open Access Journals (Sweden)

    P. Latha

    2013-08-01

    Software effort estimation involves estimating the effort required to develop software. Cost and schedule overruns occur in software development because of wrong estimates made during its initial stages, so proper estimation is essential for the successful completion of software development. Many estimation techniques are available, among which neural-network-based techniques play a prominent role. The back-propagation network is the most widely used architecture; the Elman neural network, a recurrent network, can be used on a par with it. For a good predictor system the difference between estimated effort and actual effort should be as low as possible. Data from historical NASA projects are used for training and testing. The experimental results indicate that the back-propagation algorithm is more efficient than the Elman neural network.

  4. Percolation models of turbulent transport and scaling estimates

    Energy Technology Data Exchange (ETDEWEB)

    Bakunin, O.G. [FOM Instituut voor Plasmafysica 'Rijnhuizen', Associatie Euratom-FOM, 3430 BE Nieuwegein (Netherlands) and Kurchatov Institute, Nuclear Fusion Institute, Kurchatov sq. 1, 123182 Moscow (Russian Federation)]. E-mail: oleg_bakunin@yahoo.com

    2005-03-01

    The variety of forms of turbulent transport requires not only special description methods, but also an analysis of general mechanisms. One such mechanism is percolation transport. The percolation approach is based on fractality and scaling ideas; it makes it possible to explain the anomalous transport in a two-dimensional random flow in terms of the percolation threshold. The percolation approach looks very attractive because it gives a simple and, at the same time, universal model of the behavior related to strong correlation effects. In the present paper we concentrate on the scaling arguments that play a very important role in estimating transport effects. We discuss a unified approach to obtaining the renormalization condition on the small parameter, which is responsible for the analytical description of the system near the percolation threshold. Both monoscale and multiscale models are treated. We consider the steady case, time-dependent perturbations, the influence of drift effects, percolation transport in a stochastic magnetic field, and compressibility effects.

  5. MESH FREE ESTIMATION OF THE STRUCTURE MODEL INDEX

    Directory of Open Access Journals (Sweden)

    Joachim Ohser

    2011-05-01

    The structure model index (SMI) is a means of subsuming the topology of a homogeneous random closed set under just one number, similar to the isoperimetric shape factors used for compact sets. Originally, the SMI is defined as a function of the volume fraction, the specific surface area, and the first derivative of the specific surface area, where the derivative is defined and computed using a surface meshing. The generalised Steiner formula, however, yields a derivative of the specific surface area that is – up to a constant – the density of the integral of mean curvature. Consequently, an SMI can be defined without referring to a discretisation, and it can be estimated from 3D image data without meshing the surface, using only the number of occurrences of 2×2×2 pixel configurations. Obviously, it is impossible to completely describe a random closed set by one number. In this paper, Boolean models of balls and of infinite straight cylinders serve as cautionary examples pointing out the limitations of the SMI. Nevertheless, shape factors like the SMI can be valuable tools for comparing similar structures. This is illustrated on real microstructures of ice, foams, and paper.

  6. Estimating Spoken Dialog System Quality with User Models

    CERN Document Server

    Engelbrecht, Klaus-Peter

    2013-01-01

    Spoken dialog systems have the potential to offer highly intuitive user interfaces, as they allow systems to be controlled using natural language. However, the complexity inherent in natural language dialogs means that careful testing of the system must be carried out from the very beginning of the design process. This book examines how user models can be used to support such early evaluations in two ways: by running simulations of dialogs, and by estimating the quality judgments of users. First, a design environment supporting the creation of dialog flows, the simulation of dialogs, and the analysis of the simulated data is proposed. How the quality of user simulations may be quantified with respect to their suitability for both formative and summative evaluation is then discussed. The remainder of the book is dedicated to the problem of predicting quality judgments of users based on interaction data. New modeling approaches are presented, which process the dialogs as sequences, and which allow knowl...

  7. Multivariate Logistic Model to estimate Effective Rainfall for an Event

    Science.gov (United States)

    Singh, S. K.; Patil, Sachin; Bárdossy, A.

    2009-04-01

    Multivariate logistic models are widely used in the biological, medical, and social sciences, but logistic models are seldom applied to hydrological problems. A logistic function behaves linearly in the mid-range and tends to be nonlinear as it approaches the extremes; it is hence more flexible than a linear function and capable of dealing with skew-distributed variables. Logistic models therefore hold good potential for handling asymmetrically distributed hydrological variables of extreme occurrence. In this study, a logistic regression approach is implemented to derive a multivariate logistic function for effective rainfall; in the process, the runoff coefficient is assumed to be a Bernoulli-distributed dependent variable. A backward stepwise logistic regression procedure was performed to derive the logistic transfer function between the runoff coefficient and catchment as well as event variables (e.g. drainage density, soil moisture). The investigation was carried out using a database of 244 rainfall-runoff events from 42 mesoscale catchments located in south-west Germany. The performance of the derived logistic transfer function was compared with that of the SCS method for estimating effective rainfall.
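
    A minimal sketch of such a logistic transfer function; the coefficients and covariates below are hypothetical, whereas the paper selects its covariates by backward stepwise regression:

```python
import numpy as np

def runoff_coefficient(x, beta):
    """Multivariate logistic transfer function: maps a vector of catchment and
    event variables x to a runoff coefficient in (0, 1). beta[0] is the
    intercept; the remaining entries weight the covariates."""
    z = beta[0] + np.dot(beta[1:], x)
    return 1.0/(1.0 + np.exp(-z))

beta = np.array([-2.0, 3.5, 0.8])      # hypothetical fitted coefficients
x = np.array([0.6, 1.2])               # e.g. soil moisture index, drainage density
c = runoff_coefficient(x, beta)
effective_rainfall = c*25.0            # mm, for a hypothetical 25 mm event
print(c, effective_rainfall)
```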

  8. Modeling and parameter estimation for hydraulic system of excavator's arm

    Institute of Scientific and Technical Information of China (English)

    HE Qing-hua; HAO Peng; ZHANG Da-qing

    2008-01-01

    A retrofitted electro-hydraulic proportional system for a hydraulic excavator was introduced first. According to the principle and characteristics of the load-independent flow distribution (LUDV) system, taking the boom hydraulic system as an example and ignoring the leakage of the hydraulic cylinder and the mass of the oil in it, a force equilibrium equation and a continuity equation of the hydraulic cylinder were set up. Based on the flow equation of the electro-hydraulic proportional valve, the pressure across the valve and the pressure difference were measured and analyzed. The results show that the pressure difference does not change with load, and it approximates 2.0 MPa. Then, assuming that the flow across the valve is directly proportional to the spool displacement and is not influenced by load, a simplified model of the electro-hydraulic system was put forward. At the same time, by analyzing the structure and loading of the boom mechanism, and combining the moment equivalent equation of the manipulator with the law of rotation, estimation methods and equations for parameters such as the equivalent mass and the bearing force of the hydraulic cylinder were set up. Finally, the step response of the boom cylinder flow was tested with the electro-hydraulic proportional valve driven by a step current. Based on the experimental curve, the flow gain coefficient of the valve was identified as 2.825×10⁻⁴ m³/(s·A) and the model was verified.

  9. Multi-Variable Model-Based Parameter Estimation Model for Antenna Radiation Pattern Prediction

    Science.gov (United States)

    Deshpande, Manohar D.; Cravey, Robin L.

    2002-01-01

    A new procedure is presented to develop a multi-variable model-based parameter estimation (MBPE) model to predict the far-field intensity of an antenna. By performing the MBPE model development procedure on a single variable at a time, the present method requires the solution of smaller matrices. The utility of the present method is demonstrated by determining the far-field intensity due to a dipole antenna over a frequency range of 100-1000 MHz and an elevation angle range of 0-90 degrees.

  10. Estimation models of variance components for farrowing interval in swine

    Directory of Open Access Journals (Sweden)

    Aderbal Cavalcante Neto

    2009-02-01

    The main objective of this study was to evaluate the importance of including maternal genetic, common litter environmental, and permanent environmental effects in models for estimating variance components for the farrowing interval trait in swine. Data consisting of 1,013 farrowing intervals of Dalland (C-40) sows recorded in two herds were analyzed. Variance components were obtained by the derivative-free restricted maximum likelihood method. Eight models were tested, which contained the fixed effects (contemporary group and covariables) and the direct additive genetic and residual effects, and varied regarding the inclusion of the maternal genetic, common litter environmental, and/or permanent environmental random effects. The likelihood-ratio test indicated that the inclusion of these effects in the model was unnecessary, but the inclusion of the permanent environmental effect caused changes in the estimates of heritability, which varied from 0.00 to 0.03. In conclusion, the heritability values obtained indicate that this trait appears to offer no genetic gain in response to selection. The common litter environmental and maternal genetic effects did not influence this trait. The permanent environmental effect, however, should be considered in genetic models for this trait in swine, because its presence caused changes in the additive genetic variance estimates.

  11. Reliable Estimation of Prediction Uncertainty for Physicochemical Property Models.

    Science.gov (United States)

    Proppe, Jonny; Reiher, Markus

    2017-07-11

    One of the major challenges in computational science is to determine the uncertainty of a virtual measurement, that is, the prediction of an observable based on calculations. As highly accurate first-principles calculations are in general infeasible for most physical systems, one usually resorts to parametric property models of observables, which require calibration by incorporating reference data. The resulting predictions and their uncertainties are sensitive to systematic errors such as inconsistent reference data, parametric model assumptions, or inadequate computational methods. Here, we discuss the calibration of property models in the light of bootstrapping, a sampling method that can be employed for identifying systematic errors and for reliable estimation of the prediction uncertainty. We apply bootstrapping to assess a linear property model linking the ⁵⁷Fe Mössbauer isomer shift to the contact electron density at the iron nucleus for a diverse set of 44 molecular iron compounds. The contact electron density is calculated with 12 density functionals across Jacob's ladder (PWLDA, BP86, BLYP, PW91, PBE, M06-L, TPSS, B3LYP, B3PW91, PBE0, M06, TPSSh). We provide systematic-error diagnostics and reliable, locally resolved uncertainties for isomer-shift predictions. Pure and hybrid density functionals yield average prediction uncertainties of 0.06–0.08 mm s⁻¹ and 0.04–0.05 mm s⁻¹, respectively, the latter being close to the average experimental uncertainty of 0.02 mm s⁻¹. Furthermore, we show that both the model parameters and the prediction uncertainty depend significantly on the composition and number of reference data points. Accordingly, we suggest that rankings of density functionals based on performance measures (e.g., the squared coefficient of correlation, r², or the root-mean-square error, RMSE) should not be inferred from a single data set. This study presents the first statistically rigorous calibration analysis for theoretical Mössbauer spectroscopy.
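
    The bootstrap idea can be sketched for a straight-line calibration model; this is a generic pairs bootstrap with stand-in data, not the authors' exact protocol:

```python
import numpy as np

def bootstrap_prediction(x, y, x_new, n_boot=2000, seed=0):
    """Nonparametric (pairs) bootstrap of a linear calibration y = a + b*x;
    returns the mean and spread of predictions at x_new, a simple proxy for
    the prediction uncertainty."""
    rng = np.random.default_rng(seed)
    n = len(x)
    preds = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)          # resample (x, y) pairs with replacement
        b, a = np.polyfit(x[idx], y[idx], 1) # slope, intercept
        preds[i] = a + b*x_new
    return preds.mean(), preds.std()

rng = np.random.default_rng(42)
x = rng.uniform(0, 1, 44)                    # stand-in "contact densities"
y = 0.5 - 0.8*x + rng.normal(0, 0.05, 44)    # stand-in "isomer shifts"
print(bootstrap_prediction(x, y, x_new=0.5))
```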

  12. A Computationally Efficient State Space Approach to Estimating Multilevel Regression Models and Multilevel Confirmatory Factor Models.

    Science.gov (United States)

    Gu, Fei; Preacher, Kristopher J; Wu, Wei; Yung, Yiu-Fai

    2014-01-01

    Although the state space approach for estimating multilevel regression models has been well established for decades in the time series literature, it does not receive much attention from educational and psychological researchers. In this article, we (a) introduce the state space approach for estimating multilevel regression models and (b) extend the state space approach for estimating multilevel factor models. A brief outline of the state space formulation is provided and then state space forms for univariate and multivariate multilevel regression models, and a multilevel confirmatory factor model, are illustrated. The utility of the state space approach is demonstrated with either a simulated or real example for each multilevel model. It is concluded that the results from the state space approach are essentially identical to those from specialized multilevel regression modeling and structural equation modeling software. More importantly, the state space approach offers researchers a computationally more efficient alternative to fit multilevel regression models with a large number of Level 1 units within each Level 2 unit or a large number of observations on each subject in a longitudinal study.

  13. Model-free Estimation of Recent Genetic Relatedness

    Science.gov (United States)

    Conomos, Matthew P.; Reiner, Alexander P.; Weir, Bruce S.; Thornton, Timothy A.

    2016-01-01

    Genealogical inference from genetic data is essential for a variety of applications in human genetics. In genome-wide and sequencing association studies, for example, accurate inference on both recent genetic relatedness, such as family structure, and more distant genetic relatedness, such as population structure, is necessary for protection against spurious associations. Distinguishing familial relatedness from population structure with genotype data, however, is difficult because both manifest as genetic similarity through the sharing of alleles. Existing approaches for inference on recent genetic relatedness have limitations in the presence of population structure, where they either (1) make strong and simplifying assumptions about population structure, which are often untenable, or (2) require correct specification of and appropriate reference population panels for the ancestries in the sample, which might be unknown or not well defined. Here, we propose PC-Relate, a model-free approach for estimating commonly used measures of recent genetic relatedness, such as kinship coefficients and IBD sharing probabilities, in the presence of unspecified structure. PC-Relate uses principal components calculated from genome-screen data to partition genetic correlations among sampled individuals due to the sharing of recent ancestors and more distant common ancestry into two separate components, without requiring specification of the ancestral populations or reference population panels. In simulation studies with population structure, including admixture, we demonstrate that PC-Relate provides accurate estimates of genetic relatedness and improved relationship classification over widely used approaches. We further demonstrate the utility of PC-Relate in applications to three ancestrally diverse samples that vary in both size and genealogical complexity. PMID:26748516

  14. Estimating malaria burden in Nigeria: a geostatistical modelling approach

    Directory of Open Access Journals (Sweden)

    Nnadozie Onyiri

    2015-11-01

    This study has produced a map of malaria prevalence in Nigeria based on available data from the Mapping Malaria Risk in Africa (MARA) database, including all malaria prevalence surveys in Nigeria that could be geolocated, as well as data collected during fieldwork in Nigeria between March and June 2007. Logistic regression was fitted to malaria prevalence in Stata to identify significant demographic (age) and environmental covariates. The following environmental covariates were included in the spatial model: the normalized difference vegetation index, the enhanced vegetation index, the leaf area index, the land surface temperature for day and night, land use/land cover (LULC), distance to water bodies, and rainfall. The spatial model suggests that the two main environmental covariates correlating with malaria presence were daytime land surface temperature and rainfall. It was also found that malaria prevalence increased with distance to water bodies up to 4 km. The malaria risk map estimated from the spatial model shows that malaria prevalence in Nigeria varies from 20% in certain areas to 70% in others. The highest prevalence rates were found in the Niger Delta states of Rivers and Bayelsa, the areas surrounding the confluence of the rivers Niger and Benue, and also isolated parts of the north-eastern and north-western parts of the country. Isolated patches of low malaria prevalence were found scattered around the country, with northern Nigeria having more such areas than the rest of the country. Nigeria's belt of middle regions generally has a malaria prevalence of 40% and above.

  15. On asymptotics of t-type regression estimation in multiple linear model

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    We consider a robust estimator (the t-type regression estimator) of the multiple linear regression model, obtained by maximizing the marginal likelihood of a scaled t-distributed error. The marginal likelihood can also be applied to the de-correlated response when the within-subject correlation can be consistently estimated from an initial estimate of the model based on the independence working assumption. This paper shows that such a t-type estimator is consistent.
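
    A generic sketch of such a t-type estimator, with fixed degrees of freedom and the independence working assumption (the paper's exact scaling may differ):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as t_dist

def t_type_fit(X, y, df=3.0):
    """Robust regression coefficients obtained by maximising a scaled
    t-error likelihood; the scale enters through the usual Jacobian term."""
    n, p = X.shape

    def negll(theta):
        beta, log_s = theta[:p], theta[p]
        s = np.exp(log_s)
        return -t_dist.logpdf((y - X @ beta)/s, df).sum() + n*log_s

    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS start
    res = minimize(negll, np.r_[beta0, 0.0], method="Nelder-Mead")
    return res.x[:p]

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(3, 200)  # heavy-tailed errors
y[:5] += 15                                            # a few gross outliers
print(t_type_fit(X, y))                                # close to (1.0, 2.0)
```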

  16. Parameter Estimation of Jelinski-Moranda Model Based on Weighted Nonlinear Least Squares and Heteroscedasticity

    OpenAIRE

    Liu, Jingwei; Liu, Yi; Xu, Meizhi

    2015-01-01

    A parameter estimation method for the Jelinski-Moranda (JM) model based on weighted nonlinear least squares (WNLS) is proposed. The formulae for computing the parameter WNLS estimates (WNLSE) are derived, and the empirical weight function and the heteroscedasticity problem are discussed. The effects of parameter estimation based on the maximum likelihood estimation (MLE), least squares estimation (LSE), and weighted nonlinear least squares estimation (WNLSE) methods are al...
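
    A hedged sketch of WNLS estimation for the JM model; the inverse-predicted-variance weights below are one plausible heteroscedasticity correction, not necessarily the paper's empirical weight function:

```python
import numpy as np
from scipy.optimize import minimize

def jm_wnls(times):
    """Weighted nonlinear least squares for the Jelinski-Moranda model:
    E[t_i] = 1/(phi*(N - i + 1)) for inter-failure times t_i.
    Weights 1/E[t_i]^2 equal the inverse variance under the exponential model."""
    t = np.asarray(times, float)
    n = len(t)
    i = np.arange(1, n + 1)

    def wsse(params):
        N, phi = params
        if N <= n or phi <= 0:
            return 1e12                         # penalise infeasible parameters
        mean = 1.0/(phi*(N - i + 1))
        return np.sum((t - mean)**2/mean**2)

    res = minimize(wsse, x0=[n + 10.0, 1.0/t.mean()], method="Nelder-Mead")
    return res.x                                # (N_hat, phi_hat)

rng = np.random.default_rng(4)
N_true, phi_true = 60, 0.02
t_sim = rng.exponential(1.0/(phi_true*(N_true - np.arange(40))))
print(jm_wnls(t_sim))                           # roughly (60, 0.02)
```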

  17. The rigid-flexible robotic manipulator: Nonlinear control and state estimation considering a different mathematical model for estimation

    Science.gov (United States)

    Fenili, André

    2012-11-01

    In this paper the author investigates the angular position and vibration control of a nonlinear rigid-flexible two-link robotic manipulator, considering fast angular maneuvers. The nonlinear control technique named the State-Dependent Riccati Equation (SDRE) is used to achieve these aims. In a more realistic approach, it is considered that some states can be measured and some cannot. The unmeasured states, namely all the angular velocities and the velocity of deformation of the flexible link, are estimated in order to be used in the SDRE control; a state-dependent Riccati equation-based estimator is used for this purpose. To probe the limitations of the proposed estimation and control techniques, the estimator is given not only different initial conditions from the system to be controlled (here named the "real" system) but also a different mathematical model. The model that emulates the real system considers a two-mode expansion, while the estimation model considers only a one-mode expansion. The results for the different approaches are compared and discussed.

  18. Multiview Visibility Estimation for Image-Based Modeling

    Institute of Scientific and Technical Information of China (English)

    Liu-Xin Zhang; Ming-Tao Pei; Yun-De Jia

    2011-01-01

    In this paper, we investigate the problem of determining the regions of a 3D scene that are visible from given viewpoints when obstacles are present in the scene. We assume that the obstacles are composed of opaque objects with closed surfaces. The problem is formulated in an implicit framework where the obstacles are represented by a level set function. The visible and invisible regions for the given viewpoints are determined through an efficient implicit ray tracing technique. As an extension of our approach, we apply the multiview visibility estimation to an image-based modeling technique. The unknown scene geometry and the multiview visibility information are incorporated into a variational energy functional. By minimizing this energy functional, the true scene geometry as well as accurate visibility information for the multiple views can be recovered from a number of scene images. This makes it feasible for our approach to handle the visibility problem of multiple views when the true scene geometry is unknown.

  19. Model-based pattern speed estimates for 38 barred galaxies

    CERN Document Server

    Rautiainen, P; Laurikainen, E

    2008-01-01

    We have modelled 38 barred galaxies using near-IR and optical data from the Ohio State University Bright Spiral Galaxy Survey. We constructed the gravitational potentials of the galaxies from $H$-band photometry, assuming a constant mass-to-light ratio. The halo component we chose corresponds to the so-called universal rotation curve. In each case, we used the response of gaseous and stellar particle discs to the rigidly rotating potential to determine the pattern speed. We find that the pattern speed of the bar depends roughly on the morphological type. The average ratio of corotation resonance radius to bar radius, $\mathcal{R}$, increases from $1.15 \pm 0.25$ in types SB0/a -- SBab to $1.44 \pm 0.29$ in SBb and $1.82 \pm 0.63$ in SBbc -- SBc. Within the error estimates for the pattern speed and bar radius, all galaxies of type SBab or earlier have a fast bar ($\mathcal{R} \le 1.4$), whereas the bars in later type galaxies include both fast and slow rotators. Of 16 later type galaxies with a nominal value of $\mathcal{R}$...

  20. Estimating seabed scattering mechanisms via Bayesian model selection.

    Science.gov (United States)

    Steininger, Gavin; Dosso, Stan E; Holland, Charles W; Dettmer, Jan

    2014-10-01

    A quantitative inversion procedure is developed and applied to determine the dominant scattering mechanism (surface roughness and/or volume scattering) from seabed scattering-strength data. The classification system is based on trans-dimensional Bayesian inversion, with the deviance information criterion used to select the dominant scattering mechanism. Scattering is modeled using first-order perturbation theory as due to one of three mechanisms: interface scattering from a rough seafloor, volume scattering from a heterogeneous sediment layer, or mixed scattering combining both interface and volume scattering. The classification system is applied to six simulated test cases, where it correctly identifies the true dominant scattering mechanism as having greater support from the data in five cases; the remaining case is indecisive. The approach is also applied to measured backscatter-strength data, where volume scattering is determined to be the dominant scattering mechanism. Comparison of the inversion results with core data indicates the method yields both a reasonable volume heterogeneity size distribution and a good estimate of the sub-bottom depths at which scatterers occur.
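
    The model-selection step can be illustrated with the deviance information criterion computed from MCMC output; this is the standard DIC formulation, whereas the paper's trans-dimensional machinery is more involved:

```python
import numpy as np

def dic(loglik_samples, loglik_at_mean):
    """Deviance information criterion from posterior samples.
    loglik_samples: log-likelihood at each posterior draw;
    loglik_at_mean: log-likelihood at the posterior-mean parameters.
    DIC = Dbar + pD, where D = -2*loglik and pD = Dbar - D(theta_bar);
    the candidate model with the lowest DIC is preferred."""
    d_bar = -2.0*np.mean(loglik_samples)
    p_d = d_bar - (-2.0*loglik_at_mean)
    return d_bar + p_d

# toy usage with stand-in numbers for, e.g., a "volume scattering" model
print(dic(loglik_samples=np.array([-120.3, -119.8, -121.0]), loglik_at_mean=-119.5))
```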

  1. Reference Models for Structural Technology Assessment and Weight Estimation

    Science.gov (United States)

    Cerro, Jeff; Martinovic, Zoran; Eldred, Lloyd

    2005-01-01

    Previously, the Exploration Concepts Branch of NASA Langley Research Center developed techniques for automating the preliminary design level of launch vehicle airframe structural analysis, for the purpose of enhancing historical regression-based mass estimating relationships. This past work was useful and greatly reduced design time; however, its application area was very narrow in terms of the variety of structural and vehicle general-arrangement alternatives it could handle. The analysis approach presented herein also incorporates some newly developed computer programs. Loft is a program developed to create analysis meshes and simultaneously define structural element design regions; a simple ASCII file defining the components is read by Loft to begin the design process. HSLoad is a Visual Basic implementation of the HyperSizer Application Programming Interface, which automates the structural element design process. Details of these two programs and their use are explained in this paper. A feature that falls naturally out of the above analysis paradigm is the concept of "reference models". The flexibility of the FEA-based Java processing procedures and associated process-control classes, coupled with the general utility of Loft and HSLoad, makes it possible to create generic program template files for the analysis of components ranging from something as simple as a stiffened flat panel, to curved panels, fuselage and cryogenic tank components, flight control surfaces, and wings, through full air and space vehicle general arrangements.

  2. Bayes Estimation for Inverse Rayleigh Model under Different Loss Functions

    Directory of Open Access Journals (Sweden)

    Guobing Fan

    2015-04-01

    The inverse Rayleigh distribution plays an important role in life testing and reliability. The aim of this article is to study Bayes estimation of the parameter of the inverse Rayleigh distribution. Bayes estimators are obtained under the squared error, LINEX, and entropy loss functions on the basis of a quasi-prior distribution. The risks of the estimators under the three loss functions are also compared. Finally, a numerical example is used to illustrate the results.
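
    Under one common parameterisation of the inverse Rayleigh distribution, f(x; θ) = (2θ/x³)·exp(−θ/x²), and a quasi-prior g(θ) ∝ θ^(−d), the posterior is a gamma distribution and all three Bayes estimators have closed forms. The sketch below rests on these assumptions and may differ from the paper's exact setup:

```python
import numpy as np

def inv_rayleigh_bayes(x, d=1.0, a=0.5):
    """Bayes estimators of theta. With likelihood proportional to
    theta^n * exp(-theta*T), T = sum(1/x_i^2), and quasi-prior theta^(-d),
    the posterior is Gamma(shape k = n - d + 1, rate T)."""
    x = np.asarray(x, float)
    n, T = len(x), np.sum(1.0/x**2)
    k = n - d + 1
    sel = k/T                            # squared error loss: posterior mean
    linex = (k/a)*np.log(1.0 + a/T)      # LINEX loss: -(1/a)*ln E[exp(-a*theta)]
    entropy = (k - 1)/T                  # entropy loss: 1/E[1/theta]
    return sel, linex, entropy

rng = np.random.default_rng(5)
theta = 2.0
x = np.sqrt(theta/(-np.log(rng.random(50))))   # inversion sampling, F(x) = exp(-theta/x^2)
print(inv_rayleigh_bayes(x))                   # all three close to 2.0
```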

  3. Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model

    Science.gov (United States)

    Lamsal, Sunil

    2015-01-01

    Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…

  4. THE SUPERIORITY OF EMPIRICAL BAYES ESTIMATION OF PARAMETERS IN PARTITIONED NORMAL LINEAR MODEL

    Institute of Scientific and Technical Information of China (English)

    Zhang Weiping; Wei Laisheng

    2008-01-01

    In this article, empirical Bayes (EB) estimators are constructed for the estimable functions of the parameters in the partitioned normal linear model. The superiority of the EB estimators over the ordinary least squares (LS) estimator is investigated under the mean square error matrix (MSEM) criterion.

  5. Efficient Quantile Estimation for Functional-Coefficient Partially Linear Regression Models

    Institute of Scientific and Technical Information of China (English)

    Zhangong ZHOU; Rong JIANG; Weimin QIAN

    2011-01-01

    Quantile estimation methods are proposed for the functional-coefficient partially linear regression (FCPLR) model, which combines the nonparametric and functional-coefficient regression (FCR) models. The local linear scheme and the integrated method are used to obtain local quantile estimators of all unknown functions in the FCPLR model. These estimators are asymptotically normal, but each has a large variance. To reduce the variances of these quantile estimators, the one-step backfitting technique is used to obtain efficient quantile estimators of all unknown functions, and their asymptotic normality is derived. Two simulated examples are carried out to illustrate the proposed estimation methodology.

  6. A Stochastic Restricted Principal Components Regression Estimator in the Linear Model

    Directory of Open Access Journals (Sweden)

    Daojiang He

    2014-01-01

    We propose a new estimator to combat multicollinearity in the linear model when there are stochastic linear restrictions on the regression coefficients. The new estimator is constructed by combining the ordinary mixed estimator (OME) and the principal components regression (PCR) estimator, and is called the stochastic restricted principal components (SRPC) regression estimator. Necessary and sufficient conditions for the superiority of the SRPC estimator over the OME and the PCR estimator are derived in the sense of the mean squared error matrix criterion. Finally, we give a numerical example and a Monte Carlo study to illustrate the performance of the proposed estimator.

  7. A New Estimation Model of IC Interconnect Lifetime Based on Uniform Defect Distribution Model

    Institute of Scientific and Technical Information of China (English)

    ZHAO Tianxu; DUAN Xuchao; HAO Yue; MA Peijun

    2004-01-01

    Defects, which arise throughout the IC manufacturing process, are one of the important factors affecting IC interconnect lifetime. In this paper, a new failure model for IC interconnects is proposed based on an analysis of the available IC interconnect lifetime estimation models. Many factors, such as defect size, wire width, and wire length, are considered in this new model. The simulation results show that defects have a great influence on IC interconnect lifetime, and the larger the defect size, the greater the influence. The new model can be used in IC design to estimate electromigration loss related to missing-material defects and to some other factors.

  8. Conditional Likelihood Estimators for Hidden Markov Models and Stochastic Volatility Models

    OpenAIRE

    Genon-Catalot, Valentine; Jeantheau, Thierry; Laredo, Catherine

    2003-01-01

    This paper develops a new contrast process for parametric inference in general hidden Markov models, when the hidden chain has a non-compact state space. The contrast is based on the conditional likelihood approach, often used for ARCH-type models. We prove the strong consistency of the conditional likelihood estimators under appropriate conditions. The method is applied to the Kalman filter (for which this contrast and the exact likelihood lead to asymptotically equivalent estimat...

  9. Monte Carlo estimation of the conditional Rasch model

    NARCIS (Netherlands)

    Akkermans, Wies M.W.

    1994-01-01

    In order to obtain conditional maximum likelihood estimates, the so-called conditioning constants have to be calculated. In this paper a method is examined that does not calculate these constants exactly, but approximates them using Markov chain Monte Carlo. As an example, the method is applied to

  10. Holding Period Return-Risk Modeling: Ambiguity in Estimation

    NARCIS (Netherlands)

    W.G.P.M. Hallerbach (Winfried)

    2003-01-01

    textabstractIn this paper we explore the theoretical and empirical problems of estimating average (excess) return and risk of US equities over various holding periods and sample periods. Our findings are relevant for performance evaluation, for estimating the historical equity risk premium, and for

  12. Hierarchical Linear Modeling with Maximum Likelihood, Restricted Maximum Likelihood, and Fully Bayesian Estimation

    Science.gov (United States)

    Boedeker, Peter

    2017-01-01

    Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…

  13. Application of Parameter Estimation for Diffusions and Mixture Models

    DEFF Research Database (Denmark)

    Nolsøe, Kim

    with the posterior score function. From an application point of view this methodology is easy to apply, since the optimal estimating function $G(\theta; X_{t_1}, \ldots, X_{t_n})$ is equal to the classical optimal estimating function plus a correction term which takes the prior information into account. The methodology is particularly useful in situations where prior information is available and only few observations are present; the resulting estimators in some sense have better properties than the classical estimators. The second idea is to formulate Michael Sørensen's method of "prediction-based estimating functions" for measurement... from a posterior distribution. The sampling algorithm is constructed from a Markov chain which allows the dimension of each sample to vary; this is obtained by utilizing the reversible jump methodology proposed by Peter Green. Each sample is constructed such that the corresponding structures...

  14. The Spatial Fay-Herriot Model in Poverty Estimation

    Directory of Open Access Journals (Sweden)

    Wawrowski Łukasz

    2016-12-01

    Counteracting poverty is one of the objectives of the European Commission, clearly emphasized in the Europe 2020 strategy. Conducting appropriate social policy requires knowledge of the extent of this phenomenon. Such information is provided by surveys on living conditions conducted by, among others, the Central Statistical Office (CSO). Nevertheless, the sample size in these surveys allows a precise estimation of the poverty rate only at a very general level: the whole country and regions. The small sample size at lower levels of spatial aggregation results in a large variance of the obtained estimates and hence lower reliability. To obtain information in sparsely represented territorial sections, methods of small area estimation are used. By using information from other sources, such as censuses and administrative registers, it is possible to estimate distribution parameters with smaller variance than in the case of direct estimation.
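
    The non-spatial core of the Fay-Herriot machinery behind this approach can be sketched as follows; the spatial variant adds correlated area effects, and all data and parameter values below are illustrative assumptions:

```python
import numpy as np

def fh_eblup(y, X, psi, sigma2_u):
    """Fay-Herriot small-area estimator: y_i = x_i'beta + u_i + e_i with
    var(u_i) = sigma2_u and known sampling variances var(e_i) = psi_i.
    Returns EBLUPs that shrink direct estimates toward the synthetic
    regression part. sigma2_u is taken as given here; in practice it is
    estimated, e.g. by REML."""
    gamma = sigma2_u/(sigma2_u + psi)            # shrinkage factors
    V_inv = 1.0/(sigma2_u + psi)                 # diagonal GLS weights
    beta = np.linalg.solve(X.T @ (V_inv[:, None]*X), X.T @ (V_inv*y))
    synthetic = X @ beta
    return gamma*y + (1 - gamma)*synthetic

rng = np.random.default_rng(6)
m = 30                                           # number of small areas
X = np.column_stack([np.ones(m), rng.normal(size=m)])
psi = rng.uniform(0.5, 2.0, m)                   # known sampling variances
theta = X @ np.array([0.3, 0.2]) + rng.normal(0, 0.5, m)
y = theta + rng.normal(0, np.sqrt(psi))          # direct survey estimates
print(fh_eblup(y, X, psi, sigma2_u=0.25)[:5])
```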

  15. Quasi-likelihood estimation of average treatment effects based on model information

    Institute of Scientific and Technical Information of China (English)

    Zhi-hua SUN

    2007-01-01

    In this paper, the estimation of average treatment effects is considered when we have model information on the conditional mean and conditional variance of the responses given the covariates. The quasi-likelihood method, adapted to treatment effects data, is developed to estimate the parameters in the conditional mean and conditional variance models. Based on the model information, we define three estimators by imputation, regression, and inverse probability weighted methods. All the estimators are shown to be asymptotically normal. Our simulation results show that by using the model information, substantial efficiency gains are obtained which are comparable with existing estimators.

  17. Parametrically guided estimation in nonparametric varying coefficient models with quasi-likelihood.

    Science.gov (United States)

    Davenport, Clemontina A; Maity, Arnab; Wu, Yichao

    2015-04-01

    Varying coefficient models allow us to generalize standard linear regression models to incorporate complex covariate effects by modeling the regression coefficients as functions of another covariate. For nonparametric varying coefficients, we can borrow the idea of parametrically guided estimation to improve asymptotic bias. In this paper, we develop a guided estimation procedure for the nonparametric varying coefficient models. Asymptotic properties are established for the guided estimators and a method of bandwidth selection via bias-variance tradeoff is proposed. We compare the performance of the guided estimator with that of the unguided estimator via both simulation and real data examples.

  18. Bayesian estimation of regularization parameters for deformable surface models

    Energy Technology Data Exchange (ETDEWEB)

    Cunningham, G.S.; Lehovich, A.; Hanson, K.M.

    1999-02-20

    In this article the authors build on their past attempts to reconstruct a 3D, time-varying bolus of radiotracer from first-pass data obtained by the dynamic SPECT imager, FASTSPECT, built by the University of Arizona. The object imaged is a CardioWest total artificial heart. The bolus is entirely contained in one ventricle and its associated inlet and outlet tubes. The model for the radiotracer distribution at a given time is a closed surface parameterized by 482 vertices that are connected to make 960 triangles, with nonuniform intensity variations of radiotracer allowed inside the surface on a voxel-to-voxel basis. The total curvature of the surface is minimized through the use of a weighted prior in the Bayesian framework, as is the weighted norm of the gradient of the voxellated grid. MAP estimates for the vertices, interior intensity voxels, and background count level are produced. The strengths of the priors, or hyperparameters, are determined by maximizing the probability of the data given the hyperparameters, called the evidence. The evidence is calculated by first assuming that the posterior is approximately normal in the values of the vertices and voxels, and then by evaluating the integral of the multi-dimensional normal distribution. This integral (which requires evaluating the determinant of a covariance matrix) is computed by applying a recent algorithm from Bai et al. that calculates the needed determinant efficiently. The authors demonstrate that the radiotracer is highly inhomogeneous in early time frames, as suspected in earlier reconstruction attempts that assumed a uniform intensity of radiotracer within the closed surface, and that the optimal choice of hyperparameters is substantially different for different time frames.

  19. Parameter estimation for the subcritical Heston model based on discrete time observations

    OpenAIRE

    2014-01-01

    We study the asymptotic properties of some (essentially conditional least squares) parameter estimators for the subcritical Heston model based on discrete-time observations, derived from conditional least squares estimators of certain modified parameters.

  20. A Note on Parameter Estimations of Panel Vector Autoregressive Models with Intercorrelation

    Institute of Scientific and Technical Information of China (English)

    Jian-hong Wu; Li-xing Zhu; Zai-xing Li

    2009-01-01

    This note considers parameter estimation for panel vector autoregressive models with intercorrelation. Conditional least squares estimators are derived and their asymptotic normality is established. A simulation is carried out for illustration.

  1. Estimation of pyrethroid pesticide intake using regression modeling of food groups based on composite dietary samples

    Science.gov (United States)

    Population-based estimates of pesticide intake are needed to characterize exposure for particular demographic groups based on their dietary behaviors. Regression modeling performed on measurements of selected pesticides in composited duplicate diet samples allowed (1) estimation ...

  2. truncSP: An R Package for Estimation of Semi-Parametric Truncated Linear Regression Models

    Directory of Open Access Journals (Sweden)

    Maria Karlsson

    2014-05-01

    Problems with truncated data occur in many areas, complicating estimation and inference. Regarding linear regression models, the ordinary least squares estimator is inconsistent and biased for these types of data and is therefore unsuitable for use. Alternative estimators, designed for the estimation of truncated regression models, have been developed. This paper presents the R package truncSP. The package contains functions for the estimation of semi-parametric truncated linear regression models using three different estimators: the symmetrically trimmed least squares, quadratic mode, and left-truncated estimators, all of which have been shown to have good asymptotic and finite sample properties. The package also provides functions for the analysis of the estimated models. Data from the environmental sciences are used to illustrate the functions in the package.

  3. Parameter and State Estimator for State Space Models

    Directory of Open Access Journals (Sweden)

    Ruifeng Ding

    2014-01-01

    This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.

  5. Consistent Fundamental Matrix Estimation in a Quadratic Measurement Error Model Arising in Motion Analysis

    OpenAIRE

    Kukush, A.; Markovsky, I.; Van Huffel, S.

    2002-01-01

    Consistent estimators of the rank-deficient fundamental matrix yielding information on the relative orientation of two images in two-view motion analysis are derived. The estimators are derived by minimizing a corrected contrast function in a quadratic measurement error model. In addition, a consistent estimator for the measurement error variance is obtained. Simulation results show the improved accuracy of the newly proposed estimator compared to the ordinary total least-squares estimator.

  6. Estimation of Regurgitant Volume and Orifice in Aortic Regurgitation Combining CW Doppler and Parameter Estimation in a Windkessel Like Model

    Directory of Open Access Journals (Sweden)

    Bjørn A.J. Angelsen

    1991-01-01

    A method for noninvasive estimation of the regurgitant orifice and volume in aortic regurgitation is proposed and tested in anaesthetized open-chested pigs. The method can be used with noninvasive measurement of the regurgitant jet velocity by continuous wave ultrasound Doppler together with cuff measurements of systolic and diastolic systemic pressure in the arm. These measurements are then used for parameter estimation in a Windkessel-like model which includes the regurgitant orifice as a parameter. The aortic volume compliance and the peripheral resistance are also included as parameters and estimated in the same process. For the test of the method, invasive measurements in the open-chest pigs are used: electromagnetic flow measurements in the ascending aorta and pulmonary artery serve as the control, and a correlation of 0.95 between the regurgitant volume obtained from parameter estimation and from the electromagnetic flow measurements is obtained over a range from 2.1 to 17.8 mL.
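
    A minimal numerical sketch of this model class; the equations and parameter values below are simplifying assumptions for illustration, not the authors' exact model, which also estimates R, C, and the orifice from the Doppler and cuff data:

```python
import numpy as np

def diastolic_pressure(p0, t, R, C, a_reg, rho=1060.0):
    """Integrate a Windkessel-like diastolic decay with a regurgitant leak:
    C*dP/dt = -P/R - Q_reg, with a Bernoulli-type orifice flow
    Q_reg = a_reg*sqrt(2*P/rho) (ventricular diastolic pressure taken as ~0)."""
    p = np.empty_like(t)
    q = np.empty_like(t)
    p[0] = p0
    for i in range(len(t)):
        q[i] = a_reg*np.sqrt(2.0*max(p[i], 0.0)/rho)   # regurgitant flow, m^3/s
        if i + 1 < len(t):
            dt = t[i+1] - t[i]
            p[i+1] = p[i] + dt*(-p[i]/R - q[i])/C       # forward Euler step
    return p, q

t = np.linspace(0.0, 0.6, 600)                 # one diastole, seconds
# illustrative values: P in Pa (80 mmHg ~ 10.7 kPa), R in Pa s/m^3, C in m^3/Pa
p, q = diastolic_pressure(10.7e3, t, R=1.2e8, C=1.2e-8, a_reg=5e-6)
reg_volume_ml = np.sum(q[:-1]*np.diff(t))*1e6  # regurgitant volume over diastole, mL
print(p[-1]/133.3, reg_volume_ml)              # end-diastolic pressure (mmHg), volume
```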

  7. E-model MOS Estimate Improvement through Jitter Buffer Packet Loss Modelling

    Directory of Open Access Journals (Sweden)

    Adrian Kovac

    2011-01-01

    The article analyses the dependence of MOS, as a voice call quality (QoS) measure, estimated through the ITU-T E-model under real network conditions with jitter. A method for incorporating the jitter effect is proposed. Jitter, as voice packet time uncertainty, appears as increased packet loss caused by jitter memory buffer under- or overflow. The jitter buffer behaviour at the receiver's side is modelled as a Pareto/D/1/K system with Pareto-distributed packet interarrival times, and its performance is experimentally evaluated using statistical tools. The jitter buffer stochastic model is then incorporated into the E-model in an additive manner, accounting for network jitter effects via an excess packet loss term that complements the measured network packet loss. The proposed modification of the E-model input parameters adds two degrees of freedom to the modelling: network jitter and jitter buffer size.
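    The additive construction can be illustrated as follows: an effective packet loss combines the measured network loss with the buffer-induced loss predicted by the jitter model, and feeds the standard ITU-T G.107 R-factor-to-MOS mapping. The buffer-loss input below is a placeholder for the paper's Pareto/D/1/K result, and the codec constants (Ie = 0, Bpl = 25.1) are illustrative.

    ```python
    def r_to_mos(R):
        # ITU-T G.107 mapping from R-factor to MOS
        if R <= 0:
            return 1.0
        if R >= 100:
            return 4.5
        return 1 + 0.035*R + 7e-6*R*(R - 60)*(100 - R)

    def effective_loss(p_network, p_buffer):
        # network loss and buffer loss combined as independent events
        return 1 - (1 - p_network)*(1 - p_buffer)

    def mos_estimate(p_network, p_buffer, Ie=0.0, Bpl=25.1, R0=93.2):
        Ppl = 100*effective_loss(p_network, p_buffer)   # percent
        Ie_eff = Ie + (95 - Ie)*Ppl/(Ppl + Bpl)         # G.107 loss impairment
        return r_to_mos(R0 - Ie_eff)

    # e.g. 1% measured network loss plus 0.5% buffer-induced loss
    print("MOS estimate: %.2f" % mos_estimate(0.01, 0.005))
    ```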

  8. Sensitivity to Estimation Errors in Mean-variance Models

    Institute of Scientific and Technical Information of China (English)

    Zhi-ping Chen; Cai-e Zhao

    2003-01-01

    In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances and covariances, the joint effect of estimation errors in means, variances and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The rate of change of the efficient portfolio's weights with respect to variations in the risk-return estimates is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not very sensitive to estimation errors in means and variances. Moreover, we point out the extreme cases that might cause stability problems and how to avoid them in practice. Preliminary numerical results are also provided as an illustration of our theoretical results.
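    A quick numerical probe of the sensitivity question, under assumptions of our own (an unconstrained tangency portfolio w proportional to inv(Sigma)(mu - rf), and invented inputs): perturb the mean vector slightly and observe how much the weights move.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 5
    B = rng.standard_normal((n, n))
    Sigma = B @ B.T/n + 0.05*np.eye(n)     # a well-conditioned covariance matrix
    mu = rng.uniform(0.02, 0.10, n)        # expected returns

    def tangency(mu, Sigma, rf=0.01):
        w = np.linalg.solve(Sigma, mu - rf)
        return w/np.sum(w)                 # weights summing to one

    w0 = tangency(mu, Sigma)
    for eps in (1e-3, 1e-2):
        dw = tangency(mu + eps*rng.standard_normal(n), Sigma) - w0
        print("perturbation %.0e -> max weight shift %.4f" % (eps, np.max(np.abs(dw))))
    ```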

  9. EFFICIENT ESTIMATION OF FUNCTIONAL-COEFFICIENT REGRESSION MODELS WITH DIFFERENT SMOOTHING VARIABLES

    Institute of Scientific and Technical Information of China (English)

    Zhang Riquan; Li Guoying

    2008-01-01

    In this article, a procedure is defined for estimating the coefficient functions in functional-coefficient regression models with different smoothing variables in different coefficient functions. In the first step, initial estimates of the coefficient functions are obtained by the local linear technique and an averaging method. In the second step, based on the initial estimates, efficient estimates of the coefficient functions are obtained by a one-step back-fitting procedure. The efficient estimators share the same asymptotic normality as the local linear estimators for functional-coefficient models with a single smoothing variable. Two simulated examples show that the procedure is effective.
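    The first-step machinery can be sketched in miniature: a local linear fit solves a kernel-weighted least squares problem at each evaluation point. The toy model below estimates a single smooth function; the functional-coefficient case adds the x-regressors to the same weighted fit.

    ```python
    import numpy as np

    def local_linear(u0, u, y, h):
        # weighted LS of y on [1, u - u0] with Gaussian kernel weights
        w = np.exp(-0.5*((u - u0)/h)**2)
        X = np.column_stack([np.ones_like(u), u - u0])
        WX = X*w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
        return beta[0]                       # intercept = estimate at u0

    rng = np.random.default_rng(4)
    u = rng.uniform(0, 1, 400)
    y = np.sin(2*np.pi*u) + 0.2*rng.standard_normal(u.size)
    grid = np.linspace(0.05, 0.95, 10)
    print([round(local_linear(g, u, y, h=0.05), 2) for g in grid])
    ```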

  10. A new approach for estimating the efficiencies of the nucleotide substitution models.

    Science.gov (United States)

    Som, Anup

    2007-04-01

    In this article, a new approach is presented for estimating the efficiencies of nucleotide substitution models in a four-taxon case; this approach is then used to estimate the relative efficiencies of six substitution models under a wide variety of conditions. In this approach, the efficiencies of the models are estimated using simple probability distribution theory. To assess the accuracy of the new approach, the efficiencies of the models are also estimated using the direct estimation method. Simulation results from the direct estimation method confirm that the new approach is highly accurate. The success of the new approach opens a unique opportunity to develop analytical methods for estimating the relative efficiencies of substitution models in a straightforward way.

  11. Specification, Estimation and Evaluation of Vector Smooth Transition Autoregressive Models with Applications

    OpenAIRE

    Teräsvirta, Timo; Yang, Yukai

    2014-01-01

    We consider a nonlinear vector model called the logistic vector smooth transition autoregressive model. The bivariate single-transition vector smooth transition regression model of Camacho (2004) is generalised to a multivariate and multitransition one. A modelling strategy consisting of specification (including linearity testing), estimation and evaluation of these models is constructed. Nonlinear least squares estimation of the parameters of the model is discussed. Evaluation by misspecifica...

  12. Evaluation of black carbon estimations in global aerosol models

    NARCIS (Netherlands)

    Koch, D.; Schulz, M.; McNaughton, C.; Spackman, J.R.; Balkanski, Y.; Bauer, S.; Krol, M.C.

    2009-01-01

    We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentr

  13. Aids to determining fuel models for estimating fire behavior

    Science.gov (United States)

    Hal E. Anderson

    1982-01-01

    Presents photographs of wildland vegetation appropriate for the 13 fuel models used in mathematical models of fire behavior. Fuel model descriptions include fire behavior associated with each fuel and its physical characteristics. A similarity chart cross-references the 13 fire behavior fuel models to the 20 fuel models used in the National Fire Danger Rating System....

  14. Methodology for the Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    This model-based approach uses data from both the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS) to produce estimates of the prevalence rates of cancer risk factors and screening behaviors at the state, health service area, and county levels.

  15. Los Alamos Waste Management Cost Estimation Model; Final report: Documentation of waste management process, development of Cost Estimation Model, and model reference manual

    Energy Technology Data Exchange (ETDEWEB)

    Matysiak, L.M.; Burns, M.L.

    1994-03-01

    This final report completes the Los Alamos Waste Management Cost Estimation Project, and includes the documentation of the waste management processes at Los Alamos National Laboratory (LANL) for hazardous, mixed, low-level radioactive solid and transuranic waste, development of the cost estimation model and a user reference manual. The ultimate goal of this effort was to develop an estimate of the life cycle costs for the aforementioned waste types. The Cost Estimation Model is a tool that can be used to calculate the costs of waste management at LANL for the aforementioned waste types, under several different scenarios. Each waste category at LANL is managed in a separate fashion, according to Department of Energy requirements and state and federal regulations. The cost of the waste management process for each waste category has not previously been well documented. In particular, the costs associated with the handling, treatment and storage of the waste have not been well understood. It is anticipated that greater knowledge of these costs will encourage waste generators at the Laboratory to apply waste minimization techniques to current operations. Expected benefits of waste minimization are a reduction in waste volume, decrease in liability and lower waste management costs.

  16. A Family of Computationally Efficient and Simple Estimators for Unnormalized Statistical Models

    CERN Document Server

    Pihlaja, Miika; Hyvarinen, Aapo

    2012-01-01

    We introduce a new family of estimators for unnormalized statistical models. Our family of estimators is parameterized by two nonlinear functions and uses a single sample from an auxiliary distribution, generalizing Maximum Likelihood Monte Carlo estimation of Geyer and Thompson (1992). The family is such that we can estimate the partition function like any other parameter in the model. The estimation is done by optimizing an algebraically simple, well-defined objective function, which allows for the use of dedicated optimization methods. We establish consistency of the estimator family and give an expression for the asymptotic covariance matrix, which enables us to further analyze the influence of the nonlinearities and the auxiliary density on estimation performance. Some estimators in our family are particularly stable for a wide range of auxiliary densities. Interestingly, a specific choice of the nonlinearity establishes a connection between density estimation and classification by nonlinear logistic reg...
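    The best-known member of this family corresponds to the logistic nonlinearity (noise-contrastive estimation). A minimal sketch, on a synthetic unnormalized Gaussian whose log-normalizer c is treated as a free parameter alongside the precision tau; all data are simulated.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    x = rng.normal(0.0, 1.5, 2000)              # data, true precision tau = 1/1.5^2
    y = rng.normal(0.0, 3.0, 2000)              # one sample from the auxiliary density
    log_noise = lambda u: -0.5*(u/3.0)**2 - np.log(3.0*np.sqrt(2*np.pi))

    def neg_objective(theta):
        tau, c = theta                          # c: log-normalizer as a parameter
        log_model = lambda u: -0.5*tau*u**2 + c
        G_x = log_model(x) - log_noise(x)       # log-ratio at data points
        G_y = log_model(y) - log_noise(y)       # log-ratio at auxiliary points
        # logistic loss for classifying data against auxiliary samples
        return np.mean(np.logaddexp(0, -G_x)) + np.mean(np.logaddexp(0, G_y))

    tau, c = minimize(neg_objective, x0=[1.0, -1.0], method="Nelder-Mead").x
    print("tau = %.3f (true %.3f), c = %.3f" % (tau, 1/1.5**2, c))
    ```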

  17. Bootstrap-estimated land-to-water coefficients from the CBTN_v4 SPARROW model

    Science.gov (United States)

    Ator, Scott; Brakebill, John W.; Blomquist, Joel D.

    2017-01-01

    This file contains 200 sets of bootstrap-estimated land-to-water coefficients from the CBTN_v4 SPARROW model, which is documented in USGS Scientific Investigations Report 2011-5167. The coefficients were produced as part of CBTN_v4 model calibration to provide information about the uncertainty in model estimates.

  18. A novel method to estimate model uncertainty using machine learning techniques

    NARCIS (Netherlands)

    Solomatine, D.P.; Lal Shrestha, D.

    2009-01-01

    A novel method is presented for model uncertainty estimation using machine learning techniques and its application in rainfall runoff modeling. In this method, first, the probability distribution of the model error is estimated separately for different hydrological situations and second, the

  19. Estimation and asymptotic theory for transition probabilities in Markov Renewal Multi-state models

    NARCIS (Netherlands)

    Spitoni, C.; Verduijn, M.; Putter, H.

    2012-01-01

    In this paper we discuss estimation of transition probabilities for semi-Markov multi-state models. Non-parametric and semi-parametric estimators of the transition probabilities for a large class of models (forward going models) are proposed. Large sample theory is derived using the functional delta

  20. Non-gaussian Test Models for Prediction and State Estimation with Model Errors

    Institute of Scientific and Technical Information of China (English)

    Michal BRANICKI; Nan CHEN; Andrew J.MAJDA

    2013-01-01

    Turbulent dynamical systems involve dynamics with both a large dimensional phase space and a large number of positive Lyapunov exponents. Such systems are ubiquitous in applications in contemporary science and engineering, where statistical ensemble prediction and real-time filtering/state estimation are needed despite the underlying complexity of the system. Statistically exactly solvable test models play a crucial role in providing firm mathematical underpinning or new algorithms for vastly more complex scientific phenomena. Here, a class of statistically exactly solvable non-Gaussian test models is introduced, where a generalized Feynman-Kac formulation reduces the exact behavior of conditional statistical moments to the solution of inhomogeneous Fokker-Planck equations modified by linear lower order coupling and source terms. This procedure is applied to a test model with hidden instabilities and is combined with information theory to address two important issues in the contemporary statistical prediction of turbulent dynamical systems: coarse-grained ensemble prediction in a perfect model and improved long-range forecasting in imperfect models. The models discussed here should be useful for many other applications and algorithms for real-time prediction and state estimation.

  1. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models

    OpenAIRE

    Baker Syed; Poskar C; Junker Björn

    2011-01-01

    In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. Wh...

  2. EMG-to-force estimation with full-scale physiology based muscle model

    OpenAIRE

    Hayashibe, Mitsuhiro; Guiraud, David; Poignet, Philippe

    2009-01-01

    EMG-to-force estimation for voluntary muscle contraction has many applications in human-machine interaction, motion analysis, and rehabilitation robotics for prosthetic limbs or exoskeletons. An EMG-based model can account for a subject's individual activation patterns to estimate muscle force. For the estimation, the so-called Hill-type model has been used in most cases. It has already shown promising performance, but it is still known as a phenomenological model ...
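    A generic Hill-type sketch (the textbook phenomenological form, not the paper's full-scale physiological model): force is the product of an activation level derived from the EMG envelope, the maximal force, and force-length and force-velocity factors. All curve constants are typical illustrative values.

    ```python
    import numpy as np

    def force_length(l_norm):
        # Gaussian active force-length curve around the optimal fiber length
        return np.exp(-((l_norm - 1.0)/0.45)**2)

    def force_velocity(v_norm, k=0.25):
        # concentric branch of the Hill hyperbola; v_norm = shortening vel. / v_max
        v = np.clip(v_norm, 0.0, 1.0)
        return (1.0 - v)/(1.0 + v/k)

    def emg_to_force(emg_env, l_norm, v_norm, F_max=1500.0, A=-2.0):
        # nonlinear EMG-to-activation shaping with shape factor A in (-3, 0)
        a = (np.exp(A*emg_env) - 1.0)/(np.exp(A) - 1.0)
        return a*F_max*force_length(l_norm)*force_velocity(v_norm)

    # e.g. 60% EMG envelope, fiber slightly beyond optimal length, slow shortening
    print("estimated force: %.0f N" % emg_to_force(0.6, 1.05, 0.1))
    ```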

  3. A Robbins-Monro procedure for estimation in semiparametric regression models

    CERN Document Server

    Bercu, Bernard

    2011-01-01

    This paper is devoted to the parametric estimation of a shift together with the nonparametric estimation of a regression function in a semiparametric regression model. We implement a Robbins-Monro procedure very efficient and easy to handle. On the one hand, we propose a stochastic algorithm similar to that of Robbins-Monro in order to estimate the shift parameter. A preliminary evaluation of the regression function is not necessary for estimating the shift parameter. On the other hand, we make use of a recursive Nadaraya-Watson estimator for the estimation of the regression function. This kernel estimator takes in account the previous estimation of the shift parameter. We establish the almost sure convergence for both Robbins-Monro and Nadaraya-Watson estimators. The asymptotic normality of our estimates is also provided.
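    The two building blocks can be sketched separately: a Robbins-Monro recursion finding the root of an unknown mean function from noisy evaluations, and a recursive Nadaraya-Watson estimator updated one observation at a time. Their exact coupling in the paper (the shift estimate entering the kernel recursion) is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # --- Robbins-Monro: solve m(theta) = 0 for unknown m via noisy queries
    m = lambda t: np.tanh(t - 0.5)            # hypothetical mean function
    theta = 0.0
    for n in range(1, 5001):
        noisy = m(theta) + 0.3*rng.standard_normal()
        theta -= (1.0/n)*noisy                # gamma_n = 1/n step sizes
    print("root estimate:", round(theta, 3))  # true root is 0.5

    # --- Recursive Nadaraya-Watson: running numerator/denominator sums
    f = lambda x: np.sin(2*x)
    grid = np.linspace(-2, 2, 9)
    num = np.zeros_like(grid)
    den = np.zeros_like(grid)
    for n in range(1, 5001):
        Xn = rng.uniform(-2, 2)
        Yn = f(Xn) + 0.2*rng.standard_normal()
        h = n**(-1/5)                         # shrinking bandwidth
        K = np.exp(-0.5*((grid - Xn)/h)**2)/h
        num += K*Yn
        den += K
    print(np.round(num/den - f(grid), 2))     # pointwise errors on the grid
    ```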

  4. A Modified Weighted Symmetric Estimator for a Gaussian First-order Autoregressive Model with Additive Outliers

    Directory of Open Access Journals (Sweden)

    Wararit PANICHKITKOSOLKUL

    2012-09-01

    Guttman and Tiao [1] and Chang [2] showed that outliers may cause serious bias in estimating autocorrelations, partial correlations, and autoregressive moving average parameters (cited in Chang et al. [3]). This paper presents a modified weighted symmetric estimator for a Gaussian first-order autoregressive AR(1) model with additive outliers. We apply a recursive median adjustment based on an exponentially weighted moving average (EWMA) to the weighted symmetric estimator of Park and Fuller [4]. We consider the following estimators: the weighted symmetric estimator, the recursive mean adjusted weighted symmetric estimator proposed by Niwitpong [5], the recursive median adjusted weighted symmetric estimator proposed by Panichkitkosolkul [6], and the weighted symmetric estimator using an adjusted recursive median based on the EWMA. Using Monte Carlo simulations, we compare the mean square errors (MSE) of these estimators. Simulation results show that the proposed estimator provides a lower MSE than the others in almost all situations.
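    A small simulation in the spirit of the paper's motivation, showing why an adjustment is needed: additive outliers bias the usual AR(1) coefficient estimate toward zero. Plain least squares stands in for the weighted symmetric estimators, which would be compared in the same way.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    phi, N, reps = 0.8, 200, 2000
    est_clean, est_outl = [], []
    for _ in range(reps):
        e = rng.standard_normal(N)
        y = np.zeros(N)
        for t in range(1, N):
            y[t] = phi*y[t-1] + e[t]
        z = y.copy()
        idx = rng.choice(N, size=4, replace=False)
        z[idx] += rng.choice([-8.0, 8.0], size=4)       # additive outliers
        for series, store in ((y, est_clean), (z, est_outl)):
            # least squares AR(1) coefficient without intercept
            store.append(series[1:] @ series[:-1] / (series[:-1] @ series[:-1]))
    print("mean estimate, clean   : %.3f" % np.mean(est_clean))
    print("mean estimate, outliers: %.3f" % np.mean(est_outl))  # biased toward 0
    ```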

  5. Over-sampling basis expansion model aided channel estimation for OFDM systems with ICI

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The rapid variation of the channel can induce intercarrier interference in orthogonal frequency-division multiplexing (OFDM) systems. Intercarrier interference significantly increases the difficulty of OFDM channel estimation because too many channel coefficients need to be estimated. In this article, a novel channel estimator is proposed to resolve this problem. The estimator consists of two parts: the channel parameter estimation unit (CPEU), which estimates the number of channel taps and the multipath time delays, and the channel coefficient estimation unit (CCEU), which estimates the channel coefficients using the channel parameters provided by the CPEU. In the CCEU, an over-sampling basis expansion model is used to solve the problem that a large number of channel coefficients need to be estimated. Finally, simulation results are given to assess the performance of the proposed scheme.
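    The basis expansion idea can be sketched for a scalar (flat-fading) simplification of the article's multipath setting: the time-varying gain over one block is approximated by a few complex-exponential basis functions whose coefficients are estimated by least squares from known pilots.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    N, Q = 128, 5                                   # block length, basis size
    n = np.arange(N)
    # true channel: a smoothly varying complex gain (synthetic)
    h = np.exp(1j*2*np.pi*0.01*n) * (1 + 0.3*np.sin(2*np.pi*n/N))
    s = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], N)   # known QPSK pilots
    y = h*s + 0.05*(rng.standard_normal(N) + 1j*rng.standard_normal(N))

    q = np.arange(Q) - Q//2
    B = np.exp(1j*2*np.pi*np.outer(n, q)/N)         # CE-BEM basis, N x Q
    A = B*s[:, None]                                # model: y ~ A @ c
    c = np.linalg.lstsq(A, y, rcond=None)[0]        # LS basis coefficients
    h_hat = B @ c
    print("relative error: %.3f" % (np.linalg.norm(h_hat - h)/np.linalg.norm(h)))
    ```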

  6. Wegner estimate for discrete alloy-type models

    CERN Document Server

    Veselić, Ivan

    2010-01-01

    We study discrete alloy-type random Schrödinger operators on $\ell^2(\mathbb{Z}^d)$. Wegner estimates are bounds on the average number of eigenvalues in an energy interval of finite box restrictions of these types of operators. If the single site potential is compactly supported and the distribution of the coupling constant is of bounded variation, a Wegner estimate holds. The bound is polynomial in the volume of the box and thus applicable as an ingredient for a localisation proof via multiscale analysis.

  7. Estimation of IT energy budget during the St. Patrick's Day storm 2015: observations, modeling and challenges.

    Science.gov (United States)

    Verkhoglyadova, O. P.; Meng, X.; Mannucci, A. J.; Mlynczak, M. G.; Hunt, L. A.; Tsurutani, B.

    2015-12-01

    We present estimates for the energy budget of the 2015 St. Patrick's Day storm. Empirical models and coupling functions are used as proxies for the energy input due to solar wind-magnetosphere coupling. Fluxes of thermospheric nitric oxide and carbon dioxide cooling emissions are estimated in several latitude ranges. Solar wind data and the Weimer 2005 model for high-latitude electrodynamics are used to drive GITM modeling for the storm. Model estimates of energy partitioning, Joule heating, and NO cooling are compared with observations and empirical proxies. We outline challenges in the estimation of the IT energy budget (Joule heating, Poynting flux, particle precipitation) during geomagnetic storms.

  8. Evolving Software Effort Estimation Models Using Multigene Symbolic Regression Genetic Programming

    Directory of Open Access Journals (Sweden)

    Sultan Aljahdali

    2013-12-01

    Software has played an essential role in engineering, economic development, stock market growth and military applications. A mature software industry counts on highly predictive software effort estimation models. Correct estimation of software effort leads to correct estimation of budget and development time, and allows companies to develop an appropriate time plan for marketing campaigns. Nowadays, obtaining these estimates has become a great challenge due to the increasing number of attributes that affect the software development life cycle. Software cost estimation models should be able to provide sufficient confidence in their prediction capabilities. Recently, Computational Intelligence (CI) paradigms were explored to handle the software effort estimation problem with promising results. In this paper we evolve two new models for software effort estimation using Multigene Symbolic Regression Genetic Programming (GP). One model utilizes the Source Lines Of Code (SLOC) as an input variable to estimate the Effort (E), while the second model utilizes the Inputs, Outputs, Files, and User Inquiries to estimate the Function Points (FP). The proposed GP models show better estimation capabilities compared to other models reported in the literature. The validation results are based on the Albrecht data set.
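    For contrast with the evolved GP models, the classical parametric baseline fits Effort = a * SLOC^b by least squares in log space; the (SLOC, effort) pairs below are invented for illustration.

    ```python
    import numpy as np

    sloc   = np.array([10, 25, 40, 90, 150, 300], dtype=float)     # KLOC
    effort = np.array([24, 70, 110, 280, 510, 1100], dtype=float)  # person-months

    # log(effort) = log(a) + b*log(sloc): a linear least squares problem
    X = np.column_stack([np.ones_like(sloc), np.log(sloc)])
    (lna, b), *_ = np.linalg.lstsq(X, np.log(effort), rcond=None)
    a = np.exp(lna)
    print("Effort ~ %.2f * SLOC^%.2f" % (a, b))
    print("predicted for 60 KLOC: %.0f person-months" % (a*60**b))
    ```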

  9. Advanced fuel cycle cost estimation model and its cost estimation results for three nuclear fuel cycles using a dynamic model in Korea

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sungki, E-mail: sgkim1@kaeri.re.kr [Korea Atomic Energy Research Institute, 1045 Daedeokdaero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Ko, Wonil [Korea Atomic Energy Research Institute, 1045 Daedeokdaero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Youn, Saerom; Gao, Ruxing [University of Science and Technology, 217 Gajungro, Yuseong-gu, Daejeon 305-350 (Korea, Republic of); Bang, Sungsig, E-mail: ssbang@kaist.ac.kr [Korea Advanced Institute of Science and Technology, Department of Business and Technology Management, 291 Deahak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)

    2015-11-15

    Highlights: • The nuclear fuel cycle cost using a new cost estimation model was analyzed. • The material flows of three nuclear fuel cycle options were calculated. • The generation cost of once-through was estimated to be 66.88 mills/kW h. • The generation cost of pyro-SFR recycling was estimated to be 78.06 mills/kW h. • The reactor cost was identified as the main cost driver of pyro-SFR recycling. - Abstract: The present study analyzes advanced nuclear fuel cycle cost estimation models, such as the different discount rate model, and their cost estimation results. To this end, the nuclear fuel cycle costs of three options (direct disposal (once through), PWR–MOX (Mixed OXide fuel), and Pyro-SFR (Sodium-cooled Fast Reactor)) were analyzed from an economic viewpoint, focusing on the cost estimation model, using a dynamic model. The fuel cycle cost estimation results show that some cost gap exists between the traditional same discount rate model and the advanced different discount rate model. However, this gap does not change the economic ranking of the nuclear fuel cycle options. In addition, the fuel cycle costs of OT (Once-Through) and Pyro-SFR recycling, based on the most likely values from a probabilistic cost estimation excluding reactor costs, were calculated to be 8.75 mills/kW h and 8.30 mills/kW h, respectively; that is, the Pyro-SFR recycling option was more economical than the direct disposal option. However, if the reactor cost is considered, the ranking of the generation costs of the two options (direct disposal vs. Pyro-SFR recycling) can change because of the high reactor cost of an SFR.

  10. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)

    ADMIN

    ... for the factor effects is motivated from the context of the ... extremely useful in small area estimation theory, where one can ... study as the ten regions stratified by six education classes. ... The expectation of a function g(θ) of the parameters is ... money income. ... included in the paper in the interest of saving space.

  11. Monte Carlo estimation of the conditional Rasch model

    NARCIS (Netherlands)

    Akkermans, W.

    1998-01-01

    In order to obtain conditional maximum likelihood estimates, the conditioning constants are needed. Geyer and Thompson (1992) proposed a Markov chain Monte Carlo method that can be used to approximate these constants when they are difficult to calculate exactly. In the present paper, their method is

  12. Estimating net present value variability for deterministic models

    NARCIS (Netherlands)

    van Groenendaal, W.J.H.

    1995-01-01

    For decision makers the variability in the net present value (NPV) of an investment project is an indication of the project's risk. So-called risk analysis is one way to estimate this variability. However, risk analysis requires knowledge about the stochastic character of the inputs. For large, long
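    The risk-analysis step the abstract refers to amounts to propagating stochastic inputs through the deterministic NPV model by Monte Carlo and reporting the spread; a sketch with invented cash-flow assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def npv(cashflows, rate):
        t = np.arange(len(cashflows))
        return np.sum(cashflows/(1 + rate)**t)

    draws = []
    for _ in range(10000):
        revenue = rng.normal(120, 20, 10)          # uncertain yearly revenue
        costs = rng.normal(70, 10, 10)             # uncertain yearly costs
        cf = np.concatenate([[-400.0], revenue - costs])   # year-0 investment
        draws.append(npv(cf, rate=0.08))
    draws = np.array(draws)
    print("mean NPV %.1f, std %.1f, P(NPV < 0) = %.3f"
          % (draws.mean(), draws.std(), np.mean(draws < 0)))
    ```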

  13. Coupling diffusion and maximum entropy models to estimate thermal inertia

    Science.gov (United States)

    Thermal inertia is a physical property of soil at the land surface related to water content. We have developed a method for estimating soil thermal inertia using two daily measurements of surface temperature, to capture the diurnal range, and diurnal time series of net radiation and specific humidi...

  14. A Microeconomic Interpretation of the Maximum Entropy Estimator of Multinomial Logit Models and Its Equivalence to the Maximum Likelihood Estimator

    Directory of Open Access Journals (Sweden)

    Louis de Grange

    2010-09-01

    Maximum entropy models are often used to describe supply and demand behavior in urban transportation and land use systems. However, they have been criticized for not representing the behavioral rules of system agents and because their parameters seem to adjust only to modeler-imposed constraints. In response, it is demonstrated that the solution to the entropy maximization problem with linear constraints is a multinomial logit model whose parameters solve the likelihood maximization problem of this probabilistic model. But this result neither provides a microeconomic interpretation of the entropy maximization problem nor explains the equivalence of these two optimization problems. This work demonstrates that an analysis of the dual of the entropy maximization problem yields two useful alternative explanations of its solution. The first shows that the maximum entropy estimators of the multinomial logit model parameters reproduce rational user behavior, while the second shows that the likelihood maximization problem for multinomial logit models is the dual of the entropy maximization problem.
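    The equivalence can be checked numerically: maximizing entropy subject to the linear (expected-attribute) constraints reproduces the closed-form multinomial logit probabilities p_i proportional to exp(lambda . x_i). The attribute values below are arbitrary.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    x = np.array([[1.0, 0.2], [0.5, 1.0], [0.0, 0.4], [0.8, 0.8]])  # alternatives
    lam_true = np.array([1.5, -0.7])
    p_logit = np.exp(x @ lam_true)
    p_logit /= p_logit.sum()
    target = x.T @ p_logit                 # linear constraints: E[x] = target

    def neg_entropy(p):
        return np.sum(p*np.log(p))

    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1},
            {"type": "eq", "fun": lambda p: x.T @ p - target}]
    res = minimize(neg_entropy, x0=np.full(4, 0.25), bounds=[(1e-9, 1)]*4,
                   constraints=cons, method="SLSQP")
    print(np.round(res.x, 4))              # matches the logit probabilities
    print(np.round(p_logit, 4))
    ```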

  15. CML based estimation of extended Rasch models with the eRm package in R

    Directory of Open Access Journals (Sweden)

    PATRICK MAIR

    2007-03-01

    This paper presents an open source tool for computing extended Rasch models. It is realized in R (R Development Core Team, 2006) and available as the package eRm. In addition to ordinary Rasch models, extended models such as linear logistic test models, (linear) rating scale models, and (linear) partial credit models can be estimated. A striking feature of this package is the implementation of conditional maximum likelihood estimation techniques, which relate directly to Rasch's original concept of specific objectivity. The mathematical and epistemological benefits of this estimation method are discussed. Moreover, the capabilities of the eRm routine with respect to structural item response designs are demonstrated.
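    The core of CML estimation for the ordinary Rasch model fits in a few lines: the conditional likelihood given each person's raw score depends on the item parameters only through elementary symmetric functions. A compact sketch on simulated data (eRm handles this, and the extended models, far more carefully):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(10)
    I, P = 6, 500
    beta_true = np.linspace(-1.5, 1.5, I)          # item difficulties (mean zero)
    theta = rng.normal(0.0, 1.0, P)                # person abilities (nuisance)
    prob = 1.0/(1.0 + np.exp(-(theta[:, None] - beta_true[None, :])))
    X = (rng.uniform(size=(P, I)) < prob).astype(int)

    def esf(eps):
        # elementary symmetric functions gamma_0..gamma_I via polynomial recursion
        g = np.zeros(len(eps) + 1)
        g[0] = 1.0
        for e in eps:
            g[1:] = g[1:] + e*g[:-1]
        return g

    scores = X.sum(axis=1)                         # person raw scores
    s_item = X.sum(axis=0)                         # item scores

    def neg_cll(b):
        beta = b - b.mean()                        # identification: mean-zero betas
        g = esf(np.exp(-beta))
        return s_item @ beta + np.sum(np.log(g[scores]))

    b_hat = minimize(neg_cll, x0=np.zeros(I), method="BFGS").x
    print("CML estimates:", np.round(b_hat - b_hat.mean(), 2))
    print("true values  :", np.round(beta_true, 2))
    ```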

  16. Model Independent Foreground Power Spectrum Estimation using WMAP 5-year Data

    CERN Document Server

    Ghosh, Tuhin; Jain, Pankaj; Souradeep, Tarun

    2009-01-01

    In this paper, we propose and implement, on WMAP 5-year data, a model independent approach to foreground power spectrum estimation for multifrequency observations of CMB experiments. Recently, a model independent approach to CMB power spectrum estimation was proposed by Saha et al. 2006. This methodology demonstrates that the CMB power spectrum can be reliably estimated solely from WMAP data without assuming any template models for the foreground components. In the current paper, we extend this work to estimate the galactic foreground power spectrum using the WMAP 5-year maps in a self-contained analysis. We apply the model independent method in the harmonic basis to estimate the foreground power spectrum and the frequency dependence of the combined foregrounds. We also study the behaviour of the synchrotron spectral index variation over different regions of the sky. We compare our results with those obtained from MEM foreground maps, which are formed in pixel space. We find that relative to our model independent estimates...

  17. Admissible Estimators in the General Multivariate Linear Model with Respect to Inequality Restricted Parameter Set

    Directory of Open Access Journals (Sweden)

    Liu Gang

    2009-01-01

    By using the methods of linear algebra and matrix inequality theory, we obtain a characterization of admissible estimators in the general multivariate linear model with respect to an inequality restricted parameter set. In the classes of homogeneous and general linear estimators, the necessary and sufficient conditions under which the estimators of the regression coefficient function are admissible are established.

  18. Realized mixed-frequency factor models for vast dimensional covariance estimation

    NARCIS (Netherlands)

    K. Bannouh (Karim); M.P.E. Martens (Martin); R.C.A. Oomen (Roel); D.J.C. van Dijk (Dick)

    2012-01-01

    We introduce a Mixed-Frequency Factor Model (MFFM) to estimate vast dimensional covariance matrices of asset returns. The MFFM uses high-frequency (intraday) data to estimate factor (co)variances and idiosyncratic risk and low-frequency (daily) data to estimate the factor loadings. We

  19. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A;

    1999-01-01

    part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction...

  20. Adaptive Disturbance Estimation for Offset-Free SISO Model Predictive Control

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay;

    2011-01-01

    Offset-free tracking in Model Predictive Control requires estimation of unmeasured disturbances or the inclusion of an integrator. An algorithm for estimation of an unknown disturbance based on adaptive estimation with time-varying forgetting is introduced and benchmarked against the classical...
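    A sketch of the estimation idea: recursive least squares with a forgetting factor tracks a step change in an unmeasured disturbance. The time-varying forgetting schedule used here (briefly lowering lambda after large prediction errors) is a simple stand-in for the paper's algorithm, not a reproduction of it; the plant model is invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    N, a, b = 400, 0.9, 0.5
    u = rng.standard_normal(N)
    d = np.where(np.arange(N) < 200, 0.0, 1.0)      # disturbance step at k = 200

    theta = np.zeros(3)                              # estimates of [a, b, d]
    P = 100.0*np.eye(3)
    y_prev = 0.0
    for k in range(1, N):
        y = a*y_prev + b*u[k-1] + d[k] + 0.05*rng.standard_normal()
        phi = np.array([y_prev, u[k-1], 1.0])        # regressor with bias term
        e = y - phi @ theta                          # prediction error
        lam = 0.90 if abs(e) > 0.5 else 0.999        # time-varying forgetting
        K = P @ phi/(lam + phi @ P @ phi)            # RLS gain
        theta = theta + K*e
        P = (P - np.outer(K, phi @ P))/lam
        y_prev = y
    print("disturbance estimate near end: %.3f (true 1.0)" % theta[2])
    ```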