WorldWideScience

Sample records for models estimating effects

  1. Functional Mixed Effects Model for Small Area Estimation.

    Science.gov (United States)

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

Functional data analysis has become an important area of research due to its ability to handle high dimensional and complex data structures. However, development has been limited in the context of linear mixed effect models, and in particular for small area estimation. Linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data and fit a varying coefficient linear mixed effect model in which the varying coefficients are semi-parametrically modeled via B-splines. We propose a method for estimating the fixed effect parameters and consider prediction of the random effects that can be implemented using standard software. For measuring prediction uncertainty, we derive an analytical expression for the mean squared errors and propose a method of estimating them. The procedure is illustrated via a real data example, and the operating characteristics of the method are judged using finite sample simulation studies.
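As a rough illustration of the kind of model described above, a coefficient that varies smoothly with an index, approximated by B-splines inside a linear mixed model, the following Python sketch fits such a model to simulated area-level data with statsmodels. The data-generating choices (number of areas, the sine-shaped coefficient) are our own assumptions, not the article's.

```python
# A sketch (not the article's code) of a varying coefficient mixed model:
# y_ij = beta(x_ij) * z_ij + u_i + e_ij, with beta(.) expanded in a B-spline
# basis and u_i an area-level random intercept. All data are simulated.
import numpy as np
import statsmodels.api as sm
from patsy import dmatrix

rng = np.random.default_rng(0)
n_areas, n_per_area = 40, 10
area = np.repeat(np.arange(n_areas), n_per_area)
x = rng.uniform(0, 1, n_areas * n_per_area)      # index driving the coefficient
z = rng.normal(size=x.size)                      # covariate with varying effect
u = rng.normal(scale=0.5, size=n_areas)          # area random effects
beta_true = 1.0 + np.sin(2 * np.pi * x)          # true varying coefficient
y = beta_true * z + u[area] + rng.normal(scale=0.3, size=x.size)

# B-spline basis for beta(x); multiplying by z gives the fixed-effects design.
B = np.asarray(dmatrix("bs(x, df=6, degree=3, include_intercept=True) - 1",
                       {"x": x}))
fit = sm.MixedLM(y, B * z[:, None], groups=area).fit()
beta_hat = B @ np.asarray(fit.fe_params)         # estimated beta(x) at each x
print("max abs error in beta(x):", np.max(np.abs(beta_hat - beta_true)))
```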

  2. The problematic estimation of "imitation effects" in multilevel models

    Directory of Open Access Journals (Sweden)

    2003-09-01

It seems plausible that a person's demographic behaviour may be influenced by the behaviour of other people in the community, for example because of an inclination to imitate. When estimating multilevel models from clustered individual data, some investigators might feel tempted to capture this effect by simply including on the right-hand side the average of the dependent variable, constructed by aggregation within the clusters. However, such modelling must be avoided. According to simulation experiments based on real fertility data from India, the estimated effect of this obviously endogenous variable can be very different from the true effect, and the other community effect estimates can also be strongly biased. An "imitation effect" can only be estimated under very special assumptions that in practice will be hard to defend.
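A small simulation along the lines of the argument above illustrates the problem; the data-generating values are invented for illustration. Even when the true "imitation effect" is zero, OLS attributes a large effect to the aggregated dependent variable because the cluster mean of y absorbs the unobserved community effect:

```python
# Toy simulation (assumed setup, not the article's data): regressing an
# individual outcome on the within-cluster mean of that same outcome yields a
# spurious "imitation effect" because the aggregated variable is endogenous.
import numpy as np

rng = np.random.default_rng(1)
n_clusters, n_per = 200, 20
cluster = np.repeat(np.arange(n_clusters), n_per)
u = rng.normal(size=n_clusters)                      # unobserved community effect
x = rng.normal(size=n_clusters * n_per)              # individual covariate
y = 0.5 * x + u[cluster] + rng.normal(size=x.size)   # NO imitation effect

ybar = np.bincount(cluster, weights=y) / n_per       # cluster means of y
X = np.column_stack([np.ones_like(y), x, ybar[cluster]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated 'imitation effect':", coef[2])      # far from the true value 0
```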

  3. Estimation of Nonlinear Dynamic Panel Data Models with Individual Effects

    Directory of Open Access Journals (Sweden)

    Yi Hu

    2014-01-01

This paper suggests a generalized method of moments (GMM) based estimation for dynamic panel data models with individual specific fixed effects and threshold effects simultaneously. We extend Hansen's (1999) original setup to models including endogenous regressors, specifically, lagged dependent variables. To address the endogeneity problem of these nonlinear dynamic panel data models, we prove that the orthogonality conditions proposed by Arellano and Bond (1991) are valid. The threshold and slope parameters are estimated by GMM, and the asymptotic distribution of the slope parameters is derived. Finite sample performance of the estimation is investigated through Monte Carlo simulations. The simulations show that the threshold and slope parameters can be estimated accurately and that the finite sample distribution of the slope parameters is well approximated by the asymptotic distribution.
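The orthogonality conditions at the heart of this approach can be illustrated in a few lines. The sketch below simulates an AR(1) panel with fixed effects and applies the single-instrument special case, equivalent to the Anderson-Hsiao IV estimator; full Arellano-Bond GMM stacks many such moments per period. All values are illustrative.

```python
# Sketch of the moment condition E[y_{i,t-2} * d_eps_{it}] = 0: after
# first-differencing away the fixed effects, y_{i,t-2} instruments the
# endogenous lagged difference. One instrument gives Anderson-Hsiao IV.
import numpy as np

rng = np.random.default_rng(2)
N, T, rho = 500, 6, 0.4
alpha = rng.normal(size=N)                    # individual fixed effects
y = np.zeros((N, T))
y[:, 0] = alpha + rng.normal(size=N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(size=N)

dy  = y[:, 2:] - y[:, 1:-1]    # first differences dy_t,  t = 2..T-1
dy1 = y[:, 1:-1] - y[:, :-2]   # lagged differences dy_{t-1}
z   = y[:, :-2]                # instrument: the level y_{t-2}

rho_hat = np.sum(z * dy) / np.sum(z * dy1)   # IV estimate from the moment
print("true rho:", rho, "IV estimate:", rho_hat)
```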

  4. Efficient estimation of feedback effects with application to climate models

    International Nuclear Information System (INIS)

Cacuci, D.G.; Hall, M.C.G.

    1984-01-01

This work presents an efficient method for calculating the sensitivity of a mathematical model's result to feedback. Feedback is defined in terms of an operator acting on the model's dependent variables. The sensitivity to feedback is defined as a functional derivative, and a method is presented to evaluate this derivative using adjoint functions. Typically, this method allows the individual effects of many different feedbacks to be estimated with a total additional computing time comparable to only one recalculation. The effects on a CO2-doubling experiment of actually incorporating surface albedo and water vapor feedbacks in a radiative-convective model are compared with sensitivities calculated using adjoint functions. These sensitivities predict the actual effects of feedback with at least the correct sign and order of magnitude. It is anticipated that this method of estimating the effect of feedback will be useful for more complex models, where extensive recalculation for each of a variety of different feedbacks is impractical.
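A toy version of the adjoint idea, under the simplifying assumption of a steady linear model rather than a climate model: a single adjoint solve prices the sensitivity of a response to arbitrarily many feedback perturbations, with no forward recalculation per feedback.

```python
# Our own illustration, not the paper's model: for A(eps) x = b and response
# R = c^T x, one adjoint solve A^T lam = c gives dR/deps = -lam^T (dA/deps) x
# for ANY feedback operator F (A -> A + eps * F), so many feedbacks can be
# screened for the cost of one extra linear solve.
import numpy as np

rng = np.random.default_rng(3)
n = 50
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))   # base model operator
b = rng.normal(size=n)
c = rng.normal(size=n)

x = np.linalg.solve(A, b)        # one forward solve
lam = np.linalg.solve(A.T, c)    # one adjoint solve

for k in range(3):               # sensitivity to several feedbacks F_k
    F = rng.normal(size=(n, n))
    dR_adjoint = -lam @ (F @ x)
    eps = 1e-6                   # check against direct recalculation
    dR_direct = (c @ np.linalg.solve(A + eps * F, b) - c @ x) / eps
    print(k, dR_adjoint, dR_direct)
```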

  5. Estimation and Inference for Very Large Linear Mixed Effects Models

    OpenAIRE

    Gao, K.; Owen, A. B.

    2016-01-01

Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are $N$ observations. Such problems arise in any setting where the underlying factors satisfy a many-to-many relationship (instead of a nested one), and in electronic commerce applications N can be quite large. Methods that do not account for the correlation structure can...

  6. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

  7. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of the common parameters when some true exogenous regressors are excluded. We propose a data dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian model averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.

  8. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models.

    Science.gov (United States)

    Wang, Wei; Griswold, Michael E

    2016-11-30

The random effect Tobit model is a regression model that accommodates left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference about overall exposure effects on the original outcome scale. The marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response, to estimate overall exposure effects at the population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure statuses in a designated reference group by integrating over the random effects, and then use the calculated difference to assess the overall exposure effect. Maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration over the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
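The marginalization step can be sketched numerically. Assuming a left-censored-at-zero outcome with a normal random intercept, the overall mean on the original scale integrates the conditional Tobit mean over the random effect with Gauss-Hermite quadrature; all parameter values below are hypothetical, not estimates from the paper.

```python
# Sketch of marginalizing a random effect Tobit mean via Gauss-Hermite
# quadrature (illustrative parameters only).
import numpy as np
from scipy.stats import norm

def tobit_mean(mu, sigma):
    """E[max(0, W)] for W ~ N(mu, sigma^2)."""
    z = mu / sigma
    return mu * norm.cdf(z) + sigma * norm.pdf(z)

def marginal_mean(mu, sigma_e, tau, n_nodes=30):
    # hermgauss integrates exp(-t^2) f(t); substitute u = sqrt(2) * tau * t
    t, w = np.polynomial.hermite.hermgauss(n_nodes)
    u = np.sqrt(2.0) * tau * t
    return np.sum(w * tobit_mean(mu + u, sigma_e)) / np.sqrt(np.pi)

# overall exposure effect = difference of marginal means under the two
# exposure statuses (coefficients here are invented for illustration)
beta0, beta_exp, sigma_e, tau = 0.2, 0.8, 1.0, 0.7
effect = (marginal_mean(beta0 + beta_exp, sigma_e, tau)
          - marginal_mean(beta0, sigma_e, tau))
print("overall exposure effect on the original scale:", effect)
```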

  9. School Processes Mediate School Compositional Effects: Model Specification and Estimation

    Science.gov (United States)

    Liu, Hongqiang; Van Damme, Jan; Gielen, Sarah; Van Den Noortgate, Wim

    2015-01-01

    School composition effects have been consistently verified, but few studies ever attempted to study how school composition affects school achievement. Based on prior research findings, we employed multilevel mediation modeling to examine whether school processes mediate the effect of school composition upon school outcomes based on the data of 28…

  10. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...

  11. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    International Nuclear Information System (INIS)

    Elert, M.

    1996-09-01

In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead, simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions in many cases are very similar, e.g., in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come into focus in this study: There are large differences in the predicted soil hydrology and, as a consequence, also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root

  12. Effective single scattering albedo estimation using regional climate model

    CSIR Research Space (South Africa)

    Tesfaye, M

    2011-09-01

In this study, by modifying the optical parameterization of the Regional Climate Model (RegCM), the authors have computed and compared the Effective Single-Scattering Albedo (ESSA), which is representative of the VIS spectral region. The arid, semi...

  13. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

Elert, M. [Kemakta Konsult AB, Stockholm (Sweden)] [ed.]

    1996-09-01

In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead, simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions in many cases are very similar, e.g., in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come into focus in this study: There are large differences in the predicted soil hydrology and, as a consequence, also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root

  14. Model Specifications for Estimating Labor Market Returns to Associate Degrees: How Robust Are Fixed Effects Estimates? A CAPSEE Working Paper

    Science.gov (United States)

    Belfield, Clive; Bailey, Thomas

    2017-01-01

    Recently, studies have adopted fixed effects modeling to identify the returns to college. This method has the advantage over ordinary least squares estimates in that unobservable, individual-level characteristics that may bias the estimated returns are differenced out. But the method requires extensive longitudinal data and involves complex…

  15. E-Model MOS Estimate Precision Improvement and Modelling of Jitter Effects

    Directory of Open Access Journals (Sweden)

    Adrian Kovac

    2012-01-01

This paper deals with the ITU-T E-model, which is used for non-intrusive MOS VoIP call quality estimation on IP networks. The pros of the E-model are computational simplicity and usability on real-time traffic. The cons, as shown in our previous work, are the inability of the E-model to reflect the effects of network jitter present in real traffic flows and of jitter-buffer behavior on end user devices. These effects are visible mostly in traffic over WAN, internet and radio networks, and cause the E-model MOS call quality estimate to be noticeably too optimistic. In this paper, we propose a modification to the E-model using the previously proposed Pplef (effective packet loss), based on a jitter and jitter-buffer model built on a Pareto/D/1/K system. We subsequently optimize the newly added parameters reflecting jitter effects in the E-model by using the PESQ intrusive measurement method as a reference for selected audio codecs. Function fitting and parameter optimization are performed under varying delay, packet loss, jitter and different jitter-buffer sizes for both correlated and uncorrelated long-tailed network traffic.
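For orientation, here is a bare-bones version of the standard E-model computation that the paper modifies: a simplified R-factor reduced by delay and effective packet-loss impairments, then mapped to MOS with the ITU-T G.107 conversion. The parameter values are illustrative, the Ie_eff form below is the random-loss case, and the paper's jitter correction would enter through the effective packet loss term (Pplef).

```python
# Simplified E-model sketch (illustrative defaults, not the paper's fitted
# parameters). R starts from ~93.2, is reduced by impairments, and is mapped
# to MOS with the standard G.107 polynomial.
def mos_from_r(R):
    if R < 0:
        return 1.0
    if R > 100:
        return 4.5
    return 1 + 0.035 * R + 7e-6 * R * (R - 60) * (100 - R)

def e_model_mos(Id, Ie, Ppl, Bpl=25.1):
    """Id: delay impairment, Ie: codec impairment, Ppl: packet loss in %,
    Bpl: packet-loss robustness of the codec (illustrative default)."""
    Ie_eff = Ie + (95 - Ie) * Ppl / (Ppl + Bpl)   # random (uncorrelated) loss
    R = 93.2 - Id - Ie_eff                        # simplified R-factor
    return mos_from_r(R)

print(e_model_mos(Id=10, Ie=11, Ppl=2.0))  # e.g. a G.729-like codec, 2% loss
```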

  16. Estimating the Effects of Parental Divorce and Death With Fixed Effects Models

    OpenAIRE

    Amato, Paul R.; Anthony, Christopher J.

    2014-01-01

    The authors used child fixed effects models to estimate the effects of parental divorce and death on a variety of outcomes using 2 large national data sets: (a) the Early Childhood Longitudinal Study, Kindergarten Cohort (kindergarten through the 5th grade) and (b) the National Educational Longitudinal Study (8th grade to the senior year of high school). In both data sets, divorce and death were associated with multiple negative outcomes among children. Although evidence for a causal effect o...

  17. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

Hart, Jeffrey D.; Cañette, Isabel

    2011-01-01

    to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article

  18. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.

    Science.gov (United States)

    Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria

    2010-08-06

Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution of the product method, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended; in contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
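The percentile bootstrap recommended above is compact enough to sketch. The mediation data here are simulated, and the code covers only the simplest single-mediator model, not the full range of conditions the article evaluates.

```python
# Percentile-bootstrap CI for an indirect effect a*b in the simple mediation
# model X -> M -> Y (simulated data, invented path coefficients).
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)             # a-path = 0.4
y = 0.3 * m + 0.2 * x + rng.normal(size=n)   # b-path = 0.3

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # slope of M on X
    Xmat = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(Xmat, y, rcond=None)[0][1]    # slope of Y on M given X
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                       # resample cases
    boot.append(indirect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect {indirect(x, m, y):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```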

  19. A Simple Method to Estimate Large Fixed Effects Models Applied to Wage Determinants and Matching

    OpenAIRE

    Mittag, Nikolas

    2016-01-01

Models with high dimensional sets of fixed effects are frequently used to examine, among other things, linked employer-employee data, student outcomes and migration. Estimating these models is computationally difficult, so simplifying assumptions that are likely to cause bias are often invoked to make computation feasible, and specification tests are rarely conducted. I present a simple method to estimate large two-way fixed effects (TWFE) and worker-firm match effect models without additional assum...

  20. Semi-parametric estimation of random effects in a logistic regression model using conditional inference

    DEFF Research Database (Denmark)

    Petersen, Jørgen Holm

    2016-01-01

    This paper describes a new approach to the estimation in a logistic regression model with two crossed random effects where special interest is in estimating the variance of one of the effects while not making distributional assumptions about the other effect. A composite likelihood is studied...

  1. Estimating the Effects of Parental Divorce and Death With Fixed Effects Models.

    Science.gov (United States)

    Amato, Paul R; Anthony, Christopher J

    2014-04-01

    The authors used child fixed effects models to estimate the effects of parental divorce and death on a variety of outcomes using 2 large national data sets: (a) the Early Childhood Longitudinal Study, Kindergarten Cohort (kindergarten through the 5th grade) and (b) the National Educational Longitudinal Study (8th grade to the senior year of high school). In both data sets, divorce and death were associated with multiple negative outcomes among children. Although evidence for a causal effect of divorce on children was reasonably strong, effect sizes were small in magnitude. A second analysis revealed a substantial degree of variability in children's outcomes following parental divorce, with some children declining, others improving, and most not changing at all. The estimated effects of divorce appeared to be strongest among children with the highest propensity to experience parental divorce.

  2. Estimating the Effects of Parental Divorce and Death With Fixed Effects Models

    Science.gov (United States)

    Amato, Paul R.; Anthony, Christopher J.

    2014-01-01

    The authors used child fixed effects models to estimate the effects of parental divorce and death on a variety of outcomes using 2 large national data sets: (a) the Early Childhood Longitudinal Study, Kindergarten Cohort (kindergarten through the 5th grade) and (b) the National Educational Longitudinal Study (8th grade to the senior year of high school). In both data sets, divorce and death were associated with multiple negative outcomes among children. Although evidence for a causal effect of divorce on children was reasonably strong, effect sizes were small in magnitude. A second analysis revealed a substantial degree of variability in children’s outcomes following parental divorce, with some children declining, others improving, and most not changing at all. The estimated effects of divorce appeared to be strongest among children with the highest propensity to experience parental divorce. PMID:24659827

  3. Estimating dose painting effects in radiotherapy: a mathematical model.

    Directory of Open Access Journals (Sweden)

    Juan Carlos López Alfonso

Tumor heterogeneity is widely considered to be a determinant factor in tumor progression and, in particular, in its recurrence after therapy. Unfortunately, current medical techniques are unable to deduce clinically relevant information about tumor heterogeneity by means of non-invasive methods. As a consequence, when radiotherapy is used as a treatment of choice, radiation dosimetries are prescribed under the assumption that the malignancy targeted is of a homogeneous nature. In this work we discuss the effects of different radiation dose distributions on heterogeneous tumors by means of an individual cell-based model. To that end, a case is considered where two tumor cell phenotypes are present, which we assume to differ strongly in their respective cell cycle duration and radiosensitivity properties. We show herein that, as a result of such differences, the spatial distribution of the corresponding phenotypes, and hence the resulting tumor heterogeneity, can be predicted as growth proceeds. In particular, we show that if we start from a situation where a majority of ordinary cancer cells (CCs) and a minority of cancer stem cells (CSCs) are randomly distributed, and we assume that the CSC cycle is significantly longer than that of CCs, then CSCs become concentrated in an inner region as the tumor grows. As a consequence, if CSCs are assumed to be more resistant to radiation than CCs, heterogeneous dosimetries can be selected to enhance tumor control by boosting radiation in the region occupied by the more radioresistant tumor cell phenotype. It is also shown that, when compared with homogeneous dose distributions such as those currently delivered in clinical practice, such heterogeneous radiation dosimetries always fare better than their homogeneous counterparts. Finally, limitations to our assumptions and their resulting clinical implications will be discussed.

  4. Model ecosystem approach to estimate community level effects of radiation

    Energy Technology Data Exchange (ETDEWEB)

Doi, Masahiro; Tanaka, Nobuyuki; Fuma, Shoichi; Ishii, Nobuyoshi; Takeda, Hiroshi; Kawabata, Zenichiro [National Institute of Radiological Sciences, Environmental and Toxicological Sciences Research Group, Chiba (Japan)]

    2004-07-01

A mathematical computer model is developed to simulate the population dynamics and dynamic mass budgets of a microbial community realized as a self-sustainable aquatic ecological system in a tube. Autotrophic algae, heterotrophic protozoa and saprotrophic bacteria live symbiotically, with inter-species interactions such as predator-prey relationships, competition for common resources, autolysis of detritus and a detritus-grazing food chain. The simulation model is an individual-based parallel model that incorporates demographic stochasticity and, by dividing the aquatic environment into patches, environmental stochasticity. The validity of the model is checked against the multifaceted data of the microcosm experiments. In the analysis, intrinsic parameters of umbrella endpoints (lethality, morbidity, reproductive growth, mutation) are manipulated at the individual level in an attempt to find population-level, community-level and ecosystem-level disorders of ecologically crucial parameters (e.g. intrinsic growth rate, carrying capacity, variation, etc.) related to the probability of population extinction. (author)

  5. Model ecosystem approach to estimate community level effects of radiation

    International Nuclear Information System (INIS)

Doi, Masahiro; Tanaka, Nobuyuki; Fuma, Shoichi; Ishii, Nobuyoshi; Takeda, Hiroshi; Kawabata, Zenichiro

    2004-01-01

A mathematical computer model is developed to simulate the population dynamics and dynamic mass budgets of a microbial community realized as a self-sustainable aquatic ecological system in a tube. Autotrophic algae, heterotrophic protozoa and saprotrophic bacteria live symbiotically, with inter-species interactions such as predator-prey relationships, competition for common resources, autolysis of detritus and a detritus-grazing food chain. The simulation model is an individual-based parallel model that incorporates demographic stochasticity and, by dividing the aquatic environment into patches, environmental stochasticity. The validity of the model is checked against the multifaceted data of the microcosm experiments. In the analysis, intrinsic parameters of umbrella endpoints (lethality, morbidity, reproductive growth, mutation) are manipulated at the individual level in an attempt to find population-level, community-level and ecosystem-level disorders of ecologically crucial parameters (e.g. intrinsic growth rate, carrying capacity, variation, etc.) related to the probability of population extinction. (author)

  6. Effect of heteroscedasticity treatment in residual error models on model calibration and prediction uncertainty estimation

    Science.gov (United States)

    Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli

    2017-11-01

The heteroscedasticity treatment in residual error models directly impacts model calibration and prediction uncertainty estimation. This study compares three methods of dealing with heteroscedasticity: the explicit linear modeling (LM) method, a nonlinear modeling (NL) method using a hyperbolic tangent function, and the implicit Box-Cox transformation (BC). A combined approach (CA), which combines the advantages of the LM and BC methods, is then proposed. In conjunction with a first order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that LM-SEP yields the poorest streamflow predictions, with the widest uncertainty band and unrealistic negative flows. The NL and BC methods better handle the heteroscedasticity and hence improve the corresponding predictive performance, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
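A minimal sketch of the Box-Cox ingredient, with a plain Gaussian error standing in for the AR(1)-SEP machinery used in the paper: residuals are formed on the transformed scale and the Jacobian of the transformation is kept in the log-likelihood. Data and the λ value are invented for illustration.

```python
# Box-Cox residual error sketch: transform observed and simulated flows,
# assume near-homoscedastic Gaussian errors on the transformed scale, and
# keep the Jacobian term (lam - 1) * sum(log(obs)) in the log-likelihood.
import numpy as np

def boxcox(q, lam=0.2):
    return (q**lam - 1.0) / lam if lam != 0 else np.log(q)

def bc_loglik(obs, sim, sigma, lam=0.2):
    resid = boxcox(obs, lam) - boxcox(sim, lam)
    ll = (-0.5 * np.sum((resid / sigma) ** 2)
          - obs.size * np.log(sigma)
          - 0.5 * obs.size * np.log(2 * np.pi))
    ll += (lam - 1.0) * np.sum(np.log(obs))   # Jacobian of the transform
    return ll

rng = np.random.default_rng(5)
sim = rng.gamma(3.0, 10.0, size=300)                  # "simulated" flows
obs = sim * np.exp(rng.normal(scale=0.1, size=300))   # heteroscedastic obs
sigma = np.std(boxcox(obs) - boxcox(sim))
print("Box-Cox log-likelihood:", bc_loglik(obs, sim, sigma))
```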

  7. Estimating safety effects of pavement management factors utilizing Bayesian random effect models.

    Science.gov (United States)

    Jiang, Ximiao; Huang, Baoshan; Zaretzki, Russell L; Richards, Stephen; Yan, Xuedong

    2013-01-01

Previous studies of pavement management factors that relate to the occurrence of traffic-related crashes are rare. Traditional research has mostly employed summary statistics of bidirectional pavement quality measurements in extended longitudinal road segments over a long time period, which may cause a loss of important information and result in biased parameter estimates. The research presented in this article focuses on the crash risk of roadways with overall fair to good pavement quality. Real-time and location-specific data were employed to estimate the effects of pavement management factors on the occurrence of crashes. This research is based on the crash data and corresponding pavement quality data for the Tennessee state route highways from 2004 to 2009. The potential temporal and spatial correlations among observations caused by unobserved factors were considered. Overall, six models were built, accounting for no correlation, temporal correlation only, and both temporal and spatial correlations. These models included Poisson, negative binomial (NB), one random effect Poisson and negative binomial (OREP, ORENB), and two random effect Poisson and negative binomial (TREP, TRENB) models. The Bayesian method was employed to construct these models. The inference is based on the posterior distribution from the Markov chain Monte Carlo (MCMC) simulation. These models were compared using the deviance information criterion. Analysis of the posterior distribution of parameter coefficients indicates that the pavement management factors indexed by Present Serviceability Index (PSI) and Pavement Distress Index (PDI) had significant impacts on the occurrence of crashes, whereas the variable rutting depth was not significant. Among other factors, lane width, median width, type of terrain, and posted speed limit were significant in affecting crash frequency. The findings of this study indicate that a reduction in pavement roughness would reduce the likelihood of traffic

  8. Robust estimation and moment selection in dynamic fixed-effects panel data models

    NARCIS (Netherlands)

    Cizek, Pavel; Aquaro, Michele

    Considering linear dynamic panel data models with fixed effects, existing outlier–robust estimators based on the median ratio of two consecutive pairs of first-differenced data are extended to higher-order differencing. The estimation procedure is thus based on many pairwise differences and their

  9. Bayesian Modeling for Identification and Estimation of the Learning Effects of Pointing Tasks

    Science.gov (United States)

    Kyo, Koki

Recently, in the field of human-computer interaction, a model containing a systematic factor and a human factor has been proposed to evaluate the performance of computer input devices; it is called the SH-model. In this paper, in order to extend the range of application of the SH-model, we propose some new models based on the Box-Cox transformation and apply a Bayesian modeling method for identification and estimation of the learning effects of pointing tasks. We consider the parameters describing the learning effect as random variables and introduce smoothness priors for them. Illustrative results show that the newly proposed models work well.

  10. A Bayesian localized conditional autoregressive model for estimating the health effects of air pollution.

    Science.gov (United States)

    Lee, Duncan; Rushworth, Alastair; Sahu, Sujit K

    2014-06-01

    Estimation of the long-term health effects of air pollution is a challenging task, especially when modeling spatial small-area disease incidence data in an ecological study design. The challenge comes from the unobserved underlying spatial autocorrelation structure in these data, which is accounted for using random effects modeled by a globally smooth conditional autoregressive model. These smooth random effects confound the effects of air pollution, which are also globally smooth. To avoid this collinearity a Bayesian localized conditional autoregressive model is developed for the random effects. This localized model is flexible spatially, in the sense that it is not only able to model areas of spatial smoothness, but also it is able to capture step changes in the random effects surface. This methodological development allows us to improve the estimation performance of the covariate effects, compared to using traditional conditional auto-regressive models. These results are established using a simulation study, and are then illustrated with our motivating study on air pollution and respiratory ill health in Greater Glasgow, Scotland in 2011. The model shows substantial health effects of particulate matter air pollution and nitrogen dioxide, whose effects have been consistently attenuated by the currently available globally smooth models. © 2014, The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.

  11. Meta-analysis of choice set generation effects on route choice model estimates and predictions

    DEFF Research Database (Denmark)

    Prato, Carlo Giacomo

    2012-01-01

Large scale applications of behaviorally realistic transport models pose several challenges to transport modelers on both the demand and the supply sides. On the supply side, path-based solutions to the user assignment equilibrium problem help modelers in enhancing route choice behavior modeling, but require them to generate choice sets by selecting a path generation technique and its parameters according to personal judgments. This paper proposes a methodology and an experimental setting to provide general indications about objective judgments for effective route choice set generation. Objective choice sets are applied for model estimation and the results are compared to the 'true model estimates'. Last, predictions from the simulation of models estimated with objective choice sets are compared to the 'postulated predicted routes'. A meta-analytical approach allows synthesizing the effect of judgments.

12. Effects of Ocean Tide Models on GNSS-Estimated ZTD and PWV in Turkey

    Directory of Open Access Journals (Sweden)

    G. Gurbuz

    2015-12-01

Global Navigation Satellite System (GNSS) observations can precisely estimate the total zenith tropospheric delay (ZTD) and precipitable water vapour (PWV) for weather prediction and atmospheric research, as a continuous and all-weather technique. However, apart from the GNSS technique itself, estimates of ZTD and PWV are subject to the effects of geophysical models with large uncertainties, particularly imprecise ocean tide models in Turkey. In this paper, GNSS data from Jan. 1st to Dec. 31st of 2014 are processed at 4 co-located GNSS stations (GISM, DIYB, GANM, and ADAN) with radiosonde from the Turkish Met-Office, along with several nearby IGS stations. The GAMIT/GLOBK software has been used to process 30-second sampled GNSS data, using the Vienna Mapping Function and a 10° elevation cut-off angle. Tidal and non-tidal atmospheric pressure loading (ATML) at the observation level is also applied in GAMIT/GLOBK. Several widely used ocean tide models are evaluated for their effects on GNSS-estimated ZTD and PWV: the IERS-recommended FES2004; NAO99b, from a barotropic hydrodynamic model; CSR4.0, obtained from TOPEX/Poseidon altimetry with FES94.1 as the reference model; and GOT00, which again consists of long-wavelength adjustments of FES94.1 using TOPEX/Poseidon data on a 0.5 by 0.5 degree grid. The ZTD and PWV computed from radiosonde profile observations are regarded as reference values for the comparison and validation. In the processing phase, five different strategies are taken: without an ocean tide model and with each of the four aforementioned ocean tide models, evaluating their effects on GNSS-estimated ZTD and PWV through comparison with the co-located radiosonde. Results showed that ocean tide models greatly affect the estimation of ZTD at the centimeter level, and thus PWV at the millimeter level, at stations near coasts. The ocean tide model FES2004 that is

  13. Estimating effectiveness in HIV prevention trials with a Bayesian hierarchical compound Poisson frailty model

    Science.gov (United States)

Coley, Rebecca Yates; Brown, Elizabeth R.

    2016-01-01

    Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows that some participants are not exposed to HIV and, therefore, have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. PMID:26869051

  14. Robust Estimation and Moment Selection in Dynamic Fixed-effects Panel Data Models

    NARCIS (Netherlands)

    Cizek, P.; Aquaro, M.

    2015-01-01

    This paper extends an existing outlier-robust estimator of linear dynamic panel data models with fixed effects, which is based on the median ratio of two consecutive pairs of first-differenced data. To improve its precision and robust properties, a general procedure based on many pairwise

  15. Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood

    NARCIS (Netherlands)

    Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.

    2011-01-01

    Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are

  16. A Comparison of Methods for Estimating Quadratic Effects in Nonlinear Structural Equation Models

    Science.gov (United States)

    Harring, Jeffrey R.; Weiss, Brandi A.; Hsu, Jui-Chen

    2012-01-01

    Two Monte Carlo simulations were performed to compare methods for estimating and testing hypotheses of quadratic effects in latent variable regression models. The methods considered in the current study were (a) a 2-stage moderated regression approach using latent variable scores, (b) an unconstrained product indicator approach, (c) a latent…

  17. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    International Nuclear Information System (INIS)

    Rupšys, P.

    2015-01-01

A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.

  18. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    Energy Technology Data Exchange (ETDEWEB)

    Rupšys, P. [Aleksandras Stulginskis University, Studenų g. 11, Akademija, Kaunas district, LT – 53361 Lithuania (Lithuania)

    2015-10-28

A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.

  19. Fuel Burn Estimation Model

    Science.gov (United States)

    Chatterji, Gano

    2011-01-01

Conclusions: The fuel estimation procedure was validated using flight test data. A good fuel model can be created if weight and fuel data are available. Error in the assumed takeoff weight results in a similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.

  20. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636

  1. Simple, efficient estimators of treatment effects in randomized trials using generalized linear models to leverage baseline variables.

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J

    2010-04-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.
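The special case described above is easy to check by simulation: with randomized treatment and a deliberately misspecified main-terms Poisson working model, the treatment coefficient still recovers the marginal log rate ratio. The setup below (true log rate ratio 0.3, covariate entering through |w|) is our own construction, not the paper's example.

```python
# Simulation sketch: treatment coefficient of a misspecified main-terms
# Poisson working model consistently estimates the marginal log rate ratio
# in a randomized trial.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 20000
a = rng.integers(0, 2, n)            # randomized treatment
w = rng.normal(size=n)               # baseline covariate
rate = np.exp(0.3 * a + np.abs(w))   # truth uses |w|: main-terms model is wrong
y = rng.poisson(rate)

# Because a is independent of w, E[Y|A=1]/E[Y|A=0] = exp(0.3) exactly, so the
# true marginal log rate ratio is 0.3.
X = sm.add_constant(np.column_stack([a, w]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print("true marginal log rate ratio: 0.3, working-model estimate:",
      fit.params[1])
```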

  2. ξ common cause failure model and method for defense effectiveness estimation

    International Nuclear Information System (INIS)

    Li Zhaohuan

    1991-08-01

    Two issues have been dealt. One is to develop an event based parametric model called ξ-CCF model. Its parameters are expressed in the fraction of the progressive multiplicities of failure events. By these expressions, the contribution of each multiple failure can be presented more clearly. It can help to select defense tactics against common cause failures. The other is to provide a method which is based on the operational experience and engineering judgement to estimate the effectiveness of defense tactics. It is expressed in terms of reduction matrix for a given tactics on a specific plant in the event by event form. The application of practical example shows that the model in cooperation with the method can simply estimate the effectiveness of defense tactics. It can be easily used by the operators and its application may be extended

  3. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.

  4. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
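The sampling-variance mechanism is easy to reproduce. The sketch below uses an invented three-stage projection matrix, not the study's data: it draws binomial estimates of stage survival for increasing sample sizes and tracks the bias in the dominant eigenvalue.

```python
# Jensen's-inequality bias in lambda: unbiased binomial estimates of survival
# still give biased estimates of the (nonlinear) dominant eigenvalue when
# samples are small. Matrix structure and rates are illustrative.
import numpy as np

rng = np.random.default_rng(7)
surv_true = np.array([0.5, 0.6, 0.7])   # stage survival probabilities
fert = 2.0                              # fertility of the last stage

def lam(surv):
    A = np.zeros((3, 3))
    A[0, 2] = fert                       # reproduction
    A[1, 0], A[2, 1] = surv[0], surv[1]  # growth/transition
    A[2, 2] = surv[2]                    # stasis
    return np.max(np.real(np.linalg.eigvals(A)))

lam_true = lam(surv_true)
for n in (10, 50, 500):                  # individuals sampled per stage
    est = [lam(rng.binomial(n, surv_true) / n) for _ in range(2000)]
    print(n, "mean bias in lambda:", np.mean(est) - lam_true)
```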

  5. A kinematic model to estimate effective dose of radioactive substances in a human body

    Science.gov (United States)

    Sasaki, S.; Yamada, T.

    2013-05-01

The great earthquake occurred in the northeast area of Japan on March 11, 2011. The systems controlling the Fukushima Daiichi nuclear power station were completely destroyed by the giant tsunami that followed. From the damaged reactor containment vessels, a quantity of radioactive substances leaked and diffused in the vicinity of the station. Radiological internal exposure became a serious social issue both in Japan and all over the world. The present study provides an easily understandable, kinematics-based model to estimate the effective dose of radioactive substances in a human body by simplifying the complicated mechanism of metabolism. The International Commission on Radiological Protection (ICRP) has developed a sophisticated model, which is well known as the standard method to calculate the effective dose for radiological protection. However, because the ICRP method is so detailed, it is rather difficult for non-specialists in radiology to grasp the whole picture of the movement and influence of radioactive substances in a human body. Therefore, in the present paper we propose a newly derived and easily understandable model to estimate the effective dose. The present method is very similar to the traditional and conventional tank model in hydrology. The ingestion flux of radioactive substances corresponds to rain intensity, and the storage of radioactive substances to the water storage in a basin in runoff analysis. The key of the present method is to estimate the energy radiated in the radioactive nuclear disintegration of an atom by using the classical theory of β decay and special relativity for various kinds of radioactive atoms. The only parameters used in this model are the physical half-life and the biological half-life; there are no operational parameters or coefficients to adjust our theoretical results to the ICRP values. The figure shows the time-varying effective dose with ingestion duration, and we can confirm the validity of our model. The time-varying effective dose with
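Although the model described above tracks ingestion as a continuous flux, its single-compartment core reduces, for a single intake, to the familiar effective half-life relation 1/T_eff = 1/T_phys + 1/T_bio. The following sketch uses illustrative Cs-137-like constants, not values from the paper.

```python
# Single-compartment "tank" sketch: first-order physical decay plus
# biological elimination. Retention after a unit intake falls with the
# effective half-life; committed dose is proportional to its time integral.
import numpy as np

T_phys = 30.0 * 365.0   # physical half-life, days (Cs-137-like)
T_bio = 70.0            # biological half-life, days (illustrative)
T_eff = 1.0 / (1.0 / T_phys + 1.0 / T_bio)
lam_eff = np.log(2) / T_eff

t = np.linspace(0, 365, 1000)                      # days after a unit intake
retention = np.exp(-lam_eff * t)                   # fraction remaining
committed = (1 - np.exp(-lam_eff * t)) / lam_eff   # integral of retention

print("effective half-life (days):", T_eff)
print("fraction of total committed dose accrued within 1 year:",
      committed[-1] / (1.0 / lam_eff))
```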

  6. More Precise Estimation of Lower-Level Interaction Effects in Multilevel Models.

    Science.gov (United States)

    Loeys, Tom; Josephy, Haeike; Dewitte, Marieke

    2018-01-01

    In hierarchical data, the effect of a lower-level predictor on a lower-level outcome may often be confounded by an (un)measured upper-level factor. When such confounding is left unaddressed, the effect of the lower-level predictor is estimated with bias. Separating this effect into a within- and between-component removes such bias in a linear random intercept model under a specific set of assumptions for the confounder. When the effect of the lower-level predictor is additionally moderated by another lower-level predictor, an interaction between both lower-level predictors is included into the model. To address unmeasured upper-level confounding, this interaction term ought to be decomposed into a within- and between-component as well. This can be achieved by first multiplying both predictors and centering that product term next, or vice versa. We show that while both approaches, on average, yield the same estimates of the interaction effect in linear models, the former decomposition is much more precise and robust against misspecification of the effects of cross-level and upper-level terms, compared to the latter.

  7. Infusion and sampling site effects on two-pool model estimates of leucine metabolism

    International Nuclear Information System (INIS)

    Helland, S.J.; Grisdale-Helland, B.; Nissen, S.

    1988-01-01

To assess the effect of the site of isotope infusion on estimates of leucine metabolism, infusions of alpha-[4,5-3H]ketoisocaproate (KIC) and [U-14C]leucine were made into the left or right ventricles of sheep and pigs. Blood was sampled from the opposite ventricle. In both species, left ventricular infusions resulted in significantly lower specific radioactivities (SA) of [14C]leucine and [3H]KIC. [14C]KIC SA was found to be insensitive to infusion and sampling sites. In addition, [14C]KIC SA was found to be equal to the SA of [14C]leucine only during the left heart infusions. Therefore, [14C]KIC SA was used as the only estimate of [14C] SA in the equations for the two-pool model. This model eliminated the influence of the site of infusion and blood sampling on the estimates of leucine entry and reduced the impact on the estimates of proteolysis and oxidation. This two-pool model could not compensate for the underestimation of transamination reactions occurring during the traditional venous isotope infusion and arterial blood sampling.

  8. A reduced theoretical model for estimating condensation effects in combustion-heated hypersonic tunnel

    Science.gov (United States)

    Lin, L.; Luo, X.; Qin, F.; Yang, J.

    2018-03-01

    As one of the combustion products of hydrocarbon fuels in a combustion-heated wind tunnel, water vapor may condense during the rapid expansion process, which will lead to a complex two-phase flow inside the wind tunnel and even change the design flow conditions at the nozzle exit. The coupling of the phase transition and the compressible flow makes the estimation of the condensation effects in such wind tunnels very difficult and time-consuming. In this work, a reduced theoretical model is developed to approximately compute the nozzle-exit conditions of a flow including real-gas and homogeneous condensation effects. Specifically, the conservation equations of the axisymmetric flow are first approximated in the quasi-one-dimensional way. Then, the complex process is split into two steps, i.e., a real-gas nozzle flow but excluding condensation, resulting in supersaturated nozzle-exit conditions, and a discontinuous jump at the end of the nozzle from the supersaturated state to a saturated state. Compared with two-dimensional numerical simulations implemented with a detailed condensation model, the reduced model predicts the flow parameters with good accuracy except for some deviations caused by the two-dimensional effect. Therefore, this reduced theoretical model can provide a fast, simple but also accurate estimation of the condensation effect in combustion-heated hypersonic tunnels.

  9. Dose-stochastic radiobiological effect relationship in model of two reactions and estimation of radiation risk

    International Nuclear Information System (INIS)

    Komochkov, M.M.

    1997-01-01

    The model of dose-stochastic effect relationship for biological systems capable of self-defence under danger factor effect is developed. A defence system is realized in two forms of organism reaction, which determine innate μ n and adaptive μ a radiosensitivities. The significances of μ n are determined by host (inner) factors; and the significances of μ a , by external factors. The possibilities of adaptive reaction are determined by the coefficient of capabilities of the defence system. The formulas of the dose-effect relationship are the solutions of differential equations of assumed process in the defence system of organism. The model and formulas have been checked both at cell and at human levels. Based on the model and personal monitoring data, the estimation of radiation risk at the Joint Institute for Nuclear Research is done

  10. Application of a multistate model to estimate culvert effects on movement of small fishes

    Science.gov (United States)

    Norman, J.R.; Hagler, M.M.; Freeman, Mary C.; Freeman, B.J.

    2009-01-01

    While it is widely acknowledged that culverted road-stream crossings may impede fish passage, effects of culverts on movement of nongame and small-bodied fishes have not been extensively studied and studies generally have not accounted for spatial variation in capture probabilities. We estimated probabilities for upstream and downstream movement of small (30-120 mm standard length) benthic and water column fishes across stream reaches with and without culverts at four road-stream crossings over a 4-6-week period. Movement and reach-specific capture probabilities were estimated using multistate capture-recapture models. Although none of the culverts were complete barriers to passage, only a bottomless-box culvert appeared to permit unrestricted upstream and downstream movements by benthic fishes based on model estimates of movement probabilities. At two box culverts that were perched above the water surface at base flow, observed movements were limited to water column fishes and to intervals when runoff from storm events raised water levels above the perched level. Only a single fish was observed to move through a partially embedded pipe culvert. Estimates for probabilities of movement over distances equal to at least the length of one culvert were low (e.g., generally ≤0.03, estimated for 1-2-week intervals) and had wide 95% confidence intervals as a consequence of few observed movements to nonadjacent reaches. Estimates of capture probabilities varied among reaches by a factor of 2 to over 10, illustrating the importance of accounting for spatially variable capture rates when estimating movement probabilities with capture-recapture data. Longer-term studies are needed to evaluate temporal variability in stream fish passage at culverts (e.g., in relation to streamflow variability) and to thereby better quantify the degree of population fragmentation caused by road-stream crossings with culverts. © American Fisheries Society 2009.
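
    The multistate machinery can be illustrated with a toy hidden-Markov forward recursion over three reaches; the movement and capture probabilities below are hypothetical, and survival is ignored for brevity:

        import numpy as np

        psi = np.array([[0.90, 0.08, 0.02],    # movement probabilities between reaches
                        [0.05, 0.90, 0.05],
                        [0.02, 0.08, 0.90]])
        p = np.array([0.4, 0.2, 0.3])          # reach-specific capture probabilities

        def history_likelihood(history, release_reach):
            """history: one entry per occasion, a reach index if captured or
            None if not captured; marginalizes over the unobserved reach."""
            alpha = np.zeros(3)
            alpha[release_reach] = 1.0
            for obs in history:
                alpha = alpha @ psi            # move between occasions
                if obs is None:
                    alpha *= (1.0 - p)         # not detected in any reach
                else:
                    keep = np.zeros(3)
                    keep[obs] = p[obs]         # detected, so the reach is known
                    alpha *= keep
            return alpha.sum()

        # released in reach 0, missed on occasion 1, recaptured in reach 1 on occasion 2
        print(history_likelihood([None, 1], release_reach=0))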

  11. Estimating the average treatment effect on survival based on observational data and using partly conditional modeling.

    Science.gov (United States)

    Gong, Qi; Schaubel, Douglas E

    2017-03-01

    Treatments are frequently evaluated in terms of their effect on patient survival. In settings where randomization of treatment is not feasible, observational data are employed, necessitating correction for covariate imbalances. Treatments are usually compared using a hazard ratio. Most existing methods which quantify the treatment effect through the survival function are applicable only to treatments assigned at time 0. In the data structure of interest, subjects typically begin follow-up untreated; time until treatment and the pretreatment death hazard are both heavily influenced by longitudinal covariates; and subjects may experience periods of treatment ineligibility. We propose semiparametric methods for estimating the average difference in restricted mean survival time attributable to a time-dependent treatment, that is, the average effect of treatment among the treated under current treatment assignment patterns. The pre- and posttreatment models are partly conditional, in that they use the covariate history up to the time of treatment. The pretreatment model is estimated through recently developed landmark analysis methods. For each treated patient, fitted pre- and posttreatment survival curves are projected out, then averaged in a manner which accounts for the censoring of treatment times. Asymptotic properties are derived and evaluated through simulation. The proposed methods are applied to liver transplant data in order to estimate the effect of liver transplantation on survival among transplant recipients under current practice patterns. © 2016, The International Biometric Society.
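
    The final averaging step can be sketched with placeholder exponential curves standing in for the fitted partly conditional pre- and post-treatment survival functions; the hazards and restriction time are hypothetical:

        import numpy as np

        tau = 5.0                               # restriction time (years)
        t = np.linspace(0.0, tau, 501)

        def rmst(surv):                         # area under the survival curve to tau
            return np.trapz(surv, t)

        rng = np.random.default_rng(1)
        gains = []
        for _ in range(100):                    # loop over "treated patients"
            lam_post = rng.uniform(0.10, 0.20)  # hypothetical post-treatment hazard
            lam_pre = rng.uniform(0.25, 0.40)   # hypothetical pre-treatment hazard
            gains.append(rmst(np.exp(-lam_post * t)) - rmst(np.exp(-lam_pre * t)))

        print(f"average gain in {tau}-year RMST: {np.mean(gains):.2f} years")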

  12. Effect of recent observations on Asian CO2 flux estimates by transport model inversions

    International Nuclear Information System (INIS)

    Maksyutov, Shamil; Patra, Prabir K.; Machida, Toshinobu; Mukai, Hitoshi; Nakazawa, Takakiyo; Inoue, Gen

    2003-01-01

    We use an inverse model to evaluate the effects of the recent CO2 observations over Asia on estimates of regional CO2 sources and sinks. The global CO2 flux distribution is evaluated using several atmospheric transport models, atmospheric CO2 observations and a 'time-independent' inversion procedure adopted in the basic synthesis inversion by the Transcom-3 inverse model intercomparison project. In our analysis we include airborne and tower observations in Siberia, continuous monitoring and airborne observations over Japan, and airborne monitoring on regular flights on the Tokyo-Sydney route. The inclusion of the new data reduces the uncertainty of the estimated regional CO2 fluxes for Boreal Asia (Siberia), Temperate Asia and South-East Asia. The largest effect is observed for the emission/sink estimate for the Boreal Asia region, where introducing the observations in Siberia reduces the source uncertainty by almost half. It also produces an uncertainty reduction for Boreal North America. Addition of the Siberian airborne observations leads to projecting extra sinks in Boreal Asia of 0.2 Pg C/yr, and a smaller change for Europe. The Tokyo-Sydney observations reduce and constrain the Southeast Asian source
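
    Why extra observations shrink the posterior flux uncertainty can be sketched with a toy Bayesian synthesis inversion; the transport sensitivities and covariances below are stand-ins, not Transcom-3 values:

        import numpy as np

        rng = np.random.default_rng(0)
        n_regions = 4
        B = np.eye(n_regions)                        # prior flux covariance, (Pg C/yr)^2

        def posterior_sd(n_obs):
            H = rng.normal(size=(n_obs, n_regions))  # stand-in transport sensitivities
            R = 0.5 * np.eye(n_obs)                  # observation error covariance
            A = np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(B))
            return np.sqrt(np.diag(A))               # posterior flux uncertainties

        print("10 observations:", posterior_sd(10).round(2))
        print("40 observations:", posterior_sd(40).round(2))  # more sites, smaller sd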

  13. The Additive Risk Model for Estimation of Effect of Haplotype Match in BMT Studies

    DEFF Research Database (Denmark)

    Scheike, Thomas; Martinussen, T; Zhang, MJ

    2011-01-01

    leads to a missing data problem. We show how Aalen's additive risk model can be applied in this setting, with the benefit that the time-varying haplomatch effect can be easily studied. This problem has not been considered before, and the standard approach, in which one would use the expectation-maximization (EM) algorithm, cannot be applied for this model because the likelihood is hard to evaluate without additional assumptions. We suggest an approach based on multivariate estimating equations that are solved using a recursive structure. This approach leads to an estimator whose large sample properties can be developed using product-integration theory. Small sample properties are investigated using simulations in a setting that mimics the motivating haplomatch problem.

  14. Estimation of direct effects for survival data by using the Aalen additive hazards model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Vansteelandt, Stijn; Gerster, Mette

    2011-01-01

    We extend the definition of the controlled direct effect of a point exposure on a survival outcome, other than through some given, time-fixed intermediate variable, to the additive hazard scale. We propose two-stage estimators for this effect when the exposure is dichotomous and randomly assigned. The first stage involves fitting Aalen's additive regression for the event time, given exposure, intermediate variable and confounders. The second stage involves applying Aalen's additive model, given the exposure alone, to a modified stochastic process (i.e., a modification of the observed counting process based on the first-stage estimates).

  15. A classical regression framework for mediation analysis: fitting one model to estimate mediation effects.

    Science.gov (United States)

    Saunders, Christina T; Blume, Jeffrey D

    2017-10-26

    Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.
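
    The EMC construction itself is not reproduced here; for contrast, the sketch below shows the classic two-model product-of-coefficients estimate with a first-order delta-method variance, assuming hypothetical columns y (outcome), x (exposure), and m (mediator):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("study_data.csv")            # assumed columns: y, x, m

        fit_m = smf.ols("m ~ x", data=df).fit()       # exposure -> mediator
        fit_y = smf.ols("y ~ x + m", data=df).fit()   # exposure + mediator -> outcome

        a, b = fit_m.params["x"], fit_y.params["m"]
        va, vb = fit_m.bse["x"] ** 2, fit_y.bse["m"] ** 2

        indirect = a * b                              # mediated (indirect) effect
        se = np.sqrt(b**2 * va + a**2 * vb)           # first-order delta method
        print(f"indirect effect = {indirect:.3f} (SE {se:.3f})")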

  16. Genetic Algorithms for Estimating Effective Parameters in a Lumped Reactor Model for Reactivity Predictions

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico

    2001-01-01

    The control system of a reactor should be able to predict, in real time, the amount of reactivity to be inserted (e.g., by control rod movements and boron injection and dilution) to respond to a given electrical load demand or to undesired, accidental transients. The real-time constraint renders impractical the use of a large, detailed dynamic reactor code. One has, then, to resort to simplified analytical models with lumped effective parameters suitably estimated from the reactor data.The simple and well-known Chernick model for describing the reactor power evolution in the presence of xenon is considered and the feasibility of using genetic algorithms for estimating the effective nuclear parameters involved and the initial nonmeasurable xenon and iodine conditions is investigated. This approach has the advantage of counterbalancing the inherent model simplicity with the periodic reestimation of the effective parameter values pertaining to each reactor on the basis of its recent history. By so doing, other effects, such as burnup, are automatically taken into account
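
    The estimation idea can be sketched with a minimal genetic algorithm fitting a toy exponential "power history"; the model, data, and GA settings are illustrative stand-ins, not the Chernick xenon equations:

        import numpy as np

        rng = np.random.default_rng(42)
        t = np.linspace(0.0, 10.0, 50)
        true = np.array([1.5, 0.3])
        data = true[0] * np.exp(-true[1] * t)             # synthetic "power" record

        def misfit(theta):
            return np.sum((theta[0] * np.exp(-theta[1] * t) - data) ** 2)

        pop = rng.uniform([0.1, 0.01], [5.0, 1.0], size=(60, 2))  # initial population
        for _ in range(100):
            fitness = np.array([misfit(ind) for ind in pop])
            parents = pop[np.argsort(fitness)[:20]]               # elitist selection
            children = parents[rng.integers(0, 20, 60)].copy()
            mask = rng.random((60, 2)) < 0.5                      # uniform crossover
            children[mask] = parents[rng.integers(0, 20, 60)][mask]
            children += rng.normal(0.0, 0.02, children.shape)     # mutation
            pop = children

        best = pop[np.argmin([misfit(ind) for ind in pop])]
        print("estimated parameters:", best.round(3), "true:", true)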

  17. Estimating the effect of childhood socioeconomic disadvantage on oral cancer in India using marginal structural models.

    Science.gov (United States)

    Krishna Rao, Sreevidya; Mejia, Gloria C; Roberts-Thomson, Kaye; Logan, Richard M; Kamath, Veena; Kulkarni, Muralidhar; Mittinty, Murthy N

    2015-07-01

    Early life socioeconomic disadvantage could affect adult health directly or indirectly. To the best of our knowledge, there are no studies of the direct effect of early life socioeconomic conditions on oral cancer occurrence in adult life. We conducted a multicenter, hospital-based, case-control study in India between 2011 and 2012 on 180 histopathologically confirmed incident oral and/or oropharyngeal cancer cases, aged 18 years or more, and 272 controls comprising visitors to the same hospitals who had not been diagnosed with any cancer. Life-course data were collected on socioeconomic conditions, risk factors, and parental behavior through interviews employing a life grid. The early life socioeconomic conditions measure was determined by the occupation of the head of household in childhood. Adult socioeconomic measures included the participant's education and the current occupation of the head of household. Marginal structural models with stabilized inverse probability weights were used to estimate the controlled direct effects of early life socioeconomic conditions on oral cancer. The total effect model showed that those in low socioeconomic conditions in the early years of childhood had a 60% (risk ratio [RR] = 1.6 [95% confidence interval {CI} = 1.4, 1.9]) increased risk of oral cancer. From the marginal structural models, the estimated risk of developing oral cancer among those in low early life socioeconomic conditions was 50% (RR = 1.5 [95% CI = 1.4, 1.5]), 20% (RR = 1.2 [95% CI = 0.9, 1.7]), and 90% (RR = 1.9 [95% CI = 1.7, 2.2]) greater than for those in high socioeconomic conditions when controlled for smoking, chewing, and alcohol, respectively. When all three mediators were controlled in a marginal structural model, the RR was 1.3 (95% CI = 1.0, 1.6). Early life low socioeconomic condition had a controlled direct effect on oral cancer when smoking, chewing tobacco, and alcohol were separately adjusted for in marginal structural models.
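
    The weighting step of such an analysis can be sketched for one binary variable A given covariates L, where the stabilized weight is P(A = a) / P(A = a | L); variable names are hypothetical:

        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("cases_controls.csv")  # assumed columns: a (binary), l1, l2

        num = smf.logit("a ~ 1", data=df).fit(disp=0)         # marginal model
        den = smf.logit("a ~ l1 + l2", data=df).fit(disp=0)   # conditional model

        p_num, p_den = num.predict(df), den.predict(df)
        df["sw"] = (df["a"] * p_num / p_den
                    + (1 - df["a"]) * (1 - p_num) / (1 - p_den))

        # the marginal structural model is then fit weighted by df["sw"]
        print(df["sw"].describe())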

  18. Effects of topographic data quality on estimates of shallow slope stability using different regolith depth models

    Science.gov (United States)

    Baum, Rex L.

    2017-01-01

    Thickness of colluvium or regolith overlying bedrock or other consolidated materials is a major factor in determining stability of unconsolidated earth materials on steep slopes. Many efforts to model spatially distributed slope stability, for example to assess susceptibility to shallow landslides, have relied on estimates of constant thickness, constant depth, or simple models of thickness (or depth) based on slope and other topographic variables. Assumptions of constant depth or thickness rarely give satisfactory results. Geomorphologists have devised a number of different models to represent the spatial variability of regolith depth and applied them to various settings. I have applied some of these models that can be implemented numerically to different study areas with different types of terrain and tested the results against available depth measurements and landslide inventories. The areas include crystalline rocks of the Colorado Front Range, and gently dipping sedimentary rocks of the Oregon Coast Range. Model performance varies with model, terrain type, and with quality of the input topographic data. Steps in contour-derived 10-m digital elevation models (DEMs) introduce significant errors into the predicted distribution of regolith and landslides. Scan lines, facets, and other artifacts further degrade DEMs and model predictions. Resampling to a lower grid-cell resolution can mitigate effects of facets in lidar DEMs of areas where dense forest severely limits ground returns. Due to its higher accuracy and ability to penetrate vegetation, lidar-derived topography produces more realistic distributions of cover and potential landslides than conventional photogrammetrically derived topographic data.

  19. An expert judgment model applied to estimating the safety effect of a bicycle facility.

    Science.gov (United States)

    Leden, L; Gårder, P; Pulkkinen, U

    2000-07-01

    This paper presents a risk index model that can be used for assessing the safety effect of countermeasures. The model estimates risk in a multiplicative way, which makes it possible to analyze the impact of different factors separately. Expert judgments are incorporated through a Bayesian error model. The variance of the risk estimate is determined by Monte-Carlo simulation. The model was applied to assess the safety effect of a new design of a bicycle crossing. The intent was to gain safety by raising the crossings to reduce vehicle speeds and by making the crossings more visible by painting them in a bright color. Before the implementations, bicyclists were riding on bicycle crossings of the conventional Swedish type, i.e. similar to crosswalks but delineated by white squares rather than solid lines or zebra markings. Automobile speeds were reduced as anticipated. However, it seems that the positive effect of this was more or less canceled out by increased bicycle speeds. The safety per bicyclist was still improved by approximately 20%. This improvement was primarily caused by an increase in bicycle flow, since the data show that a larger number of bicyclists at a given location appears to benefit their safety. The increase in bicycle flow was probably caused by the new layout of the crossings, since bicyclists perceived them as safer and as causing less delay. Some future development work is suggested, and the pros and cons of the methodology used are discussed. The most crucial parameter to be added is probably a model describing the interaction between motorists and bicyclists, for example, how risk is influenced by the lateral position of the bicyclist relative to the motorist. It is concluded that the interaction seems to be optimal when both groups share the roadway.
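
    The Monte-Carlo step can be sketched by treating the overall change in risk as a product of uncertain factors; the factor values below are hypothetical, chosen only to land near the reported improvement of roughly 20%:

        import numpy as np

        rng = np.random.default_rng(7)
        n = 100_000

        speed = rng.lognormal(np.log(0.85), 0.10, n)       # lower vehicle speeds
        visibility = rng.lognormal(np.log(0.95), 0.10, n)  # brighter, raised crossing
        bike_speed = rng.lognormal(np.log(1.10), 0.10, n)  # faster bicyclists
        flow = rng.lognormal(np.log(0.90), 0.10, n)        # safety-in-numbers effect

        risk_ratio = speed * visibility * bike_speed * flow
        lo, hi = np.percentile(risk_ratio, [2.5, 97.5])
        print(f"risk ratio: mean {risk_ratio.mean():.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")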

  20. Modeling estimates of the effect of acid rain on background radiation dose.

    Science.gov (United States)

    Sheppard, S C; Sheppard, M I

    1988-06-01

    Acid rain causes accelerated mobilization of many materials in soils. Natural and anthropogenic radionuclides, especially 226Ra and 137Cs, are among these materials. Okamoto is apparently the only researcher to date who has attempted to quantify the effect of acid rain on the "background" radiation dose to man. He estimated an increase in dose by a factor of 1.3 following a decrease in soil pH of 1 unit. We reviewed literature that described the effects of changes in pH on mobility and plant uptake of Ra and Cs. Generally, a decrease in soil pH by 1 unit will increase mobility and plant uptake by factors of 2 to 7. Thus, Okamoto's dose estimate may be too low. We applied several simulation models to confirm Okamoto's ideas, with most emphasis on an atmospherically driven soil model that predicts water and nuclide flow through a soil profile. We modeled a typical, acid-rain sensitive soil using meteorological data from Geraldton, Ontario. The results, within the range of effects on the soil expected from acidification, showed essentially direct proportionality between the mobility of the nuclides and dose. This supports some of the assumptions invoked by Okamoto. We conclude that a decrease in pH of 1 unit may increase the mobility of Ra and Cs by a factor of 2 or more. Our models predict that this will lead to similar increases in plant uptake and radiological dose to man. Although health effects following such a small increase in dose have not been statistically demonstrated, any increase in dose is probably undesirable.

  1. Modeling estimates of the effect of acid rain on background radiation dose

    International Nuclear Information System (INIS)

    Sheppard, S.C.; Sheppard, M.I.

    1988-01-01

    Acid rain causes accelerated mobilization of many materials in soils. Natural and anthropogenic radionuclides, especially 226Ra and 137Cs, are among these materials. Okamoto is apparently the only researcher to date who has attempted to quantify the effect of acid rain on the background radiation dose to man. He estimated an increase in dose by a factor of 1.3 following a decrease in soil pH of 1 unit. We reviewed literature that described the effects of changes in pH on mobility and plant uptake of Ra and Cs. Generally, a decrease in soil pH by 1 unit will increase mobility and plant uptake by factors of 2 to 7. Thus, Okamoto's dose estimate may be too low. We applied several simulation models to confirm Okamoto's ideas, with most emphasis on an atmospherically driven soil model that predicts water and nuclide flow through a soil profile. We modeled a typical, acid-rain sensitive soil using meteorological data from Geraldton, Ontario. The results, within the range of effects on the soil expected from acidification, showed essentially direct proportionality between the mobility of the nuclides and dose. This supports some of the assumptions invoked by Okamoto. We conclude that a decrease in pH of 1 unit may increase the mobility of Ra and Cs by a factor of 2 or more. Our models predict that this will lead to similar increases in plant uptake and radiological dose to man. Although health effects following such a small increase in dose have not been statistically demonstrated, any increase in dose is probably undesirable

  2. [Application of Mixed-effect Model in PMI Estimation by Vitreous Humor].

    Science.gov (United States)

    Yang, M Z; Li, H J; Zhang, T Y; Ding, Z J; Wu, S F; Qiu, X G; Liu, Q

    2018-02-01

    To test the changes of the potassium (K⁺) and magnesium (Mg²⁺) concentrations in the vitreous humor of rabbits along with postmortem interval (PMI) under different temperatures, and to explore the feasibility of PMI estimation using a mixed-effect model. After sacrifice, rabbit carcasses were preserved at 5 ℃, 15 ℃, 25 ℃ and 35 ℃, and 80-100 μL of vitreous humor was collected by the double-eye alternating micro-sampling method every 12 h. The concentrations of K⁺ and Mg²⁺ in vitreous humor were measured by a biochemical-immune analyser. A mixed-effect model was used for analysis and fitting, and equations for PMI estimation were established. Data from samples stored at 10 ℃, 20 ℃ and 30 ℃ for 20, 40 and 65 h were used to validate the PMI estimation equations. The concentrations of K⁺ and Mg²⁺ [f(x, y)] in the vitreous humor of rabbits under different temperatures increased along with PMI (x). The fitted equations relating concentration to PMI and temperature (y) under 5 ℃-35 ℃ included f_K⁺(x, y) = 3.4130 + 0.3092x + 0.3376y + 0.01083xy - 0.00247x² (P < 0.05). The deviation of PMI estimated by K⁺ and Mg²⁺ was within 10 h when PMI was between 0 and 40 h, and within 21 h when PMI was between 40 and 65 h. Within the ambient temperature range of 5 ℃-35 ℃, the mixed-effect model based on temperature and vitreous humor substance concentrations can provide a new method for the practical application of vitreous humor chemistry to PMI estimation. Copyright© by the Editorial Department of Journal of Forensic Medicine.
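
    Taking the reported K⁺ equation at face value, the PMI can be recovered by solving the quadratic in x for a given ambient temperature; the measured concentration below is hypothetical:

        import numpy as np

        def pmi_from_potassium(k_conc, temp_c):
            # f(x, y) = 3.4130 + 0.3092x + 0.3376y + 0.01083xy - 0.00247x^2
            a = -0.00247
            b = 0.3092 + 0.01083 * temp_c
            c = 3.4130 + 0.3376 * temp_c - k_conc
            roots = np.roots([a, b, c])
            real = roots[np.isreal(roots)].real
            return real[(real >= 0) & (real <= 65)]      # model fitted for 0-65 h

        print(pmi_from_potassium(k_conc=15.0, temp_c=20.0))  # about 9.6 h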

  3. A simple numerical model to estimate the effect of coal selection on pulverized fuel burnout

    Energy Technology Data Exchange (ETDEWEB)

    Sun, J.K.; Hurt, R.H.; Niksa, S.; Muzio, L.; Mehta, A.; Stallings, J. [Brown University, Providence, RI (USA). Division Engineering

    2003-06-01

    The amount of unburned carbon in ash is an important performance characteristic in commercial boilers fired with pulverized coal. Unburned carbon levels are known to be sensitive to fuel selection, and there is great interest in methods of estimating the burnout propensity of coals based on proximate and ultimate analysis - the only fuel properties readily available to utility practitioners. A simple numerical model is described that is specifically designed to estimate the effects of coal selection on burnout in a way that is useful for commercial coal screening. The model is based on a highly idealized description of the combustion chamber but employs detailed descriptions of the fundamental fuel transformations. The model is validated against data from laboratory and pilot-scale combustors burning a range of international coals, and then against data obtained from full-scale units during periods of coal switching. The validated model form is then used in a series of sensitivity studies to explore the role of various individual fuel properties that influence burnout.

  4. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    Science.gov (United States)

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of estimated pollution-health effects. Other aspects of model choice may affect the pollution-health estimate, such as the estimation of pollution, and the spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
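
    The averaging idea can be sketched with BIC-based approximate posterior model weights over candidate deprivation covariates; column names are hypothetical, the outcome is treated as Gaussian, and spatial autocorrelation is omitted for brevity:

        import itertools
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("small_area_data.csv")  # assumed: mortality, no2, income, education, housing

        covars = ["income", "education", "housing"]
        fits = []
        for k in range(len(covars) + 1):
            for subset in itertools.combinations(covars, k):
                formula = "mortality ~ no2" + "".join(f" + {c}" for c in subset)
                fits.append(smf.ols(formula, data=df).fit())

        bic = np.array([f.bic for f in fits])
        w = np.exp(-0.5 * (bic - bic.min()))
        w /= w.sum()                             # approximate posterior model weights

        beta = np.array([f.params["no2"] for f in fits])
        print(f"model-averaged NO2 effect: {np.sum(w * beta):.4f}")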

  5. Estimation of morbidity effects

    International Nuclear Information System (INIS)

    Ostro, B.

    1994-01-01

    Many researchers have related exposure to ambient air pollution to respiratory morbidity. To be included in this review and analysis, however, several criteria had to be met. First, a careful study design and a methodology that generated quantitative dose-response estimates were required. Therefore, there was a focus on time-series regression analyses relating daily incidence of morbidity to air pollution in a single city or metropolitan area. Studies that used weekly or monthly average concentrations or that involved particulate measurements in poorly characterized metropolitan areas (e.g., one monitor representing a large region) were not included in this review. Second, studies that minimized confounding and omitted variables were included. For example, research that compared two cities or regions and characterized them as 'high' and 'low' pollution areas was not included because of potential confounding by other factors in the respective areas. Third, concern for the effects of seasonality and weather had to be demonstrated. This could be accomplished by either stratifying and analyzing the data by season, by examining the independent effects of temperature and humidity, and/or by correcting the model for possible autocorrelation. A fourth criterion for study inclusion was that the study had to include a reasonably complete analysis of the data. Such analysis would include a careful exploration of the primary hypothesis as well as possible examination of the robustness and sensitivity of the results to alternative functional forms, specifications, and influential data points. When studies reported the results of these alternative analyses, the quantitative estimates that were judged as most representative of the overall findings were those summarized in this paper. Finally, for inclusion in the review of particulate matter, the study had to provide a measure of particle concentration that could be converted into PM10, particulate matter below 10 μm

  6. Model Effects on GLAS-Based Regional Estimates of Forest Biomass and Carbon

    Science.gov (United States)

    Nelson, Ross

    2008-01-01

    ICESat/GLAS waveform data are used to estimate biomass and carbon on a 1.27 million sq km study area, the Province of Quebec, Canada, below treeline. The same input data sets and sampling design are used in conjunction with four different predictive models to estimate total aboveground dry forest biomass and forest carbon. The four models include nonstratified and stratified versions of a multiple linear model where either biomass or (square root of) biomass serves as the dependent variable. The use of different models in Quebec introduces differences in Provincial biomass estimates of up to 0.35 Gt (range 4.94+/-0.28 Gt to 5.29+/-0.36 Gt). The results suggest that if different predictive models are used to estimate regional carbon stocks in different epochs, e.g., y2005, y2015, one might mistakenly infer an apparent aboveground carbon "change" of, in this case, 0.18 Gt, or approximately 7% of the aboveground carbon in Quebec, due solely to the use of different predictive models. These findings argue for model consistency in future, LiDAR-based carbon monitoring programs. Regional biomass estimates from the four GLAS models are compared to ground estimates derived from an extensive network of 16,814 ground plots located in southern Quebec. Stratified models proved to be more accurate and precise than either of the two nonstratified models tested.

  7. Effects of Source RDP Models and Near-source Propagation: Implication for Seismic Yield Estimation

    Science.gov (United States)

    Saikia, C. K.; Helmberger, D. V.; Stead, R. J.; Woods, B. B.

    It has proven difficult to uniquely untangle the source and propagation effects on the observed seismic data from underground nuclear explosions, even when large quantities of near-source, broadband data are available for analysis. This leads to uncertainties in our ability to quantify the nuclear seismic source function and, consequently, the accuracy of seismic yield estimates for underground explosions. Extensive deterministic modeling analyses of the seismic data recorded from underground explosions at a variety of test sites have been conducted over the years, and the results of these studies suggest that variations in the seismic source characteristics between test sites may be contributing to the observed differences in the magnitude/yield relations applicable at those sites. This contributes to our uncertainty in the determination of seismic yield estimates for explosions at previously uncalibrated test sites. In this paper we review issues involving the relationship of Nevada Test Site (NTS) source scaling laws to those at other sites. The Joint Verification Experiment (JVE) indicates that a magnitude (mb) bias (δmb) exists between the Semipalatinsk test site (STS) in the former Soviet Union (FSU) and the Nevada test site (NTS) in the United States. Generally this δmb is attributed to differential attenuation in the upper mantle beneath the two test sites. This assumption results in rather large estimates of yield for large mb tunnel shots at Novaya Zemlya. A re-examination of the US testing experiments suggests that this δmb bias can partly be explained by anomalous NTS (Pahute) source characteristics. This interpretation is based on the modeling of US events at a number of test sites. Using a modified Haskell source description, we investigated the influence of the source Reduced Displacement Potential (RDP) parameters ψ∞, K and B by fitting short- and long-period data simultaneously, including the near-field body and surface waves. In general

  8. A structural dynamic factor model for the effects of monetary policy estimated by the EM algorithm

    DEFF Research Database (Denmark)

    Bork, Lasse

    This paper applies the maximum likelihood based EM algorithm to a large-dimensional factor analysis of US monetary policy. Specifically, economy-wide effects of shocks to the US federal funds rate are estimated in a structural dynamic factor model in which 100+ US macroeconomic and financial time series are driven by the joint dynamics of the federal funds rate and a few correlated dynamic factors. This paper contains a number of methodological contributions to the existing literature on data-rich monetary policy analysis. Firstly, the identification scheme allows for correlated factor dynamics, as opposed to the orthogonal factors resulting from the popular principal component approach to structural factor models. Correlated factors are economically more sensible and important for a richer monetary policy transmission mechanism. Secondly, I consider both static factor loadings as well as dynamic...

  9. Benefits of dominance over additive models for the estimation of average effects in the presence of dominance

    NARCIS (Netherlands)

    Duenk, Pascal; Calus, Mario P.L.; Wientjes, Yvonne C.J.; Bijma, Piter

    2017-01-01

    In quantitative genetics, the average effect at a single locus can be estimated by an additive (A) model, or an additive plus dominance (AD) model. In the presence of dominance, the AD-model is expected to be more accurate, because the A-model falsely assumes that residuals are independent and identically distributed.

  10. A model to estimate the cost effectiveness of the indoor environment improvements in office work

    Energy Technology Data Exchange (ETDEWEB)

    Seppanen, Olli; Fisk, William J.

    2004-06-01

    Deteriorated indoor climate is commonly related to increases in sick building syndrome symptoms, respiratory illnesses, sick leave, reduced comfort and losses in productivity. The cost of a deteriorated indoor climate for society is high. Some calculations show that the cost is higher than the heating energy costs of the same buildings. Building-level calculations have also shown that many measures taken to improve indoor air quality and climate are cost-effective when the potential monetary savings resulting from an improved indoor climate are included as benefits gained. As an initial step towards systemizing these building-level calculations we have developed a conceptual model to estimate the cost-effectiveness of various measures. The model shows the links between the improvements in the indoor environment and the following potential financial benefits: reduced medical care cost, reduced sick leave, better performance of work, lower turnover of employees, and lower cost of building maintenance due to fewer complaints about indoor air quality and climate. The pathways to these potential benefits from changes in building technology and practices go via several human responses to the indoor environment, such as infectious diseases, allergies and asthma, sick building syndrome symptoms, perceived air quality, and thermal environment. The model also includes the annual cost of investments, operation costs, and cost savings of improved indoor climate. The conceptual model illustrates how various factors are linked to each other. SBS symptoms are probably the most commonly assessed health responses in IEQ studies and have been linked to several characteristics of buildings and IEQ. While the available evidence indicates that SBS symptoms can affect these outcomes and suggests that such a linkage exists, at present we cannot quantify the relationships sufficiently for cost-benefit modeling. New research and analyses of existing data to quantify the financial

  11. The Biasing Effects of Unmodeled ARMA Time Series Processes on Latent Growth Curve Model Estimates

    Science.gov (United States)

    Sivo, Stephen; Fan, Xitao; Witta, Lea

    2005-01-01

    The purpose of this study was to evaluate the robustness of estimated growth curve models when there is stationary autocorrelation among manifest variable errors. The results suggest that when, in practice, growth curve models are fitted to longitudinal data, alternative rival hypotheses to consider would include growth models that also specify…

  12. Note on an Identity Between Two Unbiased Variance Estimators for the Grand Mean in a Simple Random Effects Model.

    Science.gov (United States)

    Levin, Bruce; Leu, Cheng-Shiun

    2013-01-01

    We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.

  13. Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.

    Science.gov (United States)

    Falk, Carl F; Biesanz, Jeremy C

    2011-11-30

    Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06 GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
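
    The percentile (PC) bootstrap can be sketched as follows; for brevity the three variables are observed composites rather than latent variables, and column names are hypothetical:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("sem_data.csv")         # assumed columns: x, m, y
        rng = np.random.default_rng(0)

        def indirect(d):
            a = smf.ols("m ~ x", data=d).fit().params["x"]
            b = smf.ols("y ~ x + m", data=d).fit().params["m"]
            return a * b

        def resample(d):
            return d.iloc[rng.integers(0, len(d), len(d))]

        boot = np.array([indirect(resample(df)) for _ in range(2000)])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"indirect = {indirect(df):.3f}, 95% PC CI [{lo:.3f}, {hi:.3f}]")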

  14. The effect of coupling hydrologic and hydrodynamic models on probable maximum flood estimation

    Science.gov (United States)

    Felder, Guido; Zischg, Andreas; Weingartner, Rolf

    2017-07-01

    Deterministic rainfall-runoff modelling usually assumes a stationary hydrological system, as model parameters are calibrated with, and are therefore dependent on, observed data. However, runoff processes are probably not stationary in the case of a probable maximum flood (PMF), where discharge greatly exceeds observed flood peaks. Developing hydrodynamic models and using them to build coupled hydrologic-hydrodynamic models can potentially improve the plausibility of PMF estimations. This study aims to assess the potential benefits and constraints of coupled modelling compared to standard deterministic hydrologic modelling when it comes to PMF estimation. The two modelling approaches are applied using a set of 100 spatio-temporal probable maximum precipitation (PMP) distribution scenarios. The resulting hydrographs, the resulting peak discharges, as well as the reliability and the plausibility of the estimates are evaluated. The discussion of the results shows that coupling hydrologic and hydrodynamic models substantially improves the physical plausibility of PMF modelling, although both modelling approaches lead to PMF estimations for the catchment outlet that fall within a similar range. Using a coupled model is particularly suggested in cases where considerable flood-prone areas are situated within a catchment.

  15. Model Effects on GLAS-Based Regional Estimates of Forest Biomass and Carbon

    Science.gov (United States)

    Nelson, Ross F.

    2010-01-01

    Ice, Cloud, and land Elevation Satellite (ICESat) / Geosciences Laser Altimeter System (GLAS) waveform data are used to estimate biomass and carbon on a 1.27 × 10^6 sq km study area in the Province of Quebec, Canada, below the tree line. The same input datasets and sampling design are used in conjunction with four different predictive models to estimate total aboveground dry forest biomass and forest carbon. The four models include non-stratified and stratified versions of a multiple linear model where either biomass or (biomass)^0.5 serves as the dependent variable. The use of different models in Quebec introduces differences in Provincial dry biomass estimates of up to 0.35 Gt, with a range of 4.94 +/- 0.28 Gt to 5.29 +/- 0.36 Gt. The differences among model estimates are statistically non-significant, however, and the results demonstrate the degree to which carbon estimates vary strictly as a function of the model used to estimate regional biomass. Results also indicate that GLAS measurements become problematic with respect to height and biomass retrievals in the boreal forest when biomass values fall below 20 t/ha and when GLAS 75th percentile heights fall below 7 m.

  16. Improving observational study estimates of treatment effects using joint modeling of selection effects and outcomes: the case of AAA repair.

    Science.gov (United States)

    O'Malley, A James; Cotterill, Philip; Schermerhorn, Marc L; Landon, Bruce E

    2011-12-01

    When 2 treatment approaches are available, there are likely to be unmeasured confounders that influence choice of procedure, which complicates estimation of the causal effect of treatment on outcomes using observational data. To estimate the effect of endovascular (endo) versus open surgical (open) repair, including possible modification by institutional volume, on survival after treatment for abdominal aortic aneurysm, accounting for observed and unobserved confounding variables. Observational study of data from the Medicare program using a joint model of treatment selection and survival given treatment to estimate the effects of type of surgery and institutional volume on survival. We studied 61,414 eligible repairs of intact abdominal aortic aneurysms during 2001 to 2004. The outcome, perioperative death, is defined as in-hospital death or death within 30 days of operation. The key predictors are use of endo, transformed endo and open volume, and endo-volume interactions. There is strong evidence of nonrandom selection of treatment with potential confounding variables including institutional volume and procedure date, variables not typically adjusted for in clinical trials. The best fitting model included heterogeneous transformations of endo volume for endo cases and open volume for open cases as predictors. Consistent with our hypothesis, accounting for unmeasured selection reduced the mortality benefit of endo. The effect of endo versus open surgery varies nonlinearly with endo and open volume. Accounting for institutional experience and unmeasured selection enables better decision-making by physicians making treatment referrals, investigators evaluating treatments, and policy makers.

  17. Letter to the Editor: Applications Air Q Model on Estimate Health Effects Exposure to Air Pollutants

    Directory of Open Access Journals (Sweden)

    Gholamreza Goudarzi

    2016-02-01

    Full Text Available Epidemiologic studies worldwide have measured increases in mortality and morbidity associated with air pollution (1-3). Quantifying the effects of air pollution on human health in urban areas has become an increasingly critical component of policy discussion (4-6). The Air Q model has proved to be a valid and reliable tool to predict health effects related to criteria pollutants (particulate matter (PM), ozone (O3), nitrogen dioxide (NO2), sulfur dioxide (SO2), and carbon monoxide (CO)), to determine the potential short-term effects of air pollution, and to allow the examination of various scenarios in which the emission rates of pollutants are varied (7,8). The Air Q software is provided by the WHO European Centre for Environment and Health (ECEH) (9). The Air Q model is based on cohort studies and is used to estimate both attributable average reductions in life-span and the numbers of mortality and morbidity cases associated with exposure to air pollution (10,11). Applications

  18. A New Form of Nondestructive Strength-Estimating Statistical Models Accounting for Uncertainty of Model and Aging Effect of Concrete

    International Nuclear Information System (INIS)

    Hong, Kee Jeung; Kim, Jee Sang

    2009-01-01

    As concrete ages, the surrounding environment is expected to have growing influences on the concrete. As all the impacts of the environment cannot be considered in the strength-estimating model of a nondestructive concrete test, the increase in concrete age leads to growing uncertainty in the strength-estimating model, and the variance of the model error therefore increases. It is necessary to include those impacts in the probability model of concrete strength obtained from nondestructive tests so as to build a more accurate reliability model for structural performance evaluation. This paper reviews and categorizes the existing strength-estimating statistical models for nondestructive concrete tests, and suggests a new form of strength-estimating statistical model to properly reflect the model uncertainty due to aging of the concrete. This new form of statistical model will lay the foundation for more accurate structural performance evaluation.

  19. Estimated allele substitution effects underlying genomic evaluation models depend on the scaling of allele counts

    NARCIS (Netherlands)

    Bouwman, Aniek C.; Hayes, Ben J.; Calus, Mario P.L.

    2017-01-01

    Background: Genomic evaluation is used to predict direct genomic values (DGV) for selection candidates in breeding programs, but also to estimate allele substitution effects (ASE) of single nucleotide polymorphisms (SNPs). Scaling of allele counts influences the estimated ASE, because scaling of

  20. Application of a simple analytical model to estimate effectiveness of radiation shielding for neutrons

    International Nuclear Information System (INIS)

    Frankle, S.C.; Fitzgerald, D.H.; Hutson, R.L.; Macek, R.J.; Wilkinson, C.A.

    1993-01-01

    Neutron dose equivalent rates have been measured for 800-MeV proton beam spills at the Los Alamos Meson Physics Facility. Neutron detectors were used to measure the neutron dose levels at a number of locations for each beam-spill test, and neutron energy spectra were measured for several beam-spill tests. Estimates of expected levels for various detector locations were made using a simple analytical model developed for 800-MeV proton beam spills. A comparison of measurements and model estimates indicates that the model is reasonably accurate in estimating the neutron dose equivalent rate for simple shielding geometries. The model fails for more complicated shielding geometries, where indirect contributions to the dose equivalent rate can dominate
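
    A generic point-kernel version of such a simple analytical model combines inverse-square geometry with exponential attenuation in a slab shield; the source term and attenuation length below are illustrative assumptions, not the calibrated values of the model described above:

        import math

        def dose_rate(source_term, distance_m, shield_cm, atten_length_cm):
            """Neutron dose equivalent rate behind a slab shield (arbitrary units)."""
            geometry = 1.0 / distance_m**2                 # inverse-square falloff
            attenuation = math.exp(-shield_cm / atten_length_cm)
            return source_term * geometry * attenuation

        # e.g. 2 m of concrete (an assumed ~40 cm attenuation length for
        # high-energy neutrons) viewed from 10 m away
        print(f"{dose_rate(1.0e6, 10.0, 200.0, 40.0):.3g}")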

  1. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    Science.gov (United States)

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
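
    A minimal bootstrap SISR sketch for a population observed through Poisson counts, estimating a growth-rate parameter alongside the state and using kernel jitter to limit particle depletion; the process model, priors, and numbers are illustrative:

        import numpy as np
        from scipy.stats import poisson

        rng = np.random.default_rng(3)
        T, n = 25, 5000

        # simulate a trajectory and counts from known parameters
        true_growth, N_true = 1.05, [100.0]
        for _ in range(T - 1):
            N_true.append(N_true[-1] * true_growth * rng.lognormal(0.0, 0.05))
        counts = rng.poisson(N_true)

        state = rng.uniform(50.0, 200.0, n)      # prior on initial population size
        growth = rng.normal(1.0, 0.1, n)         # informative prior on growth rate

        for y in counts:
            w = poisson.pmf(y, state)            # observation likelihood
            w /= w.sum()
            idx = rng.choice(n, size=n, p=w)     # resample (the "R" in SISR)
            state, growth = state[idx], growth[idx]
            growth += rng.normal(0.0, 0.005, n)  # kernel jitter against depletion
            state *= growth * rng.lognormal(0.0, 0.05, n)   # propagate

        print(f"estimated growth rate: {growth.mean():.3f} (true {true_growth})")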

  2. Benefits of Dominance over Additive Models for the Estimation of Average Effects in the Presence of Dominance

    Directory of Open Access Journals (Sweden)

    Pascal Duenk

    2017-10-01

    Full Text Available In quantitative genetics, the average effect at a single locus can be estimated by an additive (A) model, or an additive plus dominance (AD) model. In the presence of dominance, the AD-model is expected to be more accurate, because the A-model falsely assumes that residuals are independent and identically distributed. Our objective was to investigate the accuracy of an estimated average effect (α̂) in the presence of dominance, using either a single locus A-model or AD-model. Estimation was based on a finite sample from a large population in Hardy-Weinberg equilibrium (HWE), and the root mean squared error of α̂ was calculated for several broad-sense heritabilities, sample sizes, and sizes of the dominance effect. Results show that with the A-model, both sampling deviations of genotype frequencies from HWE frequencies and sampling deviations of allele frequencies contributed to the error. With the AD-model, only sampling deviations of allele frequencies contributed to the error, provided that all three genotype classes were sampled. In the presence of dominance, the root mean squared error of α̂ with the AD-model was always smaller than with the A-model, even when the heritability was less than one. Remarkably, in the absence of dominance, there was no disadvantage of fitting dominance. In conclusion, the AD-model yields more accurate estimates of average effects from a finite sample, because it is more robust against sampling deviations from HWE frequencies than the A-model. Genetic models that include dominance, therefore, yield higher accuracies of estimated average effects than purely additive models when dominance is present.
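
    A toy re-creation of this comparison under the stated HWE setup: simulate a locus with dominance, then estimate the average effect with the A-model (slope on allele count) and the AD-model (count plus heterozygosity, combined as alpha = a + d(q - p)); all numbers are illustrative:

        import numpy as np

        rng = np.random.default_rng(11)
        p, a, d, n = 0.3, 1.0, 0.8, 200        # allele frequency, additive, dominance
        alpha_true = a + d * (1 - 2 * p)       # average effect in the population

        est_A, est_AD = [], []
        for _ in range(2000):
            g = rng.binomial(2, p, n)          # allele counts under HWE
            het = (g == 1).astype(float)
            y = np.select([g == 0, g == 1], [-a, d], a) + rng.normal(0.0, 1.0, n)

            est_A.append(np.polyfit(g, y, 1)[0])          # A-model slope

            X = np.column_stack([np.ones(n), g, het])     # AD-model design
            b = np.linalg.lstsq(X, y, rcond=None)[0]
            est_AD.append(b[1] + b[2] * (1 - 2 * g.mean() / 2))

        def rmse(e):
            return np.sqrt(np.mean((np.array(e) - alpha_true) ** 2))
        print(f"true alpha {alpha_true:.3f}; RMSE A {rmse(est_A):.3f}, AD {rmse(est_AD):.3f}")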

  3. Estimating the effect of a variable in a high-dimensional regression model

    DEFF Research Database (Denmark)

    Jensen, Peter Sandholt; Wurtz, Allan

    assume that the effect is identified in a high-dimensional linear model specified by unconditional moment restrictions. We consider properties of the following methods, which rely on low-dimensional models to infer the effect: extreme bounds analysis, the minimum t-statistic over models, Sala...

  4. Effect of Estimated Daily Global Solar Radiation Data on the Results of Crop Growth Models

    Directory of Open Access Journals (Sweden)

    Herbert Formayer

    2007-10-01

    Full Text Available The results of previous studies have suggested that estimated daily global radiation (RG) values contain an error that could compromise the precision of subsequent crop model applications. The following study presents a detailed site and spatial analysis of the RG error propagation in CERES and WOFOST crop growth models in Central European climate conditions. The research was conducted (i) at eight individual sites in Austria and the Czech Republic where measured daily RG values were available as a reference, with seven methods for RG estimation being tested, and (ii) for the agricultural areas of the Czech Republic using daily data from 52 weather stations, with five RG estimation methods. In the latter case the RG values estimated from the hours of sunshine using the Ångström-Prescott formula were used as the standard method because of the lack of measured RG data. At the site level we found that even the use of methods based on hours of sunshine, which showed the lowest bias in RG estimates, led to a significant distortion of the key crop model outputs. When the Ångström-Prescott method was used to estimate RG, for example, deviations greater than ±10 per cent in winter wheat and spring barley yields were noted in 5 to 6 per cent of cases. The precision of the yield estimates and other crop model outputs was lower when RG estimates based on the diurnal temperature range and cloud cover were used (mean bias error 2.0 to 4.1 per cent). The methods for estimating RG from the diurnal temperature range produced a wheat yield bias of more than 25 per cent in 12 to 16 per cent of the seasons. Such uncertainty in the crop model outputs makes the reliability of any seasonal yield forecasts or climate change impact assessments questionable if they are based on this type of data. The spatial assessment of the RG data uncertainty propagation over the winter wheat yields also revealed significant differences within the study area. We

  5. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Yun Shi

    2014-01-01

    Full Text Available Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to the analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if they were of additive random errors. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.

  6. Estimated Effects of Different Alcohol Taxation and Price Policies on Health Inequalities: A Mathematical Modelling Study.

    Directory of Open Access Journals (Sweden)

    Petra S Meier

    2016-02-01

    Full Text Available While evidence that alcohol pricing policies reduce alcohol-related health harm is robust, and alcohol taxation increases are a WHO "best buy" intervention, there is a lack of research comparing the scale and distribution across society of health impacts arising from alternative tax and price policy options. The aim of this study is to test whether four common alcohol taxation and pricing strategies differ in their impact on health inequalities. An econometric epidemiological model was built with England 2014/2015 as the setting. Four pricing strategies implemented on top of the current tax were equalised to give the same 4.3% population-wide reduction in total alcohol-related mortality: current tax increase, a 13.4% all-product duty increase under the current UK system; a value-based tax, a 4.0% ad valorem tax based on product price; a strength-based tax, a volumetric tax of £0.22 per UK alcohol unit (= 8 g of ethanol); and minimum unit pricing, a minimum price threshold of £0.50 per unit, below which alcohol cannot be sold. Model inputs were calculated by combining data from representative household surveys on alcohol purchasing and consumption, administrative and healthcare data on 43 alcohol-attributable diseases, and published price elasticities and relative risk functions. Outcomes were annual per capita consumption, consumer spending, and alcohol-related deaths. Uncertainty was assessed via partial probabilistic sensitivity analysis (PSA) and scenario analysis. The pricing strategies differ as to how effects are distributed across the population, and, from a public health perspective, heavy drinkers in routine/manual occupations are a key group as they are at greatest risk of health harm from their drinking. Strength-based taxation and minimum unit pricing would have greater effects on mortality among drinkers in routine/manual occupations (particularly for heavy drinkers, where the estimated policy effects on mortality rates are as

  7. Estimated Effects of Different Alcohol Taxation and Price Policies on Health Inequalities: A Mathematical Modelling Study

    Science.gov (United States)

    Meier, Petra S.; Holmes, John; Angus, Colin; Ally, Abdallah K.; Meng, Yang; Brennan, Alan

    2016-01-01

    Introduction While evidence that alcohol pricing policies reduce alcohol-related health harm is robust, and alcohol taxation increases are a WHO “best buy” intervention, there is a lack of research comparing the scale and distribution across society of health impacts arising from alternative tax and price policy options. The aim of this study is to test whether four common alcohol taxation and pricing strategies differ in their impact on health inequalities. Methods and Findings An econometric epidemiological model was built with England 2014/2015 as the setting. Four pricing strategies implemented on top of the current tax were equalised to give the same 4.3% population-wide reduction in total alcohol-related mortality: current tax increase, a 13.4% all-product duty increase under the current UK system; a value-based tax, a 4.0% ad valorem tax based on product price; a strength-based tax, a volumetric tax of £0.22 per UK alcohol unit (= 8 g of ethanol); and minimum unit pricing, a minimum price threshold of £0.50 per unit, below which alcohol cannot be sold. Model inputs were calculated by combining data from representative household surveys on alcohol purchasing and consumption, administrative and healthcare data on 43 alcohol-attributable diseases, and published price elasticities and relative risk functions. Outcomes were annual per capita consumption, consumer spending, and alcohol-related deaths. Uncertainty was assessed via partial probabilistic sensitivity analysis (PSA) and scenario analysis. The pricing strategies differ as to how effects are distributed across the population, and, from a public health perspective, heavy drinkers in routine/manual occupations are a key group as they are at greatest risk of health harm from their drinking. Strength-based taxation and minimum unit pricing would have greater effects on mortality among drinkers in routine/manual occupations (particularly for heavy drinkers, where the estimated policy effects on

  8. Estimated Effects of Different Alcohol Taxation and Price Policies on Health Inequalities: A Mathematical Modelling Study.

    Science.gov (United States)

    Meier, Petra S; Holmes, John; Angus, Colin; Ally, Abdallah K; Meng, Yang; Brennan, Alan

    2016-02-01

    While evidence that alcohol pricing policies reduce alcohol-related health harm is robust, and alcohol taxation increases are a WHO "best buy" intervention, there is a lack of research comparing the scale and distribution across society of health impacts arising from alternative tax and price policy options. The aim of this study is to test whether four common alcohol taxation and pricing strategies differ in their impact on health inequalities. An econometric epidemiological model was built with England 2014/2015 as the setting. Four pricing strategies implemented on top of the current tax were equalised to give the same 4.3% population-wide reduction in total alcohol-related mortality: current tax increase, a 13.4% all-product duty increase under the current UK system; a value-based tax, a 4.0% ad valorem tax based on product price; a strength-based tax, a volumetric tax of £0.22 per UK alcohol unit (= 8 g of ethanol); and minimum unit pricing, a minimum price threshold of £0.50 per unit, below which alcohol cannot be sold. Model inputs were calculated by combining data from representative household surveys on alcohol purchasing and consumption, administrative and healthcare data on 43 alcohol-attributable diseases, and published price elasticities and relative risk functions. Outcomes were annual per capita consumption, consumer spending, and alcohol-related deaths. Uncertainty was assessed via partial probabilistic sensitivity analysis (PSA) and scenario analysis. The pricing strategies differ as to how effects are distributed across the population, and, from a public health perspective, heavy drinkers in routine/manual occupations are a key group as they are at greatest risk of health harm from their drinking. Strength-based taxation and minimum unit pricing would have greater effects on mortality among drinkers in routine/manual occupations (particularly for heavy drinkers, where the estimated policy effects on mortality rates are as follows: current tax
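
    The engine of such models is the elasticity link from price changes to consumption changes. The following minimal constant-elasticity sketch (Python) is illustrative only: it treats the duty increases quoted above as if they passed through fully to retail prices and uses a single hypothetical own-price elasticity, whereas the study combines 256 elasticity estimates with risk functions across 43 diseases.

    def consumption_after_price_change(q0, elasticity, price_change_pct):
        """Constant-elasticity demand: q1 = q0 * (p1/p0) ** elasticity."""
        return q0 * (1.0 + price_change_pct / 100.0) ** elasticity

    q0 = 12.0                      # baseline units per week (hypothetical)
    own_price_elasticity = -0.5    # hypothetical single value

    for price_rise in (4.0, 13.4):
        q1 = consumption_after_price_change(q0, own_price_elasticity, price_rise)
        print(f"{price_rise:5.1f}% price rise -> "
              f"{100 * (q1 / q0 - 1):+.1f}% consumption change")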

  9. PREMIM and EMIM: tools for estimation of maternal, imprinting and interaction effects using multinomial modelling

    Directory of Open Access Journals (Sweden)

    Howey Richard

    2012-06-01

    Full Text Available Abstract Background Here we present two new computer tools, PREMIM and EMIM, for the estimation of parental and child genetic effects, based on genotype data from a variety of different child-parent configurations. PREMIM allows the extraction of child-parent genotype data from standard-format pedigree data files, while EMIM uses the extracted genotype data to perform subsequent statistical analysis. The use of genotype data from the parents as well as from the child in question allows the estimation of complex genetic effects such as maternal genotype effects, maternal-foetal interactions and parent-of-origin (imprinting) effects. These effects are estimated by EMIM, incorporating chosen assumptions such as Hardy-Weinberg equilibrium or exchangeability of parental matings as required. Results In application to simulated data, we show that the inference provided by EMIM is essentially equivalent to that provided by alternative (competing) software packages such as MENDEL and LEM. However, PREMIM and EMIM (used in combination) considerably outperform MENDEL and LEM in terms of speed and ease of execution. Conclusions Together, EMIM and PREMIM provide easy-to-use command-line tools for the analysis of pedigree data, giving unbiased estimates of parental and child genotype relative risks.

  10. Evaluation of the Effective Moisture Penetration Depth Model for Estimating Moisture Buffering in Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Winkler, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Christensen, D. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-01-01

    This study examines the effective moisture penetration depth (EMPD) model and its suitability for building simulations. The EMPD model is a compromise between the simple but inaccurate effective capacitance approach and the complex yet accurate finite-difference approach. Two formulations of the EMPD model were examined, including the one used in the EnergyPlus building simulation software. An error that we uncovered in the EMPD model was fixed in the release of EnergyPlus version 7.2; the EMPD model in earlier versions of EnergyPlus should not be used.

  11. Systematic screening for Chlamydia trachomatis : Estimating cost-effectiveness using dynamic modeling and Dutch data

    NARCIS (Netherlands)

    de Vries, R.; Van Bergen, J.E.A.M.; de Jong-van den Berg, Lolkje; Postma, Maarten

    2006-01-01

    To estimate the cost-effectiveness of a systematic one-off Chlamydia trachomatis (CT) screening program including partner treatment for Dutch young adults. Data on infection prevalence, participation rates, and sexual behavior were obtained from a large pilot study conducted in The Netherlands.

  12. Systematic screening for Chlamydia trachomatis: estimating cost-effectiveness using dynamic modeling and Dutch data

    NARCIS (Netherlands)

    de Vries, Robin; van Bergen, Jan E. A. M.; de Jong-van den Berg, Lolkje T. W.; Postma, Maarten J.

    2006-01-01

    To estimate the cost-effectiveness of a systematic one-off Chlamydia trachomatis (CT) screening program including partner treatment for Dutch young adults. Data on infection prevalence, participation rates, and sexual behavior were obtained from a large pilot study conducted in The Netherlands.

  13. The effect of PLS regression in PLS path model estimation when multicollinearity is present

    DEFF Research Database (Denmark)

    Nielsen, Rikke; Kristensen, Kai; Eskildsen, Jacob

    PLS path modelling has previously been found to be robust to multicollinearity both between latent variables and between manifest variables of a common latent variable (see e.g. Cassel et al. (1999), Kristensen, Eskildsen (2005), Westlund et al. (2008)). However, most of the studies investigate...... models with relatively few variables and very simple dependence structures compared to the models that are often estimated in practical settings. A recent study by Nielsen et al. (2009) found that when model structure is more complex, PLS path modelling is not as robust to multicollinearity between...... latent variables as previously assumed. A difference in the standard error of path coefficients of as much as 83% was found between moderate and severe levels of multicollinearity. Large differences were found not only for large path coefficients, but also for small path coefficients and in some cases...

  14. A Comparison of Uniform DIF Effect Size Estimators under the MIMIC and Rasch Models

    Science.gov (United States)

    Jin, Ying; Myers, Nicholas D.; Ahn, Soyeon; Penfield, Randall D.

    2013-01-01

    The Rasch model, a member of a larger group of models within item response theory, is widely used in empirical studies. Detection of uniform differential item functioning (DIF) within the Rasch model typically employs null hypothesis testing with a concomitant consideration of effect size (e.g., signed area [SA]). Parametric equivalence between…

  15. Estimating genetic effect sizes under joint disease-endophenotype models in presence of gene-environment interactions

    Directory of Open Access Journals (Sweden)

    Alexandre eBureau

    2015-07-01

    Full Text Available Effects of genetic variants on the risk of complex diseases estimated from association studies are typically small. Nonetheless, variants may have important effects in the presence of specific levels of environmental exposure, and when a trait related to the disease (an endophenotype) is either normal or impaired. We propose polytomous and transition models to represent the relationship between disease, endophenotype, genotype and environmental exposure in family studies. Model coefficients were estimated using generalized estimating equations and were used to derive gene-environment interaction effects and genotype effects at specific levels of exposure. In a simulation study, estimates of the effect of a genetic variant were substantially higher when both an endophenotype and an environmental exposure modifying the variant effect were taken into account, particularly under transition models, compared to the alternative of ignoring the endophenotype. Illustration of the proposed modeling with the metabolic syndrome, abdominal obesity, physical activity, and polymorphisms in the NOX3 gene in the Quebec Family Study revealed that the positive association of the A allele of rs1375713 with the metabolic syndrome at high levels of physical activity was only detectable in subjects without abdominal obesity, illustrating the importance of taking the abdominal obesity endophenotype into account in this analysis.
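
    A minimal sketch (not the authors' code) of the GEE step: a logistic model for a binary disease outcome with a gene-by-exposure interaction, clustering on family. All variable names and coefficients are hypothetical, and the genotype effect at a given exposure level is derived from the fitted coefficients as described above.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_fam, fam_size = 200, 3
    fam = np.repeat(np.arange(n_fam), fam_size)
    g = rng.binomial(2, 0.3, fam.size)              # genotype: 0/1/2 copies
    e = rng.binomial(1, 0.5, fam.size)              # binary exposure
    lin = -1.0 + 0.1 * g + 0.2 * e + 0.6 * g * e    # assumed true model
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

    df = pd.DataFrame({"y": y, "g": g, "e": e, "fam": fam})
    res = sm.GEE.from_formula("y ~ g + e + g:e", groups="fam", data=df,
                              family=sm.families.Binomial(),
                              cov_struct=sm.cov_struct.Exchangeable()).fit()
    # Genotype log-odds effect among the exposed (e = 1): main effect + interaction.
    print(res.params["g"] + res.params["g:e"])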

  16. A model to estimate effects of SNPs on host susceptibility and infectivity for an endemic infectious disease.

    Science.gov (United States)

    Biemans, Floor; de Jong, Mart C M; Bijma, Piter

    2017-06-30

    Infectious diseases in farm animals affect animal health, decrease animal welfare and can affect human health. Selection and breeding of host individuals with desirable traits regarding infectious diseases can help to fight disease transmission, which is affected by two types of (genetic) traits: host susceptibility and host infectivity. Quantitative genetic studies on infectious diseases generally connect an individual's disease status to its own genotype, and therefore capture genetic effects on susceptibility only. However, they usually ignore variation in exposure to infectious herd mates, which may limit the accuracy of estimates of genetic effects on susceptibility. Moreover, genetic effects on infectivity will exist as well. Thus, to design optimal breeding strategies, it is essential that genetic effects on infectivity are quantified. Given the potential importance of genetic effects on infectivity, we set out to develop a model to estimate the effect of single nucleotide polymorphisms (SNPs) on both host susceptibility and host infectivity. To evaluate the quality of the resulting SNP effect estimates, we simulated an endemic disease in 10 groups of 100 individuals, and recorded time-series data on individual disease status. We quantified bias and precision of the estimates for different sizes of SNP effects, and identified the optimum recording interval when the number of records is limited. We present a generalized linear mixed model to estimate the effect of SNPs on both host susceptibility and host infectivity. SNP effects were on average slightly underestimated, i.e. estimates were conservative. Estimates were less precise for infectivity than for susceptibility. Given our sample size, the power to estimate SNP effects for susceptibility was 100% for differences between genotypes of a factor 1.56 or more, and was higher than 60% for infectivity for differences between genotypes of a factor 4 or more. When disease status was recorded 11 times on each

  17. A computer model of the biosphere, to estimate stochastic and non-stochastic effects of radionuclides on humans

    International Nuclear Information System (INIS)

    Laurens, J.M.

    1985-01-01

    A computer code was written to model food chains in order to estimate the internal and external doses, for stochastic and non-stochastic effects, on humans (adults and infants). Results are given for 67 radionuclides, for unit concentration in water (1 Bq/L) and in the atmosphere (1 Bq/m³)

  18. ACUTE-TO-CHRONIC ESTIMATION (ACE V 2.0) WITH TIME-CONCENTRATION-EFFECT MODELS: USER MANUAL AND SOFTWARE

    Science.gov (United States)

    Ellersieck, Mark R., Amha Asfaw, Foster L. Mayer, Gary F. Krause, Kai Sun and Gunhee Lee. 2003. Acute-to-Chronic Estimation (ACE v2.0) with Time-Concentration-Effect Models: User Manual and Software. EPA/600/R-03/107. U.S. Environmental Protection Agency, National Health and Envi...

  19. Comparison among Models to Estimate the Shielding Effectiveness Applied to Conductive Textiles

    Directory of Open Access Journals (Sweden)

    Alberto Lopez

    2013-01-01

    Full Text Available The purpose of this paper is to present a comparison between two models, and corresponding measurements, for calculating the shielding effectiveness of electromagnetic barriers, applied to conductive textiles. Each model treats a conductive textile as either (1) a wire mesh screen or (2) a compact material. The objective is therefore to analyse the models in order to determine which one is a better approximation for electromagnetic shielding fabrics. To provide results for the comparison, the shielding effectiveness of the sample has been measured by means of the standard ASTM D4935-99.

  20. Estimation of causal mediation effects for a dichotomous outcome in multiple-mediator models using the mediation formula.

    Science.gov (United States)

    Wang, Wei; Nelson, Suchitra; Albert, Jeffrey M

    2013-10-30

    Mediators are intermediate variables in the causal pathway between an exposure and an outcome. Mediation analysis investigates the extent to which exposure effects occur through these variables, thus revealing causal mechanisms. In this paper, we consider the estimation of the mediation effect when the outcome is binary and multiple mediators of different types exist. We give a precise definition of the total mediation effect as well as decomposed mediation effects through individual or sets of mediators using the potential outcomes framework. We formulate a model of joint distribution (probit-normal) using continuous latent variables for any binary mediators to account for correlations among multiple mediators. A mediation formula approach is proposed to estimate the total mediation effect and decomposed mediation effects based on this parametric model. Estimation of mediation effects through individual or subsets of mediators requires an assumption involving the joint distribution of multiple counterfactuals. We conduct a simulation study that demonstrates low bias of mediation effect estimators for two-mediator models with various combinations of mediator types. The results also show that the power to detect a nonzero total mediation effect increases as the correlation coefficient between two mediators increases, whereas power for individual mediation effects reaches a maximum when the mediators are uncorrelated. We illustrate our approach by applying it to a retrospective cohort study of dental caries in adolescents with low and high socioeconomic status. Sensitivity analysis is performed to assess the robustness of conclusions regarding mediation effects when the assumption of no unmeasured mediator-outcome confounders is violated. Copyright © 2013 John Wiley & Sons, Ltd.
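
    A simplified single-mediator instance of the mediation formula for a binary outcome, using the closed-form probit-normal integral; the paper's multi-mediator, mixed-type case generalizes this. All coefficients are illustrative, not estimates from the dental caries study.

    import numpy as np
    from scipy.stats import norm

    # Assumed models:  M | A=a ~ Normal(a0 + a1*a, s2)
    #                  P(Y=1 | a, m) = Phi(b0 + b1*a + b2*m)
    a0, a1, s2 = 0.0, 0.8, 1.0
    b0, b1, b2 = -0.5, 0.4, 0.7

    def mean_potential_outcome(a, a_prime):
        """E[Y(a, M(a'))] via the probit-normal mediation formula."""
        mu_m = a0 + a1 * a_prime
        return norm.cdf((b0 + b1 * a + b2 * mu_m) / np.sqrt(1.0 + b2**2 * s2))

    total    = mean_potential_outcome(1, 1) - mean_potential_outcome(0, 0)
    direct   = mean_potential_outcome(1, 0) - mean_potential_outcome(0, 0)
    indirect = mean_potential_outcome(1, 1) - mean_potential_outcome(1, 0)
    print(f"total={total:.3f}  direct={direct:.3f}  indirect={indirect:.3f}")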

  1. Estimation of Causal Mediation Effects for a Dichotomous Outcome in Multiple-Mediator Models using the Mediation Formula

    Science.gov (United States)

    Nelson, Suchitra; Albert, Jeffrey M.

    2013-01-01

    Mediators are intermediate variables in the causal pathway between an exposure and an outcome. Mediation analysis investigates the extent to which exposure effects occur through these variables, thus revealing causal mechanisms. In this paper, we consider the estimation of the mediation effect when the outcome is binary and multiple mediators of different types exist. We give a precise definition of the total mediation effect as well as decomposed mediation effects through individual or sets of mediators using the potential outcomes framework. We formulate a model of joint distribution (probit-normal) using continuous latent variables for any binary mediators to account for correlations among multiple mediators. A mediation formula approach is proposed to estimate the total mediation effect and decomposed mediation effects based on this parametric model. Estimation of mediation effects through individual or subsets of mediators requires an assumption involving the joint distribution of multiple counterfactuals. We conduct a simulation study that demonstrates low bias of mediation effect estimators for two-mediator models with various combinations of mediator types. The results also show that the power to detect a non-zero total mediation effect increases as the correlation coefficient between two mediators increases, while power for individual mediation effects reaches a maximum when the mediators are uncorrelated. We illustrate our approach by applying it to a retrospective cohort study of dental caries in adolescents with low and high socioeconomic status. Sensitivity analysis is performed to assess the robustness of conclusions regarding mediation effects when the assumption of no unmeasured mediator-outcome confounders is violated. PMID:23650048

  2. A simulation model to estimate the cost and effectiveness of alternative dialysis initiation strategies.

    Science.gov (United States)

    Lee, Chris P; Chertow, Glenn M; Zenios, Stefanos A

    2006-01-01

    Patients with end-stage renal disease (ESRD) require dialysis to maintain survival. The optimal timing of dialysis initiation in terms of cost-effectiveness has not been established. We developed a simulation model of individuals progressing towards ESRD and requiring dialysis. It can be used to analyze dialysis strategies and scenarios, and it was embedded in an optimization framework to derive improved strategies. Actual (historical) and simulated survival curves and hospitalization rates were virtually indistinguishable. The model overestimated transplantation costs (by 10%), but this was related to confounding by Medicare coverage. To assess the model's robustness, we examined several dialysis strategies while input parameters were perturbed. Under all 38 scenarios, relative rankings remained unchanged. An improved policy for a hypothetical patient was derived using an optimization algorithm. The model produces reliable results and is robust. It enables the cost-effectiveness analysis of dialysis strategies.

  3. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper

    2009-01-01

    an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between...... and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 ≥ I2, for all meta-analyses. CONCLUSION: We conclude that D2 seems a better alternative than I2 to consider model variation in any random...
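
    A small numeric sketch of the two quantities contrasted above: D2, the relative reduction in the variance of the pooled estimate when moving from a random-effects to a fixed-effect model, and the usual I2. The five trials below are invented, and tau-squared is estimated with the standard DerSimonian-Laird moment estimator.

    import numpy as np

    effects = np.array([0.10, 0.35, -0.05, 0.40, 0.20])  # trial effect sizes
    se = np.array([0.12, 0.15, 0.10, 0.20, 0.08])        # their standard errors

    w = 1.0 / se**2                                      # fixed-effect weights
    theta_f = (w * effects).sum() / w.sum()
    Q = (w * (effects - theta_f) ** 2).sum()
    df = effects.size - 1
    tau2 = max(0.0, (Q - df) / (w.sum() - (w**2).sum() / w.sum()))

    var_fixed = 1.0 / w.sum()
    var_random = 1.0 / (1.0 / (se**2 + tau2)).sum()

    D2 = (var_random - var_fixed) / var_random           # diversity
    I2 = max(0.0, (Q - df) / Q)                          # inconsistency
    print(f"tau2={tau2:.4f}  D2={D2:.2f}  I2={I2:.2f}")  # D2 >= I2 holds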

  4. Estimation and interpretation of genetic effects with epistasis using the NOIA model.

    Science.gov (United States)

    Alvarez-Castro, José M; Carlborg, Orjan; Rönnegård, Lars

    2012-01-01

    We introduce this communication with a brief outline of the historical landmarks in genetic modeling, especially concerning epistasis. Then, we present methods for the use of genetic modeling in QTL analyses. In particular, we summarize the essential expressions of the natural and orthogonal interactions (NOIA) model of genetic effects. Our motivation for reviewing that theory here is twofold. First, this review presents a digest of the expressions for the application of the NOIA model, which are often mixed with intermediate and additional formulae in the original articles. Second, we make the required theory handy for the reader to relate the genetic concepts to the particular mathematical expressions underlying them. We illustrate those relations by providing graphical interpretations and a diagram summarizing the key features for applying genetic modeling with epistasis in comprehensive QTL analyses. Finally, we briefly review some examples of the application of NOIA to real data and the way it improves the interpretability of the results.

  5. Evaluation of Simulation Models that Estimate the Effect of Dietary Strategies on Nutritional Intake: A Systematic Review.

    Science.gov (United States)

    Grieger, Jessica A; Johnson, Brittany J; Wycherley, Thomas P; Golley, Rebecca K

    2017-05-01

    Background: Dietary simulation modeling can predict dietary strategies that may improve nutritional or health outcomes. Objectives: The study aims were to undertake a systematic review of simulation studies that model dietary strategies aiming to improve nutritional intake, body weight, and related chronic disease, and to assess the methodologic and reporting quality of these models. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guided the search strategy with studies located through electronic searches [Cochrane Library, Ovid (MEDLINE and Embase), EBSCOhost (CINAHL), and Scopus]. Study findings were described and dietary modeling methodology and reporting quality were critiqued by using a set of quality criteria adapted for dietary modeling from general modeling guidelines. Results: Forty-five studies were included and categorized as modeling moderation, substitution, reformulation, or promotion dietary strategies. Moderation and reformulation strategies targeted individual nutrients or foods to theoretically improve one particular nutrient or health outcome, estimating small to modest improvements. Substituting unhealthy foods with healthier choices was estimated to be effective across a range of nutrients, including an estimated reduction in intake of saturated fatty acids, sodium, and added sugar. Promotion of fruits and vegetables predicted marginal changes in intake. Overall, the quality of the studies was moderate to high, with certain features of the quality criteria consistently reported. Conclusions: Based on the results of reviewed simulation dietary modeling studies, targeting a variety of foods rather than individual foods or nutrients theoretically appears most effective in estimating improvements in nutritional intake, particularly reducing intake of nutrients commonly consumed in excess. A combination of strategies could theoretically be used to deliver the best improvement in outcomes. Study quality was moderate to

  6. The Displacement Effect of Labour-Market Programs: Estimates from the MONASH Model

    OpenAIRE

    Peter B. Dixon; Maureen T. Rimmer

    2005-01-01

    A key question concerning labour-market programs is the extent to which they generate jobs for their target group at the expense of others. This effect is measured by displacement percentages. We describe a version of the MONASH model designed to quantify the effects of labour-market programs. Our simulation results suggest that: (1) labour-market programs can generate significant long-run increases in employment; (2) displacement percentages depend on how a labour-market program affects the...

  7. SPSS and SAS procedures for estimating indirect effects in simple mediation models.

    Science.gov (United States)

    Preacher, Kristopher J; Hayes, Andrew F

    2004-11-01

    Researchers often conduct mediation analysis in order to indirectly assess the effect of a proposed cause on some outcome through a proposed mediator. The utility of mediation analysis stems from its ability to go beyond the merely descriptive to a more functional understanding of the relationships among variables. A necessary component of mediation is a statistically and practically significant indirect effect. Although mediation hypotheses are frequently explored in psychological research, formal significance tests of indirect effects are rarely conducted. After a brief overview of mediation, we argue the importance of directly testing the significance of indirect effects and provide SPSS and SAS macros that facilitate estimation of the indirect effect with a normal theory approach and a bootstrap approach to obtaining confidence intervals, as well as the traditional approach advocated by Baron and Kenny (1986). We hope that this discussion and the macros will enhance the frequency of formal mediation tests in the psychology literature. Electronic copies of these macros may be downloaded from the Psychonomic Society's Web archive at www.psychonomic.org/archive/.
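
    A Python analogue (not the authors' SPSS/SAS macros) of the same idea: estimate the indirect effect a*b in a simple mediation model and attach a percentile bootstrap confidence interval. The data are simulated, with true a = 0.5 and b = 0.4.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 300
    x = rng.normal(size=n)
    m = 0.5 * x + rng.normal(size=n)              # mediator model (a = 0.5)
    y = 0.4 * m + 0.2 * x + rng.normal(size=n)    # outcome model  (b = 0.4)

    def indirect_effect(x, m, y):
        a = np.polyfit(x, m, 1)[0]                       # slope of m ~ x
        X = np.column_stack([np.ones_like(x), m, x])     # y ~ 1 + m + x
        b = np.linalg.lstsq(X, y, rcond=None)[0][1]
        return a * b

    boot = np.empty(2000)
    for i in range(boot.size):
        idx = rng.integers(0, n, n)
        boot[i] = indirect_effect(x[idx], m[idx], y[idx])

    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")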

  8. Instrumental variables estimation of exposure effects on a time-to-event endpoint using structural cumulative survival models.

    Science.gov (United States)

    Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J; Zucker, David M

    2017-12-01

    The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elements of randomization, either by design or by nature (e.g., random inheritance of genes). Instrumental variables estimation of exposure effects is well established for continuous outcomes and to some extent for binary outcomes. It is, however, largely lacking for time-to-event outcomes because of complications due to censoring and survivorship bias. In this article, we make a novel proposal under a class of structural cumulative survival models which parameterize time-varying effects of a point exposure directly on the scale of the survival function; these models are essentially equivalent with a semi-parametric variant of the instrumental variables additive hazards model. We propose a class of recursive instrumental variable estimators for these exposure effects, and derive their large sample properties along with inferential tools. We examine the performance of the proposed method in simulation studies and illustrate it in a Mendelian randomization study to evaluate the effect of diabetes on mortality using data from the Health and Retirement Study. We further use the proposed method to investigate potential benefit from breast cancer screening on subsequent breast cancer mortality based on the HIP-study. © 2017, The International Biometric Society.

  9. Analytical estimation of effective charges at saturation in Poisson-Boltzmann cell models

    International Nuclear Information System (INIS)

    Trizac, Emmanuel; Aubouy, Miguel; Bocquet, Lyderic

    2003-01-01

    We propose a simple approximation scheme for computing the effective charges of highly charged colloids (spherical or cylindrical with infinite length). Within non-linear Poisson-Boltzmann theory, we start from an expression for the effective charge in the infinite-dilution limit which is asymptotically valid for large salt concentrations; this result is then extended to finite colloidal concentration, approximating the salt partitioning effect which relates the salt content in the suspension to that of a dialysing reservoir. This leads to an analytical expression for the effective charge as a function of colloid volume fraction and salt concentration. These results compare favourably with the effective charges at saturation (i.e. in the limit of large bare charge) computed numerically following the standard prescription proposed by Alexander et al within the cell model

  10. Understanding the Effect of Land Cover Classification on Model Estimates of Regional Carbon Cycling in the Boreal Forest Biome

    Science.gov (United States)

    Kimball, John; Kang, Sinkyu

    2003-01-01

    The original objectives of this proposed 3-year project were to: 1) quantify the respective contributions of land cover and disturbance (i.e., wild fire) to the uncertainty associated with regional carbon source/sink estimates produced by a variety of boreal ecosystem models; 2) identify the model processes responsible for differences in simulated carbon source/sink patterns for the boreal forest; 3) validate model outputs using tower- and field-based estimates of NEP and NPP; and 4) recommend/prioritize improvements to boreal ecosystem carbon models, which will better constrain regional source/sink estimates for atmospheric CO2. These original objectives were subsequently distilled to fit within the constraints of a 1-year study. This revised study involved a regional model intercomparison over the BOREAS study region involving the Biome-BGC and TEM (A.D. McGuire, UAF) ecosystem models. The major focus of these revised activities involved quantifying the sensitivity of regional model predictions associated with land cover classification uncertainties. We also evaluated the individual and combined effects of historical fire activity, historical atmospheric CO2 concentrations, and climate change on carbon and water flux simulations within the BOREAS study region.

  11. Estimation of effective wind speed

    Science.gov (United States)

    Østergaard, K. Z.; Brath, P.; Stoustrup, J.

    2007-07-01

    The wind speed has a huge impact on the dynamic response of a wind turbine. Because of this, many control algorithms use a measure of the wind speed to increase performance, e.g. by gain scheduling and feed-forward. Unfortunately, no accurate online measurement of the effective wind speed is available, which means that it must be estimated in order to make such control methods applicable in practice. In this paper a new method is presented for the estimation of the effective wind speed. First, the rotor speed and aerodynamic torque are estimated by a combined state and input observer. These two variables, combined with the measured pitch angle, are then used to calculate the effective wind speed by an inversion of a static aerodynamic model.
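
    A toy sketch of the final inversion step only: given a rotor speed, a measured pitch angle, and an aerodynamic torque assumed to come from the observer, solve the static aerodynamic model for the effective wind speed. The Cp surface below is a smooth made-up stand-in, not any real turbine's coefficient table.

    import numpy as np
    from scipy.optimize import brentq

    rho, R = 1.225, 40.0                  # air density, rotor radius (assumed)

    def cp(lmbda, beta):
        """Hypothetical power coefficient, peaked near tip-speed ratio 8."""
        return max(0.45 * np.exp(-0.5 * ((lmbda - 8.0) / 3.0) ** 2)
                   - 0.01 * beta, 0.0)

    def aero_torque(v, omega, beta):
        lmbda = omega * R / v
        cq = cp(lmbda, beta) / lmbda      # torque coefficient Cq = Cp / lambda
        return 0.5 * rho * np.pi * R**3 * cq * v**2

    omega, beta = 1.6, 0.0                 # rad/s, degrees
    Q_est = aero_torque(9.0, omega, beta)  # pretend the observer produced this

    v_hat = brentq(lambda v: aero_torque(v, omega, beta) - Q_est, 3.0, 25.0)
    print(f"estimated effective wind speed: {v_hat:.2f} m/s")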

  12. Effect of two viscosity models on lethality estimation in sterilization of liquid canned foods.

    Science.gov (United States)

    Calderón-Alvarado, M P; Alvarado-Orozco, J M; Herrera-Hernández, E C; Martínez-González, G M; Miranda-López, R; Jiménez-Islas, H

    2016-09-01

    A numerical study of 2D natural convection in cylindrical cavities during the sterilization of liquid foods was performed. The mathematical model is based on momentum and energy balances and predicts both the heating dynamics of the slowest heating zone (SHZ) and the lethal rate achieved in homogeneous liquid canned foods. Two levels of sophistication were proposed for viscosity modelling: 1) considering an average viscosity, and 2) using an Arrhenius-type model to include the effect of temperature on viscosity. The remaining thermodynamic properties were kept constant. The governing equations were spatially discretized via orthogonal collocation (OC) with a mesh size of 25 × 25. Computational simulations were performed using proximate and thermodynamic data for carrot-orange soup, broccoli-cheddar soup, tomato puree, and cream-style corn. Flow patterns, isothermals, heating dynamics of the SHZ, and the sterilization rate achieved were compared for both viscosity models. The dynamics of the coldest point and the lethal rate F0 were approximately equal in both cases for all food fluids studied, although the second level of sophistication is closer to the physical behavior. The model's accuracy compared favorably with the reported sterilization time for cream-style corn packed at 303 × 406 can size, predicting 66 min versus an experimental time of 68 min at a retort temperature of 121.1 ℃. © The Author(s) 2016.
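
    The two viscosity treatments, in miniature: (1) one constant average value, and (2) an Arrhenius-type temperature dependence mu(T) = A * exp(Ea / (R * T)). The coefficients below are illustrative, not the paper's fitted values for any of the four foods.

    import numpy as np

    R_GAS = 8.314                 # J/(mol K)
    A, Ea = 1.0e-6, 2.0e4         # Pa*s and J/mol (hypothetical)

    def mu_arrhenius(T_celsius):
        return A * np.exp(Ea / (R_GAS * (T_celsius + 273.15)))

    T = np.array([20.0, 60.0, 100.0, 121.1])   # up to retort temperature
    mu = mu_arrhenius(T)
    mu_avg = mu.mean()                         # treatment (1): single value

    for Ti, mui in zip(T, mu):
        print(f"T = {Ti:6.1f} C   mu = {mui:.2e} Pa*s   (avg model: {mu_avg:.2e})")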

  13. Time-varying effect moderation using the structural nested mean model: estimation using inverse-weighted regression with residuals

    Science.gov (United States)

    Almirall, Daniel; Griffin, Beth Ann; McCaffrey, Daniel F.; Ramchand, Rajeev; Yuen, Robert A.; Murphy, Susan A.

    2014-01-01

    This article considers the problem of examining time-varying causal effect moderation using observational, longitudinal data in which treatment, candidate moderators, and possible confounders are time varying. The structural nested mean model (SNMM) is used to specify the moderated time-varying causal effects of interest in a conditional mean model for a continuous response given time-varying treatments and moderators. We present an easy-to-use estimator of the SNMM that combines an existing regression-with-residuals (RR) approach with an inverse-probability-of-treatment weighting (IPTW) strategy. The RR approach has been shown to identify the moderated time-varying causal effects if the time-varying moderators are also the sole time-varying confounders. The proposed IPTW+RR approach provides estimators of the moderated time-varying causal effects in the SNMM in the presence of an additional, auxiliary set of known and measured time-varying confounders. We use a small simulation experiment to compare IPTW+RR versus the traditional regression approach and to compare small and large sample properties of asymptotic versus bootstrap estimators of the standard errors for the IPTW+RR approach. This article clarifies the distinction between time-varying moderators and time-varying confounders. We illustrate the methodology in a case study to assess if time-varying substance use moderates treatment effects on future substance use. PMID:23873437
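
    A minimal sketch of just the IPTW ingredient at a single time point: stabilized inverse-probability-of-treatment weights from a logistic propensity model. The confounder, treatment mechanism, and coefficients are hypothetical; the full estimator applies such weights per time interval together with the regression-with-residuals step.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 500
    conf = rng.normal(size=n)                        # measured confounder
    a = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.3 + 0.8 * conf))))

    X = sm.add_constant(conf)
    ps = sm.Logit(a, X).fit(disp=0).predict(X)       # estimated propensities

    p_marg = a.mean()                                # stabilizing numerator
    w = np.where(a == 1, p_marg / ps, (1.0 - p_marg) / (1.0 - ps))
    print(f"weights: mean = {w.mean():.3f}, max = {w.max():.2f}")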

  14. Toward Quantitative Estimation of the Effect of Aerosol Particles in the Global Climate Model and Cloud Resolving Model

    Science.gov (United States)

    Eskes, H.; Boersma, F.; Dirksen, R.; van der A, R.; Veefkind, P.; Levelt, P.; Brinksma, E.; van Roozendael, M.; de Smedt, I.; Gleason, J.

    2005-05-01

    Based on measurements of GOME on ESA ERS-2, SCIAMACHY on ESA-ENVISAT, and Ozone Monitoring Instrument (OMI) on the NASA EOS-Aura satellite there is now a unique 11-year dataset of global tropospheric nitrogen dioxide measurements from space. The retrieval approach consists of two steps. The first step is an application of the DOAS (Differential Optical Absorption Spectroscopy) approach which delivers the total absorption optical thickness along the light path (the slant column). For GOME and SCIAMACHY this is based on the DOAS implementation developed by BIRA/IASB. For OMI the DOAS implementation was developed in a collaboration between KNMI and NASA. The second retrieval step, developed at KNMI, estimates the tropospheric vertical column of NO2 based on the slant column, cloud fraction and cloud top height retrieval, stratospheric column estimates derived from a data assimilation approach and vertical profile estimates from space-time collocated profiles from the TM chemistry-transport model. The second step was applied with only minor modifications to all three instruments to generate a uniform 11-year data set. In our talk we will address the following topics: - A short summary of the retrieval approach and results - Comparisons with other retrievals - Comparisons with global and regional-scale models - OMI-SCIAMACHY and SCIAMACHY-GOME comparisons - Validation with independent measurements - Trend studies of NO2 for the past 11 years

  15. Model estimation of land-use effects on water levels of northern Prairie wetlands

    Science.gov (United States)

    Voldseth, R.A.; Johnson, W.C.; Gilmanov, T.; Guntenspergen, G.R.; Millett, B.V.

    2007-01-01

    Wetlands of the Prairie Pothole Region exist in a matrix of grassland dominated by intensive pastoral and cultivation agriculture. Recent conservation management has emphasized the conversion of cultivated farmland and degraded pastures to intact grassland to improve upland nesting habitat. The consequences of changes in land-use cover that alter watershed processes have not been evaluated relative to their effect on the water budgets and vegetation dynamics of associated wetlands. We simulated the effect of upland agricultural practices on the water budget and vegetation of a semipermanent prairie wetland by modifying a previously published mathematical model (WETSIM). Watershed cover/land-use practices were categorized as unmanaged grassland (native grass, smooth brome), managed grassland (moderately heavily grazed, prescribed burned), cultivated crops (row crop, small grain), and alfalfa hayland. Model simulations showed that differing rates of evapotranspiration and runoff associated with different upland plant-cover categories in the surrounding catchment produced differences in wetland water budgets and linked ecological dynamics. Wetland water levels were highest and vegetation the most dynamic under the managed-grassland simulations, while water levels were the lowest and vegetation the least dynamic under the unmanaged-grassland simulations. The modeling results suggest that unmanaged grassland, often planted for waterfowl nesting, may produce the least favorable wetland conditions for birds, especially in drier regions of the Prairie Pothole Region. These results stand as hypotheses that urgently need to be verified with empirical data.

  16. The Effect of Nonzero Autocorrelation Coefficients on the Distributions of Durbin-Watson Test Estimator: Three Autoregressive Models

    Directory of Open Access Journals (Sweden)

    Mei-Yu LEE

    2014-11-01

    Full Text Available This paper investigates the effect of nonzero autocorrelation coefficients on the sampling distributions of the Durbin-Watson test estimator in three time-series models, each with a different variance-covariance matrix assumption. We show that the expected values and variances of the Durbin-Watson test estimator differ only slightly, but that the skewness and kurtosis coefficients differ considerably among the three models. The shapes of the four coefficients are similar between the Durbin-Watson model and our benchmark model, but not the same as those of the autoregressive model cut by one lagged period. Second, the large-sample case shows that the three models have the same expected values; however, the autoregressive model cut by one lagged period exhibits different shapes of the variance, skewness and kurtosis coefficients from the other two models. This implies that large samples lead to the same expected value, 2(1 − ρ0), whatever variance-covariance matrix of the errors is assumed. Finally, compared with the two-sample cases, the shape of each coefficient is almost the same; moreover, the autocorrelation coefficients are negatively related to expected values, have an inverted-U relation with variances, a cubic relation with skewness coefficients, and a U-shaped relation with kurtosis coefficients.
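
    For reference, the Durbin-Watson statistic itself, checked against the large-sample expectation 2(1 − ρ0) noted above on simulated AR(1) errors; the sample size and ρ0 are arbitrary.

    import numpy as np

    def durbin_watson(e):
        return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

    rng = np.random.default_rng(3)
    n, rho0 = 5000, 0.5
    e = np.empty(n)
    e[0] = rng.normal()
    for t in range(1, n):                     # AR(1) errors, coefficient rho0
        e[t] = rho0 * e[t - 1] + rng.normal()

    print(durbin_watson(e), 2 * (1 - rho0))   # both should be close to 1.0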

  17. Numerical estimation of ultrasonic production of hydrogen: Effect of ideal and real gas based models.

    Science.gov (United States)

    Kerboua, Kaouther; Hamdaoui, Oualid

    2018-01-01

    Based on two different assumptions regarding the equation describing the state of the gases within an acoustic cavitation bubble, this paper studies the sonochemical production of hydrogen, through two numerical models treating the evolution of a chemical mechanism within a single bubble, saturated with oxygen, during an oscillation cycle in water. The first approach is built on an ideal gas model, while the second is founded on the Van der Waals equation, and the main objective was to analyze the effect of the adopted equation of state on the ultrasonic hydrogen production retrieved by simulation under various operating conditions. The obtained results show that even when the second approach gives higher values of temperature, pressure and total free-radical production, the yield of hydrogen does not follow the same trend. When comparing the results produced by the two models regarding hydrogen production, it was noticed that the ratio of the molar amounts of hydrogen is frequency- and acoustic-amplitude-dependent. The use of the Van der Waals equation leads to higher quantities of hydrogen under low acoustic amplitudes and high frequencies, while the ideal-gas-law-based model gains the upper hand regarding hydrogen production at low frequencies and high acoustic amplitudes. Copyright © 2017 Elsevier B.V. All rights reserved.
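
    The two equations of state side by side, for a single illustrative bubble state: ideal gas, P = nRT/V, versus Van der Waals, P = nRT/(V − nb) − a·n²/V². The Van der Waals constants are literature values for O2; the bubble size, temperature, and gas content are invented.

    import math

    R = 8.314                        # J/(mol K)
    a_O2, b_O2 = 0.1382, 3.186e-5    # Pa m^6/mol^2 and m^3/mol, for O2

    def p_ideal(n, T, V):
        return n * R * T / V

    def p_vdw(n, T, V):
        return n * R * T / (V - n * b_O2) - a_O2 * n**2 / V**2

    n, T = 1.0e-14, 3000.0                    # mol, K (illustrative)
    V = 4.0 / 3.0 * math.pi * (0.5e-6) ** 3   # 0.5-micron-radius bubble

    print(f"ideal: {p_ideal(n, T, V):.3e} Pa   Van der Waals: {p_vdw(n, T, V):.3e} Pa")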

  18. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    Cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of its software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development, based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model's performance is evaluated by comparing it to the performance of COCOMO II, linear regression, and K-nearest neighbor prediction models on the same data set.
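
    A bare-bones analogy (k-nearest-neighbour) effort estimator of the general kind evaluated in the paper: predict effort for a new project as the mean effort of its k most similar completed projects in normalized feature space. The feature set and project data are entirely made up.

    import numpy as np

    # columns: KSLOC, team experience (years), requirements volatility (1-5)
    projects = np.array([[25.0, 4.0, 2.0], [120.0, 6.0, 4.0], [60.0, 3.0, 3.0],
                         [10.0, 8.0, 1.0], [200.0, 5.0, 5.0]])
    efforts = np.array([110.0, 640.0, 330.0, 35.0, 1250.0])   # person-months

    def knn_estimate(new_project, k=2):
        lo, hi = projects.min(axis=0), projects.max(axis=0)
        scale = lambda p: (p - lo) / (hi - lo)     # min-max normalization
        d = np.linalg.norm(scale(projects) - scale(np.asarray(new_project)),
                           axis=1)
        return efforts[np.argsort(d)[:k]].mean()

    print(f"estimated effort: {knn_estimate([80.0, 5.0, 3.0]):.0f} person-months")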

  19. Estimated effect of alcohol pricing policies on health and health economic outcomes in England: an epidemiological model.

    Science.gov (United States)

    Purshouse, Robin C; Meier, Petra S; Brennan, Alan; Taylor, Karl B; Rafia, Rachid

    2010-04-17

    Although pricing policies for alcohol are known to be effective, little is known about how specific interventions affect health-care costs and health-related quality-of-life outcomes for different types of drinkers. We assessed effects of alcohol pricing and promotion policy options in various population subgroups. We built an epidemiological mathematical model to appraise 18 pricing policies, with English data from the Expenditure and Food Survey and the General Household Survey for average and peak alcohol consumption. We used results from econometric analyses (256 own-price and cross-price elasticity estimates) to estimate effects of policies on alcohol consumption. We applied risk functions from systematic reviews and meta-analyses, or derived from attributable fractions, to model the effect of consumption changes on mortality and disease prevalence for 47 illnesses. General price increases were effective for reduction of consumption, health-care costs, and health-related quality-of-life losses in all population subgroups. Minimum pricing policies can maintain this level of effectiveness for harmful drinkers while reducing effects on consumer spending for moderate drinkers. Total bans of supermarket and off-license discounting are effective but banning only large discounts has little effect. Young adult drinkers aged 18-24 years are especially affected by policies that raise prices in pubs and bars. Minimum pricing policies and discounting restrictions might warrant further consideration because both strategies are estimated to reduce alcohol consumption, and related health harms and costs, with drinker spending increases targeting those who incur most harm. Policy Research Programme, UK Department of Health. Copyright 2010 Elsevier Ltd. All rights reserved.

  20. A Fuzzy mathematical model to estimate the effects of global warming on the vitality of Laelia purpurata orchids.

    Science.gov (United States)

    Putti, Fernando Ferrari; Filho, Luis Roberto Almeida Gabriel; Gabriel, Camila Pires Cremasco; Neto, Alfredo Bonini; Bonini, Carolina Dos Santos Batista; Rodrigues Dos Reis, André

    2017-06-01

    This study aimed to develop a fuzzy mathematical model to estimate the impacts of global warming on the vitality of Laelia purpurata growing under different Brazilian environmental conditions. To develop the mathematical model, the parameters temperature, humidity and shading were considered as the intrinsic factors determining plant vitality. The fuzzy model results could accurately predict the optimal conditions for cultivation of Laelia purpurata at several sites in Brazil. Based on the fuzzy model results, we found that higher temperatures and a lack of proper shading can reduce the vitality of orchids. The fuzzy mathematical model could precisely detect the effect of higher temperatures causing damage to plant vitality as a consequence of global warming. Copyright © 2017 Elsevier Inc. All rights reserved.
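
    A minimal Mamdani-style fragment of this kind of fuzzy model: triangular memberships for temperature and shading feed two rules for vitality. Breakpoints, rules, and the output scale are invented for illustration, not the paper's calibrated system (which also includes humidity).

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def vitality(temp_c, shade_pct):
        temp_ok  = tri(temp_c, 15.0, 22.0, 29.0)      # "comfortable"
        temp_hot = tri(temp_c, 26.0, 35.0, 44.0)      # "too hot"
        shade_ok = tri(shade_pct, 30.0, 55.0, 80.0)   # "proper shading"
        high = min(temp_ok, shade_ok)                 # rule 1 -> high vitality
        low = max(temp_hot, 1.0 - shade_ok)           # rule 2 -> low vitality
        # crude weighted defuzzification onto a 0-10 vitality scale
        return (9.0 * high + 1.0 * low) / (high + low + 1e-9)

    print(f"22 C, 55% shade -> vitality {vitality(22.0, 55.0):.1f}/10")
    print(f"36 C, 10% shade -> vitality {vitality(36.0, 10.0):.1f}/10")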

  1. Effects of censoring on parameter estimates and power in genetic modeling

    NARCIS (Netherlands)

    Derks, Eske M.; Dolan, Conor V.; Boomsma, Dorret I.

    2004-01-01

    Genetic and environmental influences on variance in phenotypic traits may be estimated with normal theory Maximum Likelihood (ML). However, when the assumption of multivariate normality is not met, this method may result in biased parameter estimates and incorrect likelihood ratio tests. We

  2. Effects of censoring on parameter estimates and power in genetic modeling.

    NARCIS (Netherlands)

    Derks, E.M.; Dolan, C.V.; Boomsma, D.I.

    2004-01-01

    Genetic and environmental influences on variance in phenotypic traits may be estimated with normal theory Maximum Likelihood (ML). However, when the assumption of multivariate normality is not met, this method may result in biased parameter estimates and incorrect likelihood ratio tests. We

  3. Estimating Divergence Time and Ancestral Effective Population Size of Bornean and Sumatran Orangutan Subspecies Using a Coalescent Hidden Markov Model

    DEFF Research Database (Denmark)

    Mailund, Thomas; Dutheil, Julien; Hobolth, Asger

    2011-01-01

    event has occurred to split them apart. The size of these segments of constant divergence depends on the recombination rate, but also on the speciation time, the effective population size of the ancestral population, as well as demographic effects and selection. Thus, inference of these parameters may......, and the ancestral effective population size. The model is efficient enough to allow inference on whole-genome data sets. We first investigate the power and consistency of the model with coalescent simulations and then apply it to the whole-genome sequences of the two orangutan sub-species, Bornean (P. p. pygmaeus......) and Sumatran (P. p. abelii) orangutans from the Orangutan Genome Project. We estimate the speciation time between the two sub-species to be thousand years ago and the effective population size of the ancestral orangutan species to be , consistent with recent results based on smaller data sets. We also report...

  4. Estimating the Cost-Effectiveness of HIV Prevention Programmes in Vietnam, 2006-2010: A Modelling Study.

    Directory of Open Access Journals (Sweden)

    Quang Duy Pham

    Full Text Available Vietnam has been largely reliant on international support in its HIV response. Over 2006-2010, a total of US$480 million was invested in its HIV programmes, more than 70% of which came from international sources. This study investigates the potential epidemiological impacts of these programmes and their cost-effectiveness. We conducted a data synthesis of HIV programming, spending, epidemiological, and clinical outcomes. Counterfactual scenarios were defined based on assumed programme coverage and behaviours had the programmes not been implemented. An epidemiological model, calibrated to reflect the actual epidemiological trends, was used to estimate plausible ranges of programme impacts. The model was then used to estimate the costs per averted infection, death, and disability adjusted life-year (DALY). Based on observed prevalence reductions amongst most population groups, and plausible counterfactuals, modelling suggested that antiretroviral therapy (ART) and prevention programmes over 2006-2010 have averted an estimated 50,600 [95% uncertainty bound: 36,300-68,900] new infections and 42,600 [36,100-54,100] deaths, resulting in 401,600 [312,200-496,300] fewer DALYs across all population groups. HIV programmes in Vietnam have cost an estimated US$1,972 [1,447-2,747], US$2,344 [1,843-2,765], and US$248 [201-319] for each averted infection, death, and DALY, respectively. Our evaluation suggests that HIV programmes in Vietnam have most likely had benefits that are cost-effective. ART and direct HIV prevention were the most cost-effective interventions in reducing HIV disease burden.

  5. Estimating the Cost-Effectiveness of HIV Prevention Programmes in Vietnam, 2006-2010: A Modelling Study

    Science.gov (United States)

    Pham, Quang Duy; Wilson, David P.; Kerr, Cliff C.; Shattock, Andrew J.; Do, Hoa Mai; Duong, Anh Thuy; Nguyen, Long Thanh; Zhang, Lei

    2015-01-01

    Introduction Vietnam has been largely reliant on international support in its HIV response. Over 2006-2010, a total of US$480 million was invested in its HIV programmes, more than 70% of which came from international sources. This study investigates the potential epidemiological impacts of these programmes and their cost-effectiveness. Methods We conducted a data synthesis of HIV programming, spending, epidemiological, and clinical outcomes. Counterfactual scenarios were defined based on assumed programme coverage and behaviours had the programmes not been implemented. An epidemiological model, calibrated to reflect the actual epidemiological trends, was used to estimate plausible ranges of programme impacts. The model was then used to estimate the costs per averted infection, death, and disability adjusted life-year (DALY). Results Based on observed prevalence reductions amongst most population groups, and plausible counterfactuals, modelling suggested that antiretroviral therapy (ART) and prevention programmes over 2006-2010 have averted an estimated 50,600 [95% uncertainty bound: 36,300–68,900] new infections and 42,600 [36,100–54,100] deaths, resulting in 401,600 [312,200–496,300] fewer DALYs across all population groups. HIV programmes in Vietnam have cost an estimated US$1,972 [1,447–2,747], US$2,344 [1,843–2,765], and US$248 [201–319] for each averted infection, death, and DALY, respectively. Conclusions Our evaluation suggests that HIV programmes in Vietnam have most likely had benefits that are cost-effective. ART and direct HIV prevention were the most cost-effective interventions in reducing HIV disease burden. PMID:26196290

  6. Software Cost-Estimation Model

    Science.gov (United States)

    Tausworthe, R. C.

    1985-01-01

    Software Cost Estimation Model SOFTCOST provides automated resource and schedule model for software development. Combines several cost models found in open literature into one comprehensive set of algorithms. Compensates for nearly fifty implementation factors relative to size of task, inherited baseline, organizational and system environment and difficulty of task.

  7. Effects of shipping on marine acoustic habitats in Canadian Arctic estimated via probabilistic modeling and mapping.

    Science.gov (United States)

    Aulanier, Florian; Simard, Yvan; Roy, Nathalie; Gervaise, Cédric; Bandet, Marion

    2017-12-15

    Canadian Arctic and Subarctic regions experience a rapid decrease of sea ice accompanied by increasing shipping traffic. The resulting time-space changes in shipping noise are studied for four key regions of this pristine environment, for 2013 traffic conditions and a hypothetical tenfold traffic increase. A probabilistic modeling and mapping framework, called Ramdam, which integrates the intrinsic variability and uncertainties of shipping noise and its effects on marine habitats, is developed and applied. A substantial transformation of soundscapes is observed in areas where shipping noise changes from a present occasional-transient contributor to a dominant noise source. Examination of impacts on low-frequency mammals within ecologically and biologically significant areas reveals that shipping noise has the potential to trigger behavioral responses and masking in the future, although no risk of temporary or permanent hearing threshold shifts is noted. Such probabilistic modeling and mapping is strategic in marine spatial planning for this emerging noise issue. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  8. Impact of Precipitating Ice Hydrometeors on Longwave Radiative Effect Estimated by a Global Cloud-System Resolving Model

    Science.gov (United States)

    Chen, Ying-Wen; Seiki, Tatsuya; Kodama, Chihiro; Satoh, Masaki; Noda, Akira T.

    2018-02-01

    Satellite observation and general circulation model (GCM) studies suggest that precipitating ice makes nonnegligible contributions to the radiation balance of the Earth. However, in most GCMs, precipitating ice is diagnosed and its radiative effects are not taken into account. Here we examine the longwave radiative impact of precipitating ice using a global nonhydrostatic atmospheric model with a double-moment cloud microphysics scheme. An off-line radiation model is employed to determine cloud radiative effects according to the amount and altitude of each type of ice hydrometeor. Results show that the snow radiative effect reaches 2 W m-2 in the tropics, which is about half the value estimated by previous studies. This effect is strongly dependent on the vertical separation of ice categories and is partially generated by differences in terminal velocities, which are not represented in GCMs with diagnostic precipitating ice. Results from sensitivity experiments that artificially change the categories and altitudes of precipitating ice show that the simulated longwave heating profile and longwave radiation field are sensitive to the treatment of precipitating ice in models. This study emphasizes the importance of incorporating appropriate treatments for the radiative effects of precipitating ice in cloud and radiation schemes in GCMs in order to capture the cloud radiative effects of upper level clouds.

  9. Spatial scale effects on model parameter estimation and predictive uncertainty in ungauged basins

    CSIR Research Space (South Africa)

    Hughes, DA

    2013-06-01

Full Text Available The choice of model structure has been a major topic of discussion throughout the history of hydrological modelling and it is quite rare to find consensus amongst the broad community of model developers and users. With respect to conceptual type models...

  10. Using Hedonic price model to estimate effects of flood on real ...

    African Journals Online (AJOL)

Distances were measured in metres from the centroid of the building to the edge of the river and to roads using the Global Positioning System. The result of the estimation shows that properties located within the floodplain are lower in value by an average of N493,408, which represents a 6.8 percent reduction in sales price for an ...
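
    A minimal sketch of the semi-log hedonic regression such a study typically estimates, here on simulated sales so the floodplain dummy reads directly as a proportional price discount. All variable names, coefficients, and data below are illustrative assumptions, not the paper's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "floodplain": rng.integers(0, 2, size=n),        # 1 = inside floodplain
    "dist_river_m": rng.uniform(10, 2000, size=n),   # metres to river edge
    "floor_area": rng.uniform(50, 400, size=n),      # square metres
})
# Simulated sale prices with a ~6.8% floodplain discount built in:
df["price"] = np.exp(12.0 + 0.002 * df["floor_area"]
                     - 0.068 * df["floodplain"]
                     + 0.00005 * df["dist_river_m"]
                     + rng.normal(0, 0.1, n))

# Semi-log hedonic specification: coefficients approximate proportional
# effects on price, so the floodplain dummy reads as a percent discount.
model = smf.ols("np.log(price) ~ floodplain + dist_river_m + floor_area",
                data=df).fit()
print(model.params)
```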

  11. A Development of Domestic Food Chain Model Data for Chronic Effect Estimation of Off-site Consequence Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Han, Seok-Jung; KEUM, Dong-Kwon; Jang, Seung-Cheol [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

The FCM (food chain model) describes the complex transport of radioactive materials through the biokinetic system of a contaminated environment. Estimation of chronic health effects is a key part of level 3 PSA (Probabilistic Safety Assessment), and it depends on the FCM-based estimate of ingestion of contaminated foods. The ingestion habits and agricultural production of a local region differ from the general worldwide features, which is the reason to develop domestic FCM data for level 3 PSA. However, generating such specific FCM data is a complex process subject to a large degree of uncertainty because of the inherent biokinetic models. As a preliminary step, the present study focuses on developing an infrastructure for generating specific FCM data. In this process, the features required of domestic FCM data were investigated, and based on the insights obtained, specific domestic FCM data were developed to estimate the chronic health effects in off-site consequence analysis. One insight obtained is that the domestic FCM data are roughly 20 times higher than the MACCS2 default data. Based on this observation, it is clear that the site-specific chronic health effects of a domestic plant should be considered in off-site consequence analysis.

  12. A Development of Domestic Food Chain Model Data for Chronic Effect Estimation of Off-site Consequence Analysis

    International Nuclear Information System (INIS)

    Han, Seok-Jung; KEUM, Dong-Kwon; Jang, Seung-Cheol

    2015-01-01

The FCM (food chain model) describes the complex transport of radioactive materials through the biokinetic system of a contaminated environment. Estimation of chronic health effects is a key part of level 3 PSA (Probabilistic Safety Assessment), and it depends on the FCM-based estimate of ingestion of contaminated foods. The ingestion habits and agricultural production of a local region differ from the general worldwide features, which is the reason to develop domestic FCM data for level 3 PSA. However, generating such specific FCM data is a complex process subject to a large degree of uncertainty because of the inherent biokinetic models. As a preliminary step, the present study focuses on developing an infrastructure for generating specific FCM data. In this process, the features required of domestic FCM data were investigated, and based on the insights obtained, specific domestic FCM data were developed to estimate the chronic health effects in off-site consequence analysis. One insight obtained is that the domestic FCM data are roughly 20 times higher than the MACCS2 default data. Based on this observation, it is clear that the site-specific chronic health effects of a domestic plant should be considered in off-site consequence analysis.

  13. Estimating the effects of wetland conservation practices in croplands: Approaches for modeling in CEAP–Cropland Assessment

    Science.gov (United States)

    De Steven, Diane; Mushet, David

    2018-01-01

    Quantifying the current and potential benefits of conservation practices can be a valuable tool for encouraging greater practice adoption on agricultural lands. A goal of the CEAP-Cropland Assessment is to estimate the environmental effects of conservation practices that reduce losses (exports) of soil, nutrients, and pesticides from farmlands to streams and rivers. The assessment approach combines empirical data on reported cropland practices with simulation modeling that compares field-level exports for scenarios “with practices” and “without practices.” Conserved, restored, and created wetlands collectively represent conservation practices that can influence sediment and nutrient exports from croplands. However, modeling the role of wetlands within croplands presents some challenges, including the potential for negative impacts of sediment and nutrient inputs on wetland functions. This Science Note outlines some preliminary solutions for incorporating wetlands and wetland practices into the CEAP-Cropland modeling framework. First, modeling the effects of wetland practices requires identifying wetland hydrogeomorphic type and accounting for the condition of both the wetland and an adjacent upland zone. Second, modeling is facilitated by classifying wetland-related practices into two functional categories (wetland and upland buffer). Third, simulating practice effects requires alternative field configurations to account for hydrological differences among wetland types. These ideas are illustrated for two contrasting wetland types (riparian and depressional).

  14. Matrix Population Model for Estimating Effects from Time-Varying Aquatic Exposures: Technical Documentation

    Science.gov (United States)

    The Office of Pesticide Programs models daily aquatic pesticide exposure values for 30 years in its risk assessments. However, only a fraction of that information is typically used in these assessments. The population model employed herein is a deterministic, density-dependent pe...
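
    As a rough illustration of the technique named in the title, the sketch below projects a small stage-structured population with a logistic density-dependence term on fecundity. The stage classes, vital rates, and carrying capacity are illustrative assumptions; the documented model's actual structure is not reproduced here.

```python
import numpy as np

fecundity = np.array([0.0, 2.0, 6.0])   # offspring per individual per stage (assumed)
survival = np.array([0.3, 0.6])         # stage 1->2 and 2->3 transition rates (assumed)
K = 500.0                               # carrying capacity for density dependence (assumed)

def project(n, years=30):
    """Project a 3-stage population with logistic scaling of fecundity."""
    for _ in range(years):
        scale = max(0.0, 1.0 - n.sum() / K)
        A = np.array([
            [fecundity[0] * scale, fecundity[1] * scale, fecundity[2] * scale],
            [survival[0],          0.0,                  0.0],
            [0.0,                  survival[1],          0.0],
        ])
        n = A @ n
    return n

print(project(np.array([100.0, 50.0, 20.0])).round(1))
```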

  15. A kinematic model to estimate the effective dose of radioactive isotopes in the human body for radiological protection

    Science.gov (United States)

    Sasaki, S.; Yamada, T.

    2013-12-01

The great earthquake struck the northeast area of Japan on March 11, 2011. The electrical systems controlling the Fukushima Daiichi nuclear power station were completely destroyed by the tsunamis that followed. An amount of radioactive substances leaked from the damaged reactor containment vessels and was diffused in the vicinity of the station. Radiological internal exposure has become a serious social issue both in Japan and all over the world. The present study provides an easily understandable, kinematics-based model to estimate the effective dose of radioactive substances in a human body by simplifying the complicated mechanism of metabolism. The International Commission on Radiological Protection (ICRP) has developed an exact model, which is well known as the standard method to calculate the effective dose for radiological protection. However, because that method follows the actual mechanism of metabolism in human bodies so closely, it is rather difficult for people outside radiology to grasp the whole picture of the movement and influence of radioactive substances in a human body. Therefore, in the present paper we propose a newly derived and easily understandable model to estimate the effective dose. The present method is very similar to the traditional and conventional hydrological tank model: the ingestion flux of radioactive substances corresponds to rain intensity, and the storage of radioactive substances to the water storage in a basin in runoff analysis. The key of this method is to estimate the energy radiated by the nuclear disintegration of an atom by using E. Fermi's classical theory of beta decay and special relativity for various kinds of radioactive atoms. The only parameters used in this study are the physical half-life and the biological half-life; there are no tuning parameters or coefficients to adjust the theoretical results to the ICRP values.
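
    The tank-model analogy lends itself to a very small numerical sketch: intake plays the role of rainfall, body burden the role of storage, and the burden drains at an effective rate combining the physical and biological decay constants (lambda_eff = lambda_phys + lambda_bio). The nuclide, half-lives, and intake schedule below are illustrative assumptions.

```python
import numpy as np

# Effective decay rate: physical and biological elimination act in parallel.
T_phys = 30.17 * 365.25          # physical half-life [days] (Cs-137, assumed nuclide)
T_bio = 70.0                     # biological half-life [days] (assumed)
lam_eff = np.log(2) / T_phys + np.log(2) / T_bio

dt = 1.0                                     # time step [days]
intake = np.zeros(1000)
intake[:100] = 100.0                         # assumed ingestion of 100 Bq/day for 100 days

burden, history = 0.0, []
for q_in in intake:                          # "tank": inflow minus first-order outflow
    burden += (q_in - lam_eff * burden) * dt
    history.append(burden)

print(f"peak body burden: {max(history):.0f} Bq")
```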

  16. On the representation of aerosol activation and its influence on model-derived estimates of the aerosol indirect effect

    Science.gov (United States)

    Rothenberg, Daniel; Avramov, Alexander; Wang, Chien

    2018-06-01

Interactions between aerosol particles and clouds contribute a great deal of uncertainty to the scientific community's understanding of anthropogenic climate forcing. Aerosol particles serve as the nucleation sites for cloud droplets, establishing a direct linkage between anthropogenic particulate emissions and clouds in the climate system. To resolve this linkage, the community has developed parameterizations of aerosol activation which can be used in global climate models to interactively predict cloud droplet number concentrations (CDNCs). However, different activation schemes can exhibit different sensitivities to aerosol perturbations in different meteorological or pollution regimes. To assess the impact these different sensitivities have on climate forcing, we have coupled three different core activation schemes and variants with CESM-MARC (the two-Moment, Multi-Modal, Mixing-state-resolving Aerosol model for Research of Climate (MARC) coupled with the National Center for Atmospheric Research's (NCAR) Community Earth System Model (CESM; version 1.2)). Although the model produces a reasonable present-day CDNC climatology when compared with observations regardless of the scheme used, ΔCDNCs between the present and preindustrial era regionally increase by over 100% in zonal mean when using the most sensitive parameterization. These differences in activation sensitivity may lead to a different evolution of the model meteorology, and ultimately to a spread of over 0.8 W m-2 in the global average shortwave indirect effect (AIE) diagnosed from the model, a range as large as the inter-model spread from the AeroCom intercomparison. Model-derived AIE strongly scales with the simulated preindustrial CDNC burden, and those models with the greatest preindustrial CDNC tend to have the smallest AIE, regardless of their ΔCDNC. This suggests that present-day evaluations of aerosol-climate models may not provide useful constraints on the magnitude of the AIE.

  17. Amplitude Models for Discrimination and Yield Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, William Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-01

    This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.

  18. Estimating network effects in China's mobile telecommunications

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

A model is proposed along with empirical investigation to prove the existence of network effects in China's mobile telecommunications market. Furthermore, network effects in China's mobile telecommunications are estimated with a dynamic model. The structural parameters are identified from regression coefficients, and the results are analyzed and compared with other studies in the literature. Data and estimation issues are also discussed. Conclusions are drawn that network effects are significant in China's mobile telecommunications market, and that ignoring network effects leads to bad policy making.

  19. The effects of spatial heterogeneity and subsurface lateral transfer on evapotranspiration estimates in large scale Earth system models

    Science.gov (United States)

    Rouholahnejad, E.; Fan, Y.; Kirchner, J. W.; Miralles, D. G.

    2017-12-01

    Most Earth system models (ESM) average over considerable sub-grid heterogeneity in land surface properties, and overlook subsurface lateral flow. This could potentially bias evapotranspiration (ET) estimates and has implications for future temperature predictions, since overestimations in ET imply greater latent heat fluxes and potential underestimation of dry and warm conditions in the context of climate change. Here we quantify the bias in evaporation estimates that may arise from the fact that ESMs average over considerable heterogeneity in surface properties, and also neglect lateral transfer of water across the heterogeneous landscapes at global scale. We use a Budyko framework to express ET as a function of P and PET to derive simple sub-grid closure relations that quantify how spatial heterogeneity and lateral transfer could affect average ET as seen from the atmosphere. We show that averaging over sub-grid heterogeneity in P and PET, as typical Earth system models do, leads to overestimation of average ET. Our analysis at global scale shows that the effects of sub-grid heterogeneity will be most pronounced in steep mountainous areas where the topographic gradient is high and where P is inversely correlated with PET across the landscape. In addition, we use the Total Water Storage (TWS) anomaly estimates from the Gravity Recovery and Climate Experiment (GRACE) remote sensing product and assimilate it into the Global Land Evaporation Amsterdam Model (GLEAM) to correct for existing free drainage lower boundary condition in GLEAM and quantify whether, and how much, accounting for changes in terrestrial storage can improve the simulation of soil moisture and regional ET fluxes at global scale.

  20. Simple algorithm to estimate mean-field effects from minor differential permeability curves based on the Preisach model

    International Nuclear Information System (INIS)

    Perevertov, Oleksiy

    2003-01-01

    The classical Preisach model (PM) of magnetic hysteresis requires that any minor differential permeability curve lies under minor curves with larger field amplitude. Measurements of ferromagnetic materials show that very often this is not true. By applying the classical PM formalism to measured minor curves one can discover that it leads to an oval-shaped region on each half of the Preisach plane where the calculations produce negative values in the Preisach function. Introducing an effective field, which differs from the applied one by a mean-field term proportional to the magnetization, usually solves this problem. Complex techniques exist to estimate the minimum necessary proportionality constant (the moving parameter). In this paper we propose a simpler way to estimate the mean-field effects for use in nondestructive testing, which is based on experience from the measurements of industrial steels. A new parameter (parameter of shift) is introduced, which monitors the mean-field effects. The relation between the shift parameter and the moving one was studied for a number of steels. From preliminary experiments no correlation was found between the shift parameter and the classical magnetic ones such as the coercive field, maximum differential permeability and remanent magnetization
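
    A numeric sketch of the mean-field ("moving") correction the paper builds on: the effective field is the applied field plus a term proportional to magnetization, H_eff = H + alpha * M. The moving parameter alpha and the toy M(H) curve below are illustrative assumptions.

```python
import numpy as np

alpha = 2.0e-4                           # assumed moving parameter (dimensionless)
H = np.linspace(-1000.0, 1000.0, 9)      # applied field samples [A/m]
M = 8.0e5 * np.tanh(H / 400.0)           # toy magnetization curve [A/m]

H_eff = H + alpha * M                    # effective field used on the Preisach plane
for h, he in zip(H, H_eff):
    print(f"H = {h:8.1f} A/m  ->  H_eff = {he:8.1f} A/m")
```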

  1. Estimation of effectiveness of three methods of feral cat population control by use of a simulation model.

    Science.gov (United States)

    McCarthy, Robert J; Levine, Stephen H; Reed, J Michael

    2013-08-15

Objective: To predict effectiveness of 3 interventional methods of population control for feral cat colonies. Design: Population model. Sample: Estimates of vital data for feral cats. Procedures: Data were gathered from the literature regarding the demography and mating behavior of feral cats. An individual-based stochastic simulation model was developed to evaluate the effectiveness of trap-neuter-release (TNR), lethal control, and trap-vasectomy-hysterectomy-release (TVHR) in decreasing the size of feral cat populations. Results: TVHR outperformed both TNR and lethal control at all annual capture probabilities between 10% and 90%. Unless > 57% of cats were captured and neutered annually by TNR or removed by lethal control, there was minimal effect on population size. In contrast, with an annual capture rate of ≥ 35%, TVHR caused population size to decrease. An annual capture rate of 57% eliminated the modeled population in 4,000 days by use of TVHR, whereas > 82% was required for both TNR and lethal control. When the effect of fraction of adult cats neutered on kitten and young juvenile survival rate was included in the analysis, TNR performed progressively worse and could be counterproductive, such that population size increased, compared with no intervention at all. Conclusions and Clinical Relevance: TVHR should be preferred over TNR for management of feral cats if decrease in population size is the goal. This model allowed for many factors related to the trapping program and cats to be varied and should be useful for determining the financial and person-effort commitments required to have a desired effect on a given feral cat population.
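
    To make the comparison concrete, here is a deliberately simplified, deterministic caricature of the three strategies (the study's model is stochastic and individual-based). All vital rates, the mating-suppression mechanism for TVHR, and the capture probability are illustrative assumptions.

```python
def simulate(strategy, p_capture, years=15, n0=200, litter=4.0, surv=0.5):
    """Annual-step caricature of a feral cat colony under one strategy."""
    intact, sterile = float(n0), 0.0
    for _ in range(years):
        captured = p_capture * intact
        intact -= captured
        if strategy in ("TNR", "TVHR"):
            sterile += captured              # lethal control removes them instead
        if strategy == "TVHR":
            # sterilized-but-hormonally-intact cats keep competing for mates,
            # so only a fraction of matings can produce kittens
            fertile = intact / max(intact + sterile, 1e-9)
        else:
            fertile = 1.0                    # neutered cats drop out of mating
        births = 0.5 * intact * litter * surv * fertile
        intact = surv * intact + births
        sterile = surv * sterile
    return intact + sterile

for s in ("lethal", "TNR", "TVHR"):
    print(f"{s:7s}: {simulate(s, p_capture=0.35):6.1f} cats after 15 years")
```

    Even this crude version reproduces the qualitative pattern reported above: at a 35% annual capture rate the TVHR colony collapses while the TNR and lethal-control colonies barely shrink.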

  2. Estimating Price Effects in an Almost Ideal Demand Model of Outbound Thai Tourism to East Asia

    NARCIS (Netherlands)

    C-L. Chang (Chia-Lin); T. Khamkaew (Tanchanok); M.J. McAleer (Michael)

    2010-01-01

This paper analyzes the responsiveness of Thai outbound tourism to East Asian destinations, namely China, Hong Kong, Japan, Taiwan and Korea, to changes in the effective relative price of tourism, real total tourism expenditure, and one-off events. The nonlinear and linear Almost Ideal Demand System (AIDS) models ...

  3. Effects of model choice and forest structure on inventory-based estimations of Puerto Rican forest biomass

    Science.gov (United States)

    Thomas J. Brandeis; Maria Del Rocio; Suarez Rozo

    2005-01-01

    Total aboveground live tree biomass in Puerto Rican lower montane wet, subtropical wet, subtropical moist and subtropical dry forests was estimated using data from two forest inventories and published regression equations. Multiple potentially-applicable published biomass models existed for some forested life zones, and their estimates tended to diverge with increasing...

  4. THE EFFECT OF WINDOW FUNCTIONS ON THE ARMA MODEL PARAMETERS ESTIMATED BY PORLA METHOD

    Directory of Open Access Journals (Sweden)

    Şaban ÖZER

    2002-02-01

Full Text Available The effect of window functions on the ARMA model parameters estimated by the PORLA method is presented. The PORLA method has an algorithmic structure in which the tracking and modeling problems are treated as independent sub-algorithms. In this method, nonstationary data tracking is first carried out by time-recursive computation of the input/output data covariance block matrix. Second, the modeling problem is solved by the two-channel PORLA method, in which the ARMA modeling problem is embedded. Error propagation over time cannot occur in the PORLA method. Optional windowing techniques can easily be incorporated to control the tracking capability and fast start-up. To demonstrate the effect of window functions on the ARMA model parameters estimated by the PORLA method, simulation results obtained using different window functions, such as rectangular, triangular, Bartlett, Hanning, Hamming, exponential, modified Barnwell, Blackman, and Kaiser windows, are given.

  5. Model for Estimating Power and Downtime Effects on Teletherapy Units in Low-Resource Settings

    Directory of Open Access Journals (Sweden)

    Rachel McCarroll

    2017-10-01

Full Text Available Purpose: More than 6,500 megavoltage teletherapy units are needed worldwide, many in low-resource settings. Cobalt-60 units or linear accelerators (linacs) can fill this need. We have evaluated machine performance on the basis of patient throughput to provide insight into machine viability under various conditions, in such a way that conclusions can be generalized to a vast array of clinical scenarios. Materials and Methods: Data from patient treatment plans, peer-reviewed studies, and international organizations were combined to assess the relative patient throughput of linacs and cobalt-60 units that deliver radiotherapy with standard techniques under various power and maintenance support conditions. Data concerning the frequency and duration of power outages and downtime characteristics of the machines were used to model teletherapy operation in low-resource settings. Results: Modeled average daily throughput was decreased for linacs because of lack of power infrastructure and for cobalt-60 units because of limited and decaying source strength. For conformal radiotherapy delivered with multileaf collimators, average daily patient throughput over 8 years of operation was equal for cobalt-60 units and linacs when an average of 1.83 hours of power outage occurred per 10-hour working day. Relative to conformal treatments delivered with multileaf collimators on the respective machines, the use of advanced techniques on linacs decreased throughput between 20% and 32%, and, for cobalt machines, the need to manually place blocks reduced throughput by up to 37%. Conclusion: Our patient throughput data indicate that cobalt-60 units are generally best suited for implementation when machine operation might be 70% or less of total operable time because of power outages or mechanical repair. However, each implementation scenario is unique and requires consideration of all variables affecting implementation.

  6. INTEGRATED SPEED ESTIMATION MODEL FOR MULTILANE EXPRESSWAYS

    Science.gov (United States)

    Hong, Sungjoon; Oguchi, Takashi

In this paper, an integrated speed-estimation model is developed based on empirical analyses for the basic sections of intercity multilane expressways under the uncongested condition. This model enables a speed estimation for each lane at any site under arbitrary highway-alignment, traffic (traffic flow and truck percentage), and rainfall conditions. By combining this model and a lane-use model which estimates traffic distribution on the lanes by each vehicle type, it is also possible to estimate an average speed across all the lanes of one direction from a traffic demand by vehicle type under specific highway-alignment and rainfall conditions. This model is expected to be a tool for the evaluation of traffic performance for expressways when the performance measure is travel speed, which is necessary for Performance-Oriented Highway Planning and Design. Regarding the highway-alignment condition, two new estimators, called effective horizontal curvature and effective vertical grade, are proposed in this paper which take into account the influence of upstream and downstream alignment conditions. They are applied to the speed-estimation model, and it shows increased accuracy of the estimation.

  7. A statistical model for estimating maternal-zygotic interactions and parent-of-origin effects of QTLs for seed development.

    Directory of Open Access Journals (Sweden)

    Yanchun Li

    Full Text Available Proper development of a seed requires coordinated exchanges of signals among the three components that develop side by side in the seed. One of these is the maternal integument that encloses the other two zygotic components, i.e., the diploid embryo and its nurturing annex, the triploid endosperm. Although the formation of the embryo and endosperm contains the contributions of both maternal and paternal parents, maternally and paternally derived alleles may be expressed differently, leading to a so-called parent-of-origin or imprinting effect. Currently, the nature of how genes from the maternal and zygotic genomes interact to affect seed development remains largely unknown. Here, we present a novel statistical model for estimating the main and interaction effects of quantitative trait loci (QTLs that are derived from different genomes and further testing the imprinting effects of these QTLs on seed development. The experimental design used is based on reciprocal backcrosses toward both parents, so that the inheritance of parent-specific alleles could be traced. The computing model and algorithm were implemented with the maximum likelihood approach. The new strategy presented was applied to study the mode of inheritance for QTLs that control endoreduplication traits in maize endosperm. Monte Carlo simulation studies were performed to investigate the statistical properties of the new model with the data simulated under different imprinting degrees. The false positive rate of imprinting QTL discovery by the model was examined by analyzing the simulated data that contain no imprinting QTL. The reciprocal design and a series of analytical and testing strategies proposed provide a standard procedure for genomic mapping of QTLs involved in the genetic control of complex seed development traits in flowering plants.

  8. A smoothed maximum score estimator for the binary choice panel data model with individual fixed effects and applications to labour force participation

    NARCIS (Netherlands)

    Charlier, G.W.P.

    1994-01-01

In a binary choice panel data model with individual effects and two time periods, Manski proposed the maximum score estimator, based on a discontinuous objective function, and proved its consistency under weak distributional assumptions. However, the rate of convergence of this estimator is low (N^{1/3}).

  9. The effect of water storage change in ET estimation in humid catchments based on water balance models and Budyko framework

    Science.gov (United States)

    Wang, Tingting; Sun, Fubao; Liu, Changming; Liu, Wenbin; Wang, Hong

    2017-04-01

An accurate estimation of ET in humid catchments is essential in water-energy budget research and water resource management, yet it remains a huge challenge, and there is so far no well-accepted explanation for the difficulty of annual ET estimation in humid catchments. Here we present the ET estimation in 102 humid catchments over China based on the Budyko framework and two hydrological models, the abcd model and the Xin'anjiang model, in comparison with ET calculated from the water balance equation (ETwb) on the grounds that ΔS is approximately zero at multiannual and annual time scales. We also provide a possible explanation for the poor annual ET estimation in humid catchments. The results show that at the multi-annual timescale, the Budyko framework works fine for ET estimation in humid catchments, while at the annual time scale, neither the Budyko framework nor the hydrological models can estimate ET well. The major cause of this poorly estimated annual ET in humid catchments is the neglect of ΔS in ETwb, since this enlarges the variability of the real actual evapotranspiration. Much improvement is achieved when estimated ET + ΔS is compared with ETwb, and the bigger the catchment area, the larger the improvement. This provides a reasonable explanation for the poorly estimated annual ET in humid catchments and reveals the important role of ΔS in ET estimation and validation. We highlight that the annual ΔS should not be taken as zero in the water balance equation in humid catchments.
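
    The two estimates being contrasted are easy to sketch: a Budyko-curve ET computed from P and PET, and a water-balance ET that either neglects or retains the storage-change term ΔS. The catchment numbers below are invented for illustration.

```python
import numpy as np

def budyko_et(P, PET):
    """Budyko (1974) curve: mean-annual ET from precipitation and potential ET."""
    phi = PET / P                                 # aridity index
    return P * np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

P, PET, Q, dS = 1600.0, 900.0, 820.0, -60.0       # mm/yr, illustrative humid catchment

print(f"Budyko ET:                {budyko_et(P, PET):6.0f} mm")
print(f"water-balance ET (dS=0):  {P - Q:6.0f} mm")
print(f"water-balance ET with dS: {P - Q - dS:6.0f} mm")
```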

  10. Estimating haplotype effects for survival data

    DEFF Research Database (Denmark)

    Scheike, Thomas; Martinussen, Torben; Silver, J

    2010-01-01

Genetic association studies often investigate the effect of haplotypes on an outcome of interest. Haplotypes are not observed directly, and this complicates the inclusion of such effects in survival models. We describe a new estimating equations approach for Cox's regression model to assess haplotype effects for survival data.

  11. Estimation of Causal Mediation Effects for a Dichotomous Outcome in Multiple-Mediator Models using the Mediation Formula

    OpenAIRE

    Wang, Wei; Nelson, Suchitra; Albert, Jeffrey M.

    2013-01-01

Mediators are intermediate variables in the causal pathway between an exposure and an outcome. Mediation analysis investigates the extent to which exposure effects occur through these variables, thus revealing causal mechanisms. In this paper, we consider the estimation of the mediation effect when the outcome is binary and multiple mediators of different types exist. We give a precise definition of the total mediation effect as well as decomposed mediation effects through individual or sets of mediators.

  12. Model for traffic emissions estimation

    Science.gov (United States)

    Alexopoulos, A.; Assimacopoulos, D.; Mitsoulis, E.

    A model is developed for the spatial and temporal evaluation of traffic emissions in metropolitan areas based on sparse measurements. All traffic data available are fully employed and the pollutant emissions are determined with the highest precision possible. The main roads are regarded as line sources of constant traffic parameters in the time interval considered. The method is flexible and allows for the estimation of distributed small traffic sources (non-line/area sources). The emissions from the latter are assumed to be proportional to the local population density as well as to the traffic density leading to local main arteries. The contribution of moving vehicles to air pollution in the Greater Athens Area for the period 1986-1988 is analyzed using the proposed model. Emissions and other related parameters are evaluated. Emissions from area sources were found to have a noticeable share of the overall air pollution.
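
    A toy sketch of the source decomposition described above: main roads as line sources with constant traffic parameters, plus distributed area sources assumed proportional to population density and local feeder-traffic density. The grid values and emission factors are illustrative assumptions.

```python
import numpy as np

EF = 2.5e-3                 # emission factor, kg pollutant per vehicle-km (assumed)

# Line sources: (road length within cell [km], traffic flow [vehicles/h])
roads = [(1.2, 900.0), (0.8, 1500.0)]
line_emissions = sum(EF * length * flow for length, flow in roads)  # kg/h

# Area sources on a small grid: proportional to population and feeder traffic.
pop = np.array([[120.0, 80.0], [200.0, 40.0]])    # inhabitants per cell
feeder = np.array([[0.3, 0.2], [0.5, 0.1]])       # relative feeder-traffic density
k = 1.0e-4                                        # kg/h per (inhabitant x density unit)
area_emissions = k * pop * feeder                 # kg/h per cell

print(f"line-source total: {line_emissions:.2f} kg/h")
print("area-source grid (kg/h):\n", area_emissions.round(3))
```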

  13. A study of health effect estimates using competing methods to model personal exposures to ambient PM2.5.

    Science.gov (United States)

    Strand, Matthew; Hopke, Philip K; Zhao, Weixiang; Vedal, Sverre; Gelfand, Erwin; Rabinovitch, Nathan

    2007-09-01

    Various methods have been developed recently to estimate personal exposures to ambient particulate matter less than 2.5 microm in diameter (PM2.5) using fixed outdoor monitors as well as personal exposure monitors. One class of estimators involves extrapolating values using ambient-source components of PM2.5, such as sulfate and iron. A key step in extrapolating these values is to correct for differences in infiltration characteristics of the component used in extrapolation (such as sulfate within PM2.5) and PM2.5. When this is not done, resulting health effect estimates will be biased. Another class of approaches involves factor analysis methods such as positive matrix factorization (PMF). Using either an extrapolation or a factor analysis method in conjunction with regression calibration allows one to estimate the direct effects of ambient PM2.5 on health, eliminating bias caused by using fixed outdoor monitors and estimated personal ambient PM2.5 concentrations. Several forms of the extrapolation method are defined, including some new ones. Health effect estimates that result from the use of these methods are compared with those from an expanded PMF analysis using data collected from a health study of asthmatic children conducted in Denver, Colorado. Examining differences in health effect estimates among the various methods using a measure of lung function (forced expiratory volume in 1 s) as the health indicator demonstrated the importance of the correction factor(s) in the extrapolation methods and that PMF yielded results comparable with the extrapolation methods that incorporated correction factors.
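
    The sulfate-tracer extrapolation with an infiltration correction reduces to a few lines of arithmetic; the sketch below shows the naive and corrected estimates side by side. All concentrations and infiltration factors are assumed for illustration.

```python
personal_so4 = 2.1      # ug/m3, personal sulfate exposure (assumed)
ambient_so4 = 3.0       # ug/m3, outdoor sulfate at the fixed monitor (assumed)
ambient_pm25 = 18.0     # ug/m3, outdoor PM2.5 at the fixed monitor (assumed)

# Infiltration factors: fraction of ambient particles reaching the person.
F_pm25, F_so4 = 0.5, 0.7                          # assumed values

naive = ambient_pm25 * personal_so4 / ambient_so4  # no infiltration correction
corrected = naive * (F_pm25 / F_so4)               # corrected for component differences

print(f"naive personal ambient PM2.5:     {naive:.1f} ug/m3")
print(f"corrected personal ambient PM2.5: {corrected:.1f} ug/m3")
```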

  14. Instrumental variables estimation of exposure effects on a time-to-event endpoint using structural cumulative survival models

    DEFF Research Database (Denmark)

    Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J.

    2017-01-01

    The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elem...

  15. Experimental Effects and Individual Differences in Linear Mixed Models: Estimating the Relationship between Spatial, Object, and Attraction Effects in Visual Attention

    Science.gov (United States)

    Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin

    2011-01-01

    Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292
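
    A minimal sketch of fitting such an LMM on simulated data, using statsmodels' MixedLM with a by-subject random intercept and a random slope for one cue-validity contrast; subject counts, effect sizes, and variable names are illustrative assumptions rather than the study's design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
subjects, trials = 30, 40
df = pd.DataFrame({
    "subject": np.repeat(np.arange(subjects), trials),
    "spatial": np.tile(np.repeat([0, 1], trials // 2), subjects),
})
u0 = rng.normal(0, 50, subjects)          # by-subject random intercepts (mean RT)
u1 = rng.normal(0, 20, subjects)          # by-subject random slopes (spatial effect)
df["rt"] = (400 + u0[df["subject"]] + (30 + u1[df["subject"]]) * df["spatial"]
            + rng.normal(0, 60, len(df)))

# Fixed effect of the spatial contrast plus correlated random intercept/slope:
m = smf.mixedlm("rt ~ spatial", df, groups=df["subject"], re_formula="~spatial").fit()
print(m.summary())
```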

  16. Fuzzy model to estimate the number of hospitalizations for asthma and pneumonia under the effects of air pollution.

    Science.gov (United States)

    Chaves, Luciano Eustáquio; Nascimento, Luiz Fernando Costa; Rizol, Paloma Maria Silva Rocha

    2017-06-22

Predict the number of hospitalizations for asthma and pneumonia associated with exposure to air pollutants in the city of São José dos Campos, São Paulo State. This is a computational model using fuzzy logic based on Mamdani's inference method. For the fuzzification of the input variables particulate matter, ozone, sulfur dioxide, and apparent temperature, we considered two membership functions for each variable with the linguistic labels good and bad. For the output variable, the number of hospitalizations for asthma and pneumonia, we considered five membership functions: very low, low, medium, high and very high. DATASUS was our source for the number of hospitalizations in the year 2007, and the output provided by the model was correlated with the actual hospitalization data with lags from zero to two days. The accuracy of the model was estimated by the ROC curve for each pollutant and at those lags. In 2007, 1,710 hospitalizations for pneumonia and asthma were recorded in São José dos Campos, State of São Paulo, with a daily average of 4.9 hospitalizations (SD = 2.9). The model output showed a positive and significant correlation (r = 0.38) with the actual data; the accuracies evaluated for the model were higher for sulfur dioxide at lags 0 and 2 and for particulate matter at lag 1. Fuzzy modeling proved accurate for assessing the effects of pollutant exposure on hospitalizations for pneumonia and asthma.
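
    A bare-bones Mamdani inference cycle (fuzzification, rule firing by min, aggregation by max, centroid defuzzification) can be written directly in NumPy. The membership shapes, the single input, and the two rules below are illustrative assumptions, far simpler than the study's four-input model.

```python
import numpy as np

# Fuzzification: linear membership ramps for one input (particulate level).
pm = 45.0                                            # observed level, ug/m3 (assumed)
mu_good = np.interp(pm, [0.0, 50.0], [1.0, 0.0])     # degree "good" air quality
mu_bad = np.interp(pm, [30.0, 80.0], [0.0, 1.0])     # degree "bad" air quality

# Output universe: hospitalizations per day, with "low"/"high" ramps.
y = np.linspace(0.0, 15.0, 301)
mu_low = np.interp(y, [0.0, 6.0], [1.0, 0.0])
mu_high = np.interp(y, [4.0, 15.0], [0.0, 1.0])

# Rules (Mamdani: min implication, max aggregation):
#   IF air is good THEN admissions are low; IF air is bad THEN admissions are high.
agg = np.maximum(np.minimum(mu_good, mu_low), np.minimum(mu_bad, mu_high))

# Centroid defuzzification gives the crisp estimate.
estimate = np.trapz(agg * y, y) / np.trapz(agg, y)
print(f"estimated admissions: {estimate:.1f} per day")
```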

  17. Estimating haplotype effects for survival data.

    Science.gov (United States)

    Scheike, Thomas H; Martinussen, Torben; Silver, Jeremy D

    2010-09-01

    Genetic association studies often investigate the effect of haplotypes on an outcome of interest. Haplotypes are not observed directly, and this complicates the inclusion of such effects in survival models. We describe a new estimating equations approach for Cox's regression model to assess haplotype effects for survival data. These estimating equations are simple to implement and avoid the use of the EM algorithm, which may be slow in the context of the semiparametric Cox model with incomplete covariate information. These estimating equations also lead to easily computable, direct estimators of standard errors, and thus overcome some of the difficulty in obtaining variance estimators based on the EM algorithm in this setting. We also develop an easily implemented goodness-of-fit procedure for Cox's regression model including haplotype effects. Finally, we apply the procedures presented in this article to investigate possible haplotype effects of the PAF-receptor on cardiovascular events in patients with coronary artery disease, and compare our results to those based on the EM algorithm. © 2009, The International Biometric Society.

  18. Mixed-effects models for estimating stand volume by means of small footprint airborne laser scanner data.

    Science.gov (United States)

    J. Breidenbach; E. Kublin; R. McGaughey; H.-E. Andersen; S. Reutebuch

    2008-01-01

For this study, hierarchical data sets--in that several sample plots are located within a stand--were analyzed for study sites in the USA and Germany. The German data had an additional hierarchy as the stands are located within four distinct public forests. Fixed-effects models and mixed-effects models with a random intercept on the stand level were fit to each data set.

  19. Pedigree-based estimation of covariance between dominance deviations and additive genetic effects in closed rabbit lines considering inbreeding and using a computationally simpler equivalent model.

    Science.gov (United States)

    Fernández, E N; Legarra, A; Martínez, R; Sánchez, J P; Baselga, M

    2017-06-01

    Inbreeding generates covariances between additive and dominance effects (breeding values and dominance deviations). In this work, we developed and applied models for estimation of dominance and additive genetic variances and their covariance, a model that we call "full dominance," from pedigree and phenotypic data. Estimates with this model such as presented here are very scarce both in livestock and in wild genetics. First, we estimated pedigree-based condensed probabilities of identity using recursion. Second, we developed an equivalent linear model in which variance components can be estimated using closed-form algorithms such as REML or Gibbs sampling and existing software. Third, we present a new method to refer the estimated variance components to meaningful parameters in a particular population, i.e., final partially inbred generations as opposed to outbred base populations. We applied these developments to three closed rabbit lines (A, V and H) selected for number of weaned at the Polytechnic University of Valencia. Pedigree and phenotypes are complete and span 43, 39 and 14 generations, respectively. Estimates of broad-sense heritability are 0.07, 0.07 and 0.05 at the base versus 0.07, 0.07 and 0.09 in the final generations. Narrow-sense heritability estimates are 0.06, 0.06 and 0.02 at the base versus 0.04, 0.04 and 0.01 at the final generations. There is also a reduction in the genotypic variance due to the negative additive-dominance correlation. Thus, the contribution of dominance variation is fairly large and increases with inbreeding and (over)compensates for the loss in additive variation. In addition, estimates of the additive-dominance correlation are -0.37, -0.31 and 0.00, in agreement with the few published estimates and theoretical considerations. © 2017 Blackwell Verlag GmbH.

  20. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from the two estimation methods is studied; then, estimators adjusted to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF-based estimator to real data.

  1. Effects of Initial Values and Convergence Criterion in the Two-Parameter Logistic Model When Estimating the Latent Distribution in BILOG-MG 3.

    Directory of Open Access Journals (Sweden)

    Ingo W Nader

    Full Text Available Parameters of the two-parameter logistic model are generally estimated via the expectation-maximization algorithm, which improves initial values for all parameters iteratively until convergence is reached. Effects of initial values are rarely discussed in item response theory (IRT, but initial values were recently found to affect item parameters when estimating the latent distribution with full non-parametric maximum likelihood. However, this method is rarely used in practice. Hence, the present study investigated effects of initial values on item parameter bias and on recovery of item characteristic curves in BILOG-MG 3, a widely used IRT software package. Results showed notable effects of initial values on item parameters. For tighter convergence criteria, effects of initial values decreased, but item parameter bias increased, and the recovery of the latent distribution worsened. For practical application, it is advised to use the BILOG default convergence criterion with appropriate initial values when estimating the latent distribution from data.
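
    For reference, the 2PL item characteristic curve whose parameters are being estimated is P(correct | theta) = 1 / (1 + exp(-a(theta - b))); the snippet below simply evaluates it for made-up item parameters.

```python
import numpy as np

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: P(correct | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)
print(np.round(icc_2pl(theta, a=1.2, b=0.5), 3))   # made-up item parameters
```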

  2. Estimating direct and indirect rebound effects by supply-driven input-output model: A case study of Taiwan's industry

    International Nuclear Information System (INIS)

    Wu, Kuei-Yen; Wu, Jung-Hua; Huang, Yun-Hsun; Fu, Szu-Chi; Chen, Chia-Yon

    2016-01-01

Most existing literature focuses on the direct rebound effect on the demand side for consumers. This study analyses direct and indirect rebound effects in Taiwan's industry from the perspective of producers. Most studies taking the producers' viewpoint, however, overlook inter-industry linkages. This study applies a supply-driven input-output model to quantify the magnitude of rebound effects by explicitly considering inter-industry linkages. Empirical results showed that total rebound effects for most of Taiwan's sectors were less than 10% in 2011. A comparison among the sectors shows that sectors with lower energy efficiency had higher direct rebound effects, while sectors with higher forward linkages generated higher indirect rebound effects. Taking the Mining sector (S3) as an example, which is an upstream supplier with high forward linkages, it showed high indirect rebound effects that derive from the accumulation of additional energy consumption by its downstream producers. The findings also showed that in almost all sectors, indirect rebound effects were higher than direct rebound effects. In other words, if indirect rebound effects are neglected, the total rebound effects will be underestimated, and the energy-saving potential may therefore be overestimated. - Highlights: • This study quantifies rebound effects with a supply-driven input-output model. • For most of Taiwan's sectors, total rebound magnitudes were less than 10% in 2011. • Direct rebound effects were inversely correlated with energy efficiency. • Indirect rebound effects were positively correlated with industrial forward linkages. • Indirect rebound effects were generally higher than direct rebound effects.
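
    The supply-driven (Ghosh) calculation behind such indirect-rebound estimates can be sketched compactly: allocation coefficients b_ij = z_ij / x_i are formed from an inter-industry flow table, and a primary-input shock propagates forward through the Ghosh inverse. The three-sector table below is an illustrative assumption.

```python
import numpy as np

Z = np.array([[20.0, 30.0, 10.0],    # inter-industry flows z_ij (assumed)
              [15.0, 10.0, 25.0],
              [10.0, 20.0,  5.0]])
x = np.array([100.0, 120.0, 80.0])   # sectoral gross outputs (assumed)

B = Z / x[:, None]                   # allocation coefficients b_ij = z_ij / x_i
G = np.linalg.inv(np.eye(3) - B)     # Ghosh inverse (I - B)^(-1)

dv = np.array([5.0, 0.0, 0.0])       # primary-input shock entering sector 1
dx = dv @ G                          # forward-propagated output changes
print(np.round(dx, 2))
```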

  3. A Bivariate Mixture Model for Natural Antibody Levels to Human Papillomavirus Types 16 and 18: Baseline Estimates for Monitoring the Herd Effects of Immunization.

    Directory of Open Access Journals (Sweden)

    Margaretha A Vink

    Full Text Available Post-vaccine monitoring programs for human papillomavirus (HPV have been introduced in many countries, but HPV serology is still an underutilized tool, partly owing to the weak antibody response to HPV infection. Changes in antibody levels among non-vaccinated individuals could be employed to monitor herd effects of immunization against HPV vaccine types 16 and 18, but inference requires an appropriate statistical model. The authors developed a four-component bivariate mixture model for jointly estimating vaccine-type seroprevalence from correlated antibody responses against HPV16 and -18 infections. This model takes account of the correlation between HPV16 and -18 antibody concentrations within subjects, caused e.g. by heterogeneity in exposure level and immune response. The model was fitted to HPV16 and -18 antibody concentrations as measured by a multiplex immunoassay in a large serological survey (3,875 females carried out in the Netherlands in 2006/2007, before the introduction of mass immunization. Parameters were estimated by Bayesian analysis. We used the deviance information criterion for model selection; performance of the preferred model was assessed through simulation. Our analysis uncovered elevated antibody concentrations in doubly as compared to singly seropositive individuals, and a strong clustering of HPV16 and -18 seropositivity, particularly around the age of sexual debut. The bivariate model resulted in a more reliable classification of singly and doubly seropositive individuals than achieved by a combination of two univariate models, and suggested a higher pre-vaccine HPV16 seroprevalence than previously estimated. The bivariate mixture model provides valuable baseline estimates of vaccine-type seroprevalence and may prove useful in seroepidemiologic assessment of the herd effects of HPV vaccination.

  4. The Effect of Off-Farm Employment on Forestland Transfers in China: A Simultaneous-Equation Tobit Model Estimation

    Directory of Open Access Journals (Sweden)

    Han Zhang

    2017-09-01

Full Text Available China's new round of collective forest tenure reform has devolved collective forests to individuals on an egalitarian basis. To balance the equity–efficiency dilemma, forestland transfers are highly advocated by policymakers. However, the forestland rental market is still inactive after the reform. To examine the role of off-farm employment in forestland transfers, a simultaneous Tobit system of equations was employed to account for the endogeneity, interdependency, and censoring issues. Accordingly, the Nelson–Olson two-stage procedure, embedded with a multivariate Tobit estimator, was applied to a nationally representative dataset. The estimation results showed that off-farm employment plays a significantly negative role in forestland rent-in, at the 5% significance level. However, off-farm activities had no significant effect on forestland rent-out. Considering China's specific situation, a reasonable explanation is that households hold forestland as a crucial means of social security against the risk of unemployment. In both rent-in and rent-out equations, high transaction costs are one of the main obstacles impeding forestland transfer. A remarkable finding was that forestland transactions occurred with a statistically significant factor equalization effect, which would be helpful to adjust the mismatched labor–land ratio and improve land-use efficiency.

  5. Nonparametric estimation in models for unobservable heterogeneity

    OpenAIRE

    Hohmann, Daniel

    2014-01-01

    Nonparametric models which allow for data with unobservable heterogeneity are studied. The first publication introduces new estimators and their asymptotic properties for conditional mixture models. The second publication considers estimation of a function from noisy observations of its Radon transform in a Gaussian white noise model.

  6. MCMC estimation of multidimensional IRT models

    NARCIS (Netherlands)

    Beguin, Anton; Glas, Cornelis A.W.

    1998-01-01

A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model.

  7. Potential Geophysical Field Transformations and Combined 3D Modelling for Estimation the Seismic Site Effects on Example of Israel

    Science.gov (United States)

    Eppelbaum, Lev; Meirova, Tatiana

    2015-04-01


  8. Estimating Canopy Dark Respiration for Crop Models

    Science.gov (United States)

    Monje Mejia, Oscar Alberto

    2014-01-01

Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.

  9. Multivariate dynamic linear models for estimating the effect of experimental interventions in an evolutionary operations setup in dairy herds

    DEFF Research Database (Denmark)

    Stygar, Anna Helena; Krogh, Mogens Agerbo; Kristensen, Troels

    2017-01-01

Evolutionary operations is a method to exploit the association of often small changes in process variables, planned during systematic experimentation and occurring during the normal production flow, to production characteristics, in order to find a way to alter the production process to be more efficient. The objective of this study was to construct a tool to assess the intervention effect on milk production in an evolutionary operations setup. The method used for this purpose was a dynamic linear model (DLM) with Kalman filtering. The DLM consisted of parameters describing milk yield in a herd and in individual cows … bulk tank records. The presented model proved to be a flexible and dynamic tool, and it was successfully applied for systematic experimentation in dairy herds. The model can serve as a decision support tool for on-farm process optimization exploiting planned changes in process variables.
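
    In its simplest univariate (local level) form, the DLM machinery reduces to the Kalman predict/update recursion sketched below; the variances and the simulated bulk-tank series are illustrative assumptions, and an intervention effect could enter as an additional state component.

```python
import numpy as np

rng = np.random.default_rng(7)
level = 30.0 + np.cumsum(rng.normal(0.0, 0.1, 100))   # latent herd-level yield
y = level + rng.normal(0.0, 1.0, 100)                 # observed daily yield, kg/cow

m, C = 30.0, 4.0           # prior mean and variance of the level (assumed)
W, V = 0.01, 1.0           # system and observation variances (assumed)
filtered = []
for obs in y:
    a, R = m, C + W                         # predict step
    K = R / (R + V)                         # Kalman gain
    m, C = a + K * (obs - a), (1 - K) * R   # update step
    filtered.append(m)

print(f"last filtered level: {filtered[-1]:.2f} kg/cow")
```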

  10. Parameter Estimation of Spacecraft Fuel Slosh Model

    Science.gov (United States)

    Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam Charles

    2004-01-01

Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long-standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble), which can cause devastating control issues. The rate at which nutation develops (defined by the Nutation Time Constant, NTC) can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Pure analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in the experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refinement and understanding of the effects of these parameters allow for a more accurate simulation of fuel slosh. The current research will focus on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of the Nutation Time Constant obtained through simulation.

  11. Improved diagnostic model for estimating wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Endlich, R.M.; Lee, J.D.

    1983-03-01

    Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.

  12. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

Deformable templates have been intensively studied in image analysis through the last decade, but despite their significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian formulation.

  13. Modeling and estimating system availability

    International Nuclear Information System (INIS)

    Gaver, D.P.; Chu, B.B.

    1976-11-01

    Mathematical models to infer the availability of various types of more or less complicated systems are described. The analyses presented are probabilistic in nature and consist of three parts: a presentation of various analytic models for availability; a means of deriving approximate probability limits on system availability; and a means of statistical inference of system availability from sparse data, using a jackknife procedure. Various low-order redundant systems are used as examples, but extension to more complex systems is not difficult

  14. Application of a simplified mathematical model to estimate the effect of forced aeration on composting in a closed system.

    Science.gov (United States)

    Bari, Quazi H; Koenig, Albert

    2012-11-01

    The aeration rate is a key process control parameter in the forced aeration composting process because it greatly affects different physico-chemical parameters such as temperature and moisture content, and indirectly influences the biological degradation rate. In this study, the effect of a constant airflow rate on vertical temperature distribution and organic waste degradation in the composting mass is analyzed using a previously developed mathematical model of the composting process. The model was applied to analyze the effect of two different ambient conditions, namely hot and cold, and four different airflow rates (1.5, 3.0, 4.5, and 6.0 m³ m⁻² h⁻¹) on the temperature distribution and organic waste degradation in a given waste mixture. The typical waste mixture had 59% moisture content and 96% volatile solids; however, the proportions could be varied as required. The results suggested that the model could be efficiently used to analyze composting under variable ambient and operating conditions. A lower airflow rate of around 1.5-3.0 m³ m⁻² h⁻¹ was found to be suitable for cold ambient conditions, while a higher airflow rate of around 4.5-6.0 m³ m⁻² h⁻¹ was preferable for hot ambient conditions. The application of this model is flexible, allowing changes to any input parameter within a realistic range. It can be widely used for conceptual process design, studies on the effect of ambient conditions, optimization studies in existing composting plants, and process control. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.

  16. A continuous-time adaptive particle filter for estimations under measurement time uncertainties with an application to a plasma-leucine mixed effects model.

    Science.gov (United States)

    Krengel, Annette; Hauth, Jan; Taskinen, Marja-Riitta; Adiels, Martin; Jirstrand, Mats

    2013-01-19

    When mathematical modelling is applied to many different application areas, a common task is the estimation of states and parameters based on measurements. With this kind of inference making, uncertainties in the time when the measurements have been taken are often neglected, but especially in applications from the life sciences, such errors can considerably influence the estimation results. As an example in the context of personalized medicine, the model-based assessment of the effectiveness of drugs is coming to play an important role. Systems biology may help here by providing good pharmacokinetic and pharmacodynamic (PK/PD) models. Inference on these systems based on data gained from clinical studies with several patient groups becomes a major challenge. Particle filters are a promising approach to tackle these difficulties but are by themselves not ready to handle uncertainties in measurement times. In this article, we describe a variant of the standard particle filter (PF) algorithm which allows state and parameter estimation with the inclusion of measurement time uncertainties (MTU). The modified particle filter, which we call MTU-PF, also allows the application of an adaptive stepsize choice in the time-continuous case to avoid degeneracy problems. The modification is based on the model assumption of uncertain measurement times. While the assumption of randomness in the measurements themselves is common, the corresponding measurement times are generally taken as deterministic and exactly known. Especially in cases where the data are gained from measurements on blood or tissue samples, a relatively high uncertainty in the true measurement time seems to be a natural assumption. Our method is appropriate in cases where relatively few data are used from a relatively large number of groups or individuals, which introduce mixed effects in the model. This is a typical setting of clinical studies. We demonstrate the method on a small artificial example.
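
    For orientation, the sketch below implements a plain bootstrap particle filter on a toy scalar state-space model (assumed here, not the paper's PK/PD system); the MTU-PF adds uncertain measurement times and an adaptive step-size choice on top of this propagate-weight-resample scheme.

```python
# Minimal bootstrap particle filter on a toy AR(1)-plus-noise model.
import numpy as np

rng = np.random.default_rng(1)
T, N = 50, 1000
# Toy model (assumed): x_t = 0.9 x_{t-1} + w,  y_t = x_t + v
true_x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    true_x[t] = 0.9 * true_x[t - 1] + rng.normal(0, 0.5)
    y[t] = true_x[t] + rng.normal(0, 0.3)

particles = rng.normal(0, 1, N)
estimates = []
for t in range(1, T):
    particles = 0.9 * particles + rng.normal(0, 0.5, N)   # propagate
    w = np.exp(-0.5 * ((y[t] - particles) / 0.3) ** 2)    # weight by likelihood
    w /= w.sum()
    particles = rng.choice(particles, size=N, p=w)        # resample
    estimates.append(particles.mean())
```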

  17. The use of a diffuse interface model to estimate effective transport properties for two-phase flows in porous media

    International Nuclear Information System (INIS)

    Fichot, Florian; Duval, Fabien; Garcia, Aurelien; Belloni, Julien; Quintard, Michel

    2005-01-01

    Full text of publication follows: In the framework of its research programme on severe nuclear reactor accidents, IRSN investigates the water flooding of an overheated porous bed, where complex two-phase flows are likely to exist. The goal is to describe the flow with a general model, covering rods and debris beds regions in the vessel. A better understanding of the flow at the pore level appears to be necessary in order to justify and improve closure laws of macroscopic models. Although the Direct Numerical Simulation (DNS) of two-phase flows is possible with several methods, applications are now limited to small computational domains, typically of the order of a few centimeters. Therefore, numerical solutions at the reactor scale can only be obtained by using averaged models. Volume averaging is the most traditional way of deriving such models. For nuclear safety codes, a control volume must include a few rods or a few debris particles, with a characteristic dimension of a few centimeters. The difficulty usually met with averaged models is the closure of several transport or source terms which appear in the averaged conservation equations (for example the interfacial drag or the heat transfers between phases) [2]. In the past, the closure of these terms was obtained, when possible, from one-dimensional experiments that allowed measurements of heat flux or pressure drops. For more complex flows, the experimental measurement of local parameters is often impossible and the effective properties cannot be determined easily. An alternative way is to perform 'numerical experiments' with numerical simulations of the local flow. As mentioned above, the domain of application of DNS corresponds to the size of control volumes necessary to derive averaged models. Therefore DNS appears as a powerful tool to investigate the local features of a two-phase flow in complex geometries. Diffuse interface methods provide a way to model flows with interfacial phenomena through an

  18. Robust estimates of environmental effects on population vital rates: an integrated capture–recapture model of seasonal brook trout growth, survival and movement in a stream network

    Science.gov (United States)

    Letcher, Benjamin H.; Schueller, Paul; Bassar, Ronald D.; Nislow, Keith H.; Coombs, Jason A.; Sakrejda, Krzysztof; Morrissey, Michael; Sigourney, Douglas B.; Whiteley, Andrew R.; O'Donnell, Matthew J.; Dubreuil, Todd L.

    2015-01-01

    Modelling the effects of environmental change on populations is a key challenge for ecologists, particularly as the pace of change increases. Currently, modelling efforts are limited by difficulties in establishing robust relationships between environmental drivers and population responses. We developed an integrated capture–recapture state-space model to estimate the effects of two key environmental drivers (stream flow and temperature) on demographic rates (body growth, movement and survival) using a long-term (11 years), high-resolution (individually tagged, sampled seasonally) data set of brook trout (Salvelinus fontinalis) from four sites in a stream network. Our integrated model provides an effective context within which to estimate environmental driver effects because it takes full advantage of data by estimating (latent) state values for missing observations, because it propagates uncertainty among model components and because it accounts for the major demographic rates and interactions that contribute to annual survival. We found that stream flow and temperature had strong effects on brook trout demography. Some effects, such as reduction in survival associated with low stream flow and high temperature during the summer season, were consistent across sites and age classes, suggesting that they may serve as robust indicators of vulnerability to environmental change. Other survival effects varied across ages, sites and seasons, indicating that flow and temperature may not be the primary drivers of survival in those cases. Flow and temperature also affected body growth rates; these responses were consistent across sites but differed dramatically between age classes and seasons. Finally, we found that tributary and mainstem sites responded differently to variation in flow and temperature. Annual survival (combination of survival and body growth across seasons) was insensitive to body growth and was most sensitive to flow (positive) and temperature (negative).

  19. Estimating the effects of maternal education on child dental caries using marginal structural models: The Longitudinal Study of Indigenous Australian Children.

    Science.gov (United States)

    Ju, Xiangqun; Jamieson, Lisa M; Mejia, Gloria C

    2016-12-01

    To estimate the effect of mothers' education on Indigenous Australian children's dental caries experience while controlling for the mediating effect of children's sweet food intake. The Longitudinal Study of Indigenous Children is a study of two representative cohorts of Indigenous Australian children, aged from 6 months to 2 years (baby cohort) and from 3.5 to 5 years (child cohort) at baseline. The children's primary caregiver undertook a face-to-face interview in 2008, repeated annually for the next 4 years. Data included household demographics, child health (nutrition information and dental health), maternal conditions and highest qualification levels. Mother's educational level was classified into four categories: 0-9 years, 10 years, 11-12 years and >12 years. Children's mean sweet food intake was categorized as <30% or ≥30%. After multiple imputation of missing values, a marginal structural model with stabilized inverse probability weights was used to estimate the direct effect of mothers' education level on children's dental decay experience. From 2008 to 2012, complete data on 1720 mother-child dyads were available. Dental caries experience for children was 42.3% over the 5-year period. The controlled direct effect estimates of mother's education on child dental caries were 1.21 (95% CI: 1.01-1.45), 1.03 (95% CI: 0.91-1.18) and 1.07 (95% CI: 0.93-1.22); after multiple imputation of missing values, the effects were 1.21 (95% CI: 1.05-1.39), 1.06 (95% CI: 0.94-1.19) and 1.06 (95% CI: 0.95-1.19), comparing '0-9', '10' and '11-12' years to >12 years of education. Mothers' education level had a direct effect on children's dental decay experience that was not mediated by sweet food intake and other risk factors when estimated using a marginal structural model. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
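
    A minimal sketch of the weighting idea, on synthetic data with hypothetical variable names (not the LSIC data): stabilized inverse probability weights for the mediator are combined with a weighted outcome model to estimate a controlled direct effect. The exposure is simulated as unconfounded for brevity, which the real analysis does not assume.

```python
# Minimal marginal structural model sketch with stabilized mediator weights.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
edu = rng.integers(0, 2, n)                      # 1 = low maternal education
sweet = rng.binomial(1, 0.3 + 0.2 * edu)         # mediator: high sweet intake
caries = rng.binomial(1, 0.2 + 0.1 * edu + 0.1 * sweet)
df = pd.DataFrame(dict(edu=edu, sweet=sweet, caries=caries))

# Stabilized weight for the mediator: P(M = m) / P(M = m | A).
num = smf.logit("sweet ~ 1", df).fit(disp=0).predict(df)
den = smf.logit("sweet ~ edu", df).fit(disp=0).predict(df)
w = np.where(df.sweet == 1, num / den, (1 - num) / (1 - den))

# Weighted Poisson (log-link) outcome model: controlled direct effect as a risk ratio.
msm = smf.glm("caries ~ edu + sweet", df, var_weights=w,
              family=sm.families.Poisson()).fit()
print("direct-effect risk ratio:", np.exp(msm.params["edu"]))
```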

  20. The impact and cost-effectiveness of nonavalent HPV vaccination in the United States: Estimates from a simplified transmission model

    OpenAIRE

    Chesson, Harrell W.; Markowitz, Lauri E.; Hariri, Susan; Ekwueme, Donatus U.; Saraiya, Mona

    2016-01-01

    Introduction: The objective of this study was to assess the incremental costs and benefits of the 9-valent HPV vaccine (9vHPV) compared with the quadrivalent HPV vaccine (4vHPV). Like 4vHPV, 9vHPV protects against HPV types 6, 11, 16, and 18. 9vHPV also protects against 5 additional HPV types 31, 33, 45, 52, and 58. Methods: We adapted a previously published model of the impact and cost-effectiveness of 4vHPV to include the 5 additional HPV types in 9vHPV. The vaccine strategies we examined w...

  1. Estimate the time varying brain receptor occupancy in PET imaging experiments using non-linear fixed and mixed effect modeling approach

    International Nuclear Information System (INIS)

    Zamuner, Stefano; Gomeni, Roberto; Bye, Alan

    2002-01-01

    Positron-Emission Tomography (PET) is an imaging technology currently used in drug development as a non-invasive measure of drug distribution and interaction with the biochemical target system. The level of receptor occupancy achieved by a compound can be estimated by comparing time-activity measurements in an experiment done using the tracer alone with the activity measured when the tracer is given following administration of unlabelled compound. The effective use of this surrogate marker as an enabling tool for drug development requires the definition of a model linking brain receptor occupancy with the fluctuation of plasma concentrations. However, the predictive performance of such a model is strongly related to the precision of the estimate of receptor occupancy evaluated in PET scans collected at different times following drug treatment. Several methods have been proposed for the analysis and quantification of the ligand-receptor interactions investigated from PET data. The aim of the present study is to evaluate alternative parameter estimation strategies based on the use of non-linear mixed effect models, allowing one to account for intra- and inter-subject variability in the time-activity data and for covariates potentially explaining this variability. A comparison of the different modeling approaches is presented using real data. The results of this comparison indicate that the mixed effect approach, with a primary model partitioning the variance in terms of Inter-Individual Variability (IIV) and Inter-Occasion Variability (IOV) and a second-stage model relating the changes in binding potential to the dose of unlabelled drug, is definitely the preferred approach.

  2. Examining effective use of data sources and modeling algorithms for improving biomass estimation in a moist tropical forest of the Brazilian Amazon

    Science.gov (United States)

    Yunyun Feng; Dengsheng Lu; Qi Chen; Michael Keller; Emilio Moran; Maiza Nara dos-Santos; Edson Luis Bolfe; Mateus Batistella

    2017-01-01

    Previous research has explored the potential to integrate lidar and optical data in aboveground biomass (AGB) estimation, but how different data sources, vegetation types, and modeling algorithms influence AGB estimation is poorly understood. This research conducts a comparative analysis of different data sources and modeling approaches in improving AGB estimation....

  3. The Use of Mixed Effects Models for Obtaining Low-Cost Ecosystem Carbon Stock Estimates in Mangroves of the Asia-Pacific

    Science.gov (United States)

    Bukoski, J. J.; Broadhead, J. S.; Donato, D.; Murdiyarso, D.; Gregoire, T. G.

    2016-12-01

    Mangroves provide extensive ecosystem services that support both local livelihoods and international environmental goals, including coastal protection, water filtration, biodiversity conservation and the sequestration of carbon (C). While voluntary C market projects that seek to preserve and enhance forest C stocks offer a potential means of generating finance for mangrove conservation, their implementation faces barriers due to the high costs of quantifying C stocks through measurement, reporting and verification (MRV) activities. To streamline MRV activities in mangrove C forestry projects, we develop predictive models for (i) biomass-based C stocks, and (ii) soil-based C stocks for the mangroves of the Asia-Pacific. We use linear mixed effect models to account for spatial correlation in modeling the expected C as a function of stand attributes. The most parsimonious biomass model predicts total biomass C stocks as a function of both basal area and the interaction between latitude and basal area, whereas the most parsimonious soil C model predicts soil C stocks as a function of the logarithmic transformations of both latitude and basal area. Random effects are specified by site for both models, and are found to explain a substantial proportion of variance within the estimation datasets. The root mean square error (RMSE) of the biomass C model is approximately 24.6 Mg/ha (18.4% of mean biomass C in the dataset), whereas the RMSE of the soil C model is estimated at 4.9 mg C/cm³ (14.1% of mean soil C). A substantial proportion of the variation in soil C, however, is explained by the random effects, and thus the use of the soil C model may be most valuable for sites in which field measurements of soil C exist.
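
    A minimal sketch of a site-level random intercept model in this spirit, fitted with statsmodels on synthetic data; the variable names, simulated values, and random-intercept-only structure are assumptions for illustration, not the paper's fitted model.

```python
# Minimal linear mixed effects sketch: soil C ~ log(latitude) + log(basal area),
# with a random intercept per site (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
sites = np.repeat(np.arange(10), 20)
lat = rng.uniform(1, 25, sites.size)                # degrees latitude (assumed)
ba = rng.uniform(5, 40, sites.size)                 # basal area, m2/ha (assumed)
site_eff = rng.normal(0, 1.0, 10)[sites]            # site-level random effect
soil_c = 10 - 2 * np.log(lat) + 3 * np.log(ba) + site_eff \
         + rng.normal(0, 1, sites.size)
df = pd.DataFrame(dict(soil_c=soil_c, lat=lat, ba=ba, site=sites))

m = smf.mixedlm("soil_c ~ np.log(lat) + np.log(ba)", df, groups=df["site"]).fit()
print(m.summary())
```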

  4. Parameter Estimation of Partial Differential Equation Models.

    Science.gov (United States)

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown, and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.

  5. A nonparametric mixture model for cure rate estimation.

    Science.gov (United States)

    Peng, Y; Dear, K B

    2000-03-01

    Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.

  6. Constraining the Influence of Natural Variability to Improve Estimates of Global Aerosol Indirect Effects in a Nudged Version of the Community Atmosphere Model 5

    Energy Technology Data Exchange (ETDEWEB)

    Kooperman, G. J.; Pritchard, M. S.; Ghan, Steven J.; Wang, Minghuai; Somerville, Richard C.; Russell, Lynn

    2012-12-11

    Natural modes of variability on many timescales influence aerosol particle distributions and cloud properties such that isolating statistically significant differences in cloud radiative forcing due to anthropogenic aerosol perturbations (indirect effects) typically requires integrating over long simulations. For state-of-the-art global climate models (GCM), especially those in which embedded cloud-resolving models replace conventional statistical parameterizations (i.e. multi-scale modeling framework, MMF), the required long integrations can be prohibitively expensive. Here an alternative approach is explored, which implements Newtonian relaxation (nudging) to constrain simulations with both pre-industrial and present-day aerosol emissions toward identical meteorological conditions, thus reducing differences in natural variability and dampening feedback responses in order to isolate radiative forcing. Ten-year GCM simulations with nudging provide a more stable estimate of the global-annual mean aerosol indirect radiative forcing than do conventional free-running simulations. The estimates have mean values and 95% confidence intervals of -1.54 ± 0.02 W/m² and -1.63 ± 0.17 W/m² for nudged and free-running simulations, respectively. Nudging also substantially increases the fraction of the world's area in which a statistically significant aerosol indirect effect can be detected (68% and 25% of the Earth's surface for nudged and free-running simulations, respectively). One-year MMF simulations with and without nudging provide global-annual mean aerosol indirect radiative forcing estimates of -0.80 W/m² and -0.56 W/m², respectively. The one-year nudged results compare well with previous estimates from three-year free-running simulations (-0.77 W/m²), which showed the aerosol-cloud relationship to be in better agreement with observations and high-resolution models than in the results obtained with conventional parameterizations.
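
    The nudging tendency itself is simple; the sketch below shows the Newtonian relaxation term added to a model's physics tendency, with an assumed time step and relaxation timescale rather than CAM5's actual settings.

```python
# Minimal Newtonian relaxation (nudging) sketch: the state is pulled toward a
# reference meteorology with timescale tau, so paired pre-industrial and
# present-day runs see near-identical dynamics. Values are illustrative.
dt = 1800.0          # model time step (s), assumed
tau = 6 * 3600.0     # relaxation timescale (s), an assumed typical choice

def step(u, physics_tendency, u_ref):
    """Advance one step with the nudging tendency (u_ref - u) / tau added."""
    return u + dt * (physics_tendency + (u_ref - u) / tau)

u = 285.0                                  # e.g. a temperature (K)
for _ in range(48):                        # one day of half-hour steps
    u = step(u, physics_tendency=-1e-5, u_ref=287.0)
print(u)
```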

  7. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing computational power, several complex optimization algorithms have emerged, but none of the algorithms gives a unique best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured, as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half space depth was used. The depth of the set of N randomly generated parameters was calculated with respect to the set with the best model performance (Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.

  8. Estimation of radiative effect of a heavy dust storm over northwest China using Fu–Liou model and ground measurements

    International Nuclear Information System (INIS)

    Wang, Wencai; Huang, Jianping; Zhou, Tian; Bi, Jianrong; Lin, Lei; Chen, Yonghang; Huang, Zhongwei; Su, Jing

    2013-01-01

    A heavy dust storm that occurred in Northwestern China during April 24-30, 2010 was studied using observational data along with the Fu–Liou radiative transfer model. The dust storm originated in Mongolia and affected more than 10 provinces of China. Our results showed that dust aerosols have a significant impact on the radiative energy budget. At the Minqin (102.959°E, 38.607°N) and Semi-Arid Climate and Environment Observatory of Lanzhou University (SACOL, 104.13°E, 35.95°N) sites, the net radiative forcing (RF) ranged from 5.93 to 35.7 W m⁻² at the top of the atmosphere (TOA), −6.3 to −30.94 W m⁻² at the surface, and 16.77 to 56.32 W m⁻² in the atmosphere. The maximum net radiative heating rate reached 5.89 K at 1.5 km on 24 April at the Minqin station and 4.46 K at 2.2 km on 29 April at the SACOL station. Our results also indicated that the radiative effect of dust aerosols is affected by aerosol optical depth (AOD), single-scattering albedo (SSA) and surface albedo. Modifications of the radiative energy budget by dust aerosols may have important implications for atmospheric circulation and regional climate. -- Highlights: ► Dust aerosols' optical properties and radiative effects were investigated. ► We have surface observations at Minqin and SACOL, where the heavy dust storm occurred. ► Accurate input parameters for the model were acquired from ground-based measurements. ► Aerosols' optical properties may have changed during transport.

  9. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit finite mixture models in the present paper in order to explore relationships in nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for sampled countries. The results show that there is a negative effect of rubber price on stock market price for Malaysia, Thailand, the Philippines and Indonesia.
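
    The sketch below fits a two-component normal mixture with a hand-rolled EM loop on synthetic data (not the stock-price and rubber-price series analyzed in the paper).

```python
# Minimal EM fit of a two-component normal mixture (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1, 0.5, 300), rng.normal(2, 1.0, 700)])

w, mu, sd = np.array([0.5, 0.5]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: responsibility of each component for each point.
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means and standard deviations.
    nk = r.sum(axis=0)
    w = nk / x.size
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
print(w, mu, sd)
```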

  10. Estimation of effects of photosynthesis response functions on rice yields and seasonal variation of CO2 fixation using a photosynthesis-sterility type of crop yield model

    International Nuclear Information System (INIS)

    Kaneko, D.; Moriwaki, Y.

    2008-01-01

    This study presents a crop production model improvement: the previously adopted Michaelis-Menten (MM) type photosynthesis response function (f_rad-MM) was replaced with a Prioul-Chartier (PC) type function (f_rad-PC). The authors' analysis reflects concerns regarding the background effect of global warming, under simultaneous conditions of high air temperature and strong solar radiation. The MM type function f_rad-MM can give excessive values, leading to an overestimate of the photosynthesis rate (PSN) and grain yield for paddy rice. The MM model is applicable to many plants whose PSN increases concomitantly with increased insolation: wheat, maize, soybean, etc. For paddy rice, the PSN apparently reaches a maximum. This paper shows that the MM model overestimated the PSN for paddy rice under sufficient solar radiation: the PC model yields PSN values about 10% lower. However, the unit crop production index (CPI_U) is almost independent of the choice between the MM and PC models because of the respective standardization of both PSN and the crop production index using the averages PSN_0 and CPI_0. The authors improved the estimation method using a photosynthesis-and-sterility based crop situation index (CSI_E) to produce a crop yield index (CYI_E), which is used to estimate rice yields in place of the crop situation index (CSI); the CSI gives the percentage of rice yield compared to normal annual production. The model calculates PSN including biomass effects, low-temperature sterility, and high-temperature injury by incorporating insolation, effective air temperature, the normalized difference vegetation index (NDVI), and the effects of temperature on photosynthesis. Based on routine observation data, the method enables automated crop-production monitoring in remote regions without special observations. This method can quantify grain production early enough to raise an alarm in Southeast Asian countries, which must confront climate fluctuation through this era of global...
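
    The contrast between the two response shapes can be sketched as follows: the Michaelis-Menten form saturates but keeps rising with irradiance, while a hump-shaped response caps and then declines, avoiding overestimation of PSN at high insolation. The hump function used here is a generic stand-in, since the exact Prioul-Chartier form is not given in this record.

```python
# Minimal sketch of the two light-response shapes (all constants assumed).
import numpy as np

def f_rad_mm(I, K=300.0):
    """Michaelis-Menten response, saturating toward 1 as irradiance grows."""
    return I / (I + K)

def f_rad_hump(I, I_opt=800.0):
    """Illustrative hump-shaped response with a maximum at I_opt (assumed form)."""
    return (I / I_opt) * np.exp(1.0 - I / I_opt)

I = np.linspace(0, 1500, 7)
print(f_rad_mm(I))     # keeps rising with I
print(f_rad_hump(I))   # peaks at I_opt, then declines
```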

  11. NEW MODEL FOR SOLAR RADIATION ESTIMATION FROM ...

    African Journals Online (AJOL)

    NEW MODEL FOR SOLAR RADIATION ESTIMATION FROM MEASURED AIR TEMPERATURE AND ... Nigerian Journal of Technology ... Solar radiation measurement is not sufficient in Nigeria for various reasons such as maintenance and ...

  12. Efficient Estimation of Non-Linear Dynamic Panel Data Models with Application to Smooth Transition Models

    DEFF Research Database (Denmark)

    Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan

    This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte...

  13. Parameter Estimation of Nonlinear Models in Forestry.

    OpenAIRE

    Fekedulegn, Desta; Mac Siúrtáin, Máirtín Pádraig; Colbert, Jim J.

    1999-01-01

    Partial derivatives of the negative exponential, monomolecular, Mitscherlich, Gompertz, logistic, Chapman-Richards, von Bertalanffy, Weibull and Richards nonlinear growth models are presented. The application of these partial derivatives in estimating the model parameters is illustrated. The parameters are estimated using the Marquardt iterative method of nonlinear regression, relating top height to age of Norway spruce (Picea abies L.) from the Bowmont Norway Spruce Thinnin...
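
    A minimal sketch of such a fit: the Chapman-Richards model H = a(1 - exp(-b t))^c is fitted to synthetic height-age data with the Levenberg-Marquardt algorithm via scipy (illustrative data, not the Norway spruce set).

```python
# Minimal Levenberg-Marquardt fit of the Chapman-Richards growth model.
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(t, a, b, c):
    return a * (1.0 - np.exp(-b * t)) ** c

age = np.arange(5, 85, 5, dtype=float)
rng = np.random.default_rng(0)
height = chapman_richards(age, 30.0, 0.04, 1.3) + rng.normal(0, 0.5, age.size)

params, cov = curve_fit(chapman_richards, age, height,
                        p0=[25.0, 0.05, 1.0], method="lm")
print("a, b, c =", params)
```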

  14. An Estimation of Construction and Demolition Debris in Seoul, Korea: Waste Amount, Type, and Estimating Model.

    Science.gov (United States)

    Seo, Seongwon; Hwang, Yongwoo

    1999-08-01

    Construction and demolition (C&D) debris is generated at the site of various construction activities. However, the amount of the debris is usually so large that it is necessary to estimate it as accurately as possible for effective waste management and control in urban areas. In this paper, an estimation method using a statistical model was proposed. The estimation process comprised the following steps: estimation of the life span of buildings; estimation of the floor area of buildings to be constructed and demolished; calculation of individual intensity units of C&D debris; and estimation of future C&D debris production. The method was applied to the city of Seoul as an actual case, and the estimated amount of C&D debris in Seoul in 2021 was approximately 24 million tons. Of this total amount, 98% was generated by demolition, and the main components of the debris were concrete and brick.

  15. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    ...cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...

  16. Semi-parametric estimation for ARCH models

    Directory of Open Access Journals (Sweden)

    Raed Alzghool

    2018-03-01

    Full Text Available In this paper, we conduct semi-parametric estimation for the autoregressive conditional heteroscedasticity (ARCH) model with quasi-likelihood (QL) and asymptotic quasi-likelihood (AQL) estimation methods. The QL approach relaxes the distributional assumptions of ARCH processes. The AQL technique is obtained from the QL method when the process conditional variance is unknown. We present an application of the methods to a daily exchange rate series. Keywords: ARCH model, Quasi likelihood (QL), Asymptotic Quasi-likelihood (AQL), Martingale difference, Kernel estimator

  17. Modeling a farm population to estimate on-farm compliance costs and environmental effects of a grassland extensification scheme at the regional scale

    DEFF Research Database (Denmark)

    Uthes, Sandra; Sattler, Claudia; Piorr, Annette

    2010-01-01

    We used a farm-level modeling approach to estimate on-farm compliance costs and environmental effects of a grassland extensification scheme in the district of Ostprignitz-Ruppin, Germany. The behavior of the regional farm population (n = 585), consisting of different farm types, was simulated. On-farm costs and environmental effects were heterogeneous in space and across farm types as a result of different agricultural production and site characteristics. On-farm costs ranged from zero up to almost 1500 Euro/ha. Such high costs occurred only in a very small part of the regional area, whereas the majority of the grassland had low on-farm costs below 50 Euro/ha. Environmental effects were moderate and greater on high-yield than on low-yield grassland. The low effectiveness combined with low on-farm costs in large parts of the region indicates that the scheme is not well targeted. The soft scheme design results from...

  18. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  19. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    Full Text Available With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable equipment for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from origin to destination without considering dynamic traffic information. In this paper, a dynamic travel time estimation model is presented that can collect and distribute traffic data based on connected vehicles. To estimate real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experiment results prove the effectiveness of the travel time estimation method.

  20. The impact and cost-effectiveness of nonavalent HPV vaccination in the United States: Estimates from a simplified transmission model.

    Science.gov (United States)

    Chesson, Harrell W; Markowitz, Lauri E; Hariri, Susan; Ekwueme, Donatus U; Saraiya, Mona

    2016-06-02

    The objective of this study was to assess the incremental costs and benefits of the 9-valent HPV vaccine (9vHPV) compared with the quadrivalent HPV vaccine (4vHPV). Like 4vHPV, 9vHPV protects against HPV types 6, 11, 16, and 18. 9vHPV also protects against 5 additional HPV types 31, 33, 45, 52, and 58. We adapted a previously published model of the impact and cost-effectiveness of 4vHPV to include the 5 additional HPV types in 9vHPV. The vaccine strategies we examined were (1) 4vHPV for males and females; (2) 9vHPV for females and 4vHPV for males; and (3) 9vHPV for males and females. In the base case, 9vHPV cost $13 more per dose than 4vHPV, based on available vaccine price information. Providing 9vHPV to females compared with 4vHPV for females (assuming 4vHPV for males in both scenarios) was cost-saving regardless of whether or not cross-protection for 4vHPV was assumed. The cost per quality-adjusted life year (QALY) gained by 9vHPV for both sexes (compared with 4vHPV for both sexes) was < $0 (cost-saving) when assuming no cross-protection for 4vHPV and $8,600 when assuming cross-protection for 4vHPV. Compared with a vaccination program of 4vHPV for both sexes, a vaccination program of 9vHPV for both sexes can improve health outcomes and can be cost-saving.

  1. Using marginal structural measurement-error models to estimate the long-term effect of antiretroviral therapy on incident AIDS or death.

    Science.gov (United States)

    Cole, Stephen R; Jacobson, Lisa P; Tien, Phyllis C; Kingsley, Lawrence; Chmiel, Joan S; Anastos, Kathryn

    2010-01-01

    To estimate the net effect of imperfectly measured highly active antiretroviral therapy on incident acquired immunodeficiency syndrome or death, the authors combined inverse probability-of-treatment-and-censoring weighted estimation of a marginal structural Cox model with regression-calibration methods. Between 1995 and 2007, 950 human immunodeficiency virus-positive men and women were followed in 2 US cohort studies. During 4,054 person-years, 374 initiated highly active antiretroviral therapy, 211 developed acquired immunodeficiency syndrome or died, and 173 dropped out. Accounting for measured confounders and determinants of dropout, the weighted hazard ratio for acquired immunodeficiency syndrome or death comparing use of highly active antiretroviral therapy in the prior 2 years with no therapy was 0.36 (95% confidence limits: 0.21, 0.61). This association was relatively constant over follow-up (P = 0.19) and stronger than crude or adjusted hazard ratios of 0.75 and 0.95, respectively. Accounting for measurement error in reported exposure using external validation data on 331 men and women provided a hazard ratio of 0.17, with bias shifted from the hazard ratio to the estimate of precision as seen by the 2.5-fold wider confidence limits (95% confidence limits: 0.06, 0.43). Marginal structural measurement-error models can simultaneously account for 3 major sources of bias in epidemiologic research: validated exposure measurement error, measured selection bias, and measured time-fixed and time-varying confounding.

  2. Counterfactual simulations applied to SHRP2 crashes: The effect of driver behavior models on safety benefit estimations of intelligent safety systems.

    Science.gov (United States)

    Bärgman, Jonas; Boda, Christian-Nils; Dozza, Marco

    2017-05-01

    As the development and deployment of in-vehicle intelligent safety systems (ISS) for crash avoidance and mitigation have rapidly increased in the last decades, the need to evaluate their prospective safety benefits before introduction has never been higher. Counterfactual simulations using relevant mathematical models (for vehicle dynamics, sensors, the environment, ISS algorithms, and models of driver behavior) have been identified as having high potential. However, although most of these models are relatively mature, models of driver behavior in the critical seconds before a crash are still relatively immature. There are also large conceptual differences between different driver models. The objective of this paper is, firstly, to demonstrate the importance of the choice of driver model when counterfactual simulations are used to evaluate two ISS: Forward collision warning (FCW), and autonomous emergency braking (AEB). Secondly, the paper demonstrates how counterfactual simulations can be used to perform sensitivity analyses on parameter settings, both for driver behavior and ISS algorithms. Finally, the paper evaluates the effect of the choice of glance distribution in the driver behavior model on the safety benefit estimation. The paper uses pre-crash kinematics and driver behavior from 34 rear-end crashes from the SHRP2 naturalistic driving study for the demonstrations. The results for FCW show a large difference in the percent of avoided crashes between conceptually different models of driver behavior, while differences were small for conceptually similar models. As expected, the choice of model of driver behavior did not affect AEB benefit much. Based on our results, researchers and others who aim to evaluate ISS with the driver in the loop through counterfactual simulations should be sure to make deliberate and well-grounded choices of driver models: the choice of model matters. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Conditional shape models for cardiac motion estimation

    DEFF Research Database (Denmark)

    Metz, Coert; Baka, Nora; Kirisli, Hortense

    2010-01-01

    We propose a conditional statistical shape model to predict patient specific cardiac motion from the 3D end-diastolic CTA scan. The model is built from 4D CTA sequences by combining atlas based segmentation and 4D registration. Cardiac motion estimation is, for example, relevant in the dynamic...

  4. FUZZY MODELING BY SUCCESSIVE ESTIMATION OF RULES ...

    African Journals Online (AJOL)

    This paper presents an algorithm for automatically deriving fuzzy rules directly from a set of input-output data of a process for the purpose of modeling. The rules are extracted by a method termed successive estimation. This method is used to generate a model without truncating the number of fired rules, to within user ...

  5. Robust estimation for ordinary differential equation models.

    Science.gov (United States)

    Cao, J; Wang, L; Xu, J

    2011-12-01

    Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.
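
    A minimal sketch of the penalized-smoothing idea on a simpler problem: the trajectory is represented by a basis expansion (a polynomial here, rather than the paper's spline basis), an ODE-fidelity penalty is added, and a robust loss guards against outliers. The logistic ODE x' = r x (1 - x/K) stands in for the paper's predator-prey system.

```python
# Minimal robust, ODE-penalized smoothing sketch (illustrative, simplified).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 10.0, 40)
y = 10 / (1 + 9 * np.exp(-0.8 * t_obs)) + rng.normal(0, 0.2, t_obs.size)
y[::10] += 3.0                                        # inject a few outliers

t_pen = np.linspace(0.0, 10.0, 100)                   # ODE-penalty points
s_obs, s_pen = t_obs / 10.0, t_pen / 10.0             # rescaled time in [0, 1]
deg = 8                                               # polynomial basis degree

def residuals(p, lam=10.0):
    r, K, c = p[0], p[1], p[2:]
    res_data = y - np.polyval(c, s_obs)               # fidelity to the data
    x = np.polyval(c, s_pen)
    dxdt = np.polyval(np.polyder(c), s_pen) / 10.0    # chain rule for rescaling
    res_ode = dxdt - r * x * (1 - x / K)              # fidelity to the ODE
    return np.concatenate([res_data, np.sqrt(lam) * res_ode])

p0 = np.concatenate([[0.5, 8.0], np.zeros(deg + 1)])
fit = least_squares(residuals, p0, loss="soft_l1")    # robust to the outliers
print("estimated r, K:", fit.x[0], fit.x[1])
```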

  6. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

    A robust face pose estimation approach is proposed using a face shape statistical model, with pose parameters represented by trigonometric functions. The face shape statistical model is first built by analyzing face shapes from different people under varying poses. Shape alignment is vital in the process of building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, a mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.

  7. Comparing fixed effects and covariance structure estimators for panel data

    DEFF Research Database (Denmark)

    Ejrnæs, Mette; Holm, Anders

    2006-01-01

    In this article, the authors compare the traditional econometric fixed effect estimator with the maximum likelihood estimator implied by covariance structure models for panel data. Their findings are that the maximum likelihood estimator is remarkably robust to certain types of misspecifications...

  8. Online State Space Model Parameter Estimation in Synchronous Machines

    Directory of Open Access Journals (Sweden)

    Z. Gallehdari

    2014-06-01

    The suggested approach is evaluated for a sample synchronous machine model. Estimated parameters are tested for different inputs at different operating conditions. The effect of noise is also considered in this study. Simulation results show that the proposed approach provides good accuracy for parameter estimation.

  9. Direct Importance Estimation with Gaussian Mixture Models

    Science.gov (United States)

    Yamada, Makoto; Sugiyama, Masashi

    The ratio of two probability densities is called the importance and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method — which we call the Gaussian mixture KLIEP (GM-KLIEP) — is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
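
    For contrast, the sketch below shows a naive density-ratio baseline: fit a Gaussian mixture to each sample separately and take the ratio of the two estimated densities. Note this is not the GM-KLIEP procedure, which fits the ratio model directly by EM.

```python
# Naive importance (density-ratio) sketch with two separately fitted GMMs.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
x_nu = rng.normal(0.0, 1.0, (500, 1))     # numerator sample
x_de = rng.normal(0.5, 1.2, (500, 1))     # denominator sample

gm_nu = GaussianMixture(n_components=2, random_state=0).fit(x_nu)
gm_de = GaussianMixture(n_components=2, random_state=0).fit(x_de)

x_test = np.linspace(-3, 3, 7).reshape(-1, 1)
# score_samples returns log-densities, so the ratio is an exp of a difference.
w = np.exp(gm_nu.score_samples(x_test) - gm_de.score_samples(x_test))
print(w)                                  # importance weights p_nu(x) / p_de(x)
```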

  10. The effect of selection on genetic parameter estimates

    African Journals Online (AJOL)

    Unknown

    A simulation study was carried out to investigate the effect of selection on the estimation of genetic ... The model contained a fixed effect, random genetic and random...

  11. A spatio-temporal model for estimating the long-term effects of air pollution on respiratory hospital admissions in Greater London.

    Science.gov (United States)

    Rushworth, Alastair; Lee, Duncan; Mitchell, Richard

    2014-07-01

    It has long been known that air pollution is harmful to human health, as many epidemiological studies have been conducted into its effects. Collectively, these studies have investigated both the acute and chronic effects of pollution, with the latter typically based on individual level cohort designs that can be expensive to implement. As a result of the increasing availability of small-area statistics, ecological spatio-temporal study designs are also being used, with which a key statistical problem is allowing for residual spatio-temporal autocorrelation that remains after the covariate effects have been removed. We present a new model for estimating the effects of air pollution on human health, which allows for residual spatio-temporal autocorrelation, and a study into the long-term effects of air pollution on human health in Greater London, England. The individual and joint effects of different pollutants are explored, via the use of single pollutant models and multiple pollutant indices. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Thresholding projection estimators in functional linear models

    OpenAIRE

    Cardot, Hervé; Johannes, Jan

    2010-01-01

    We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimator which combines dimension reduction and thresholding. The introduction of a threshold rule allows one to obtain consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis, which permits one to easily get mean squ...

  13. Clock error models for simulation and estimation

    International Nuclear Information System (INIS)

    Meditch, J.S.

    1981-10-01

    Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction.
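
    A minimal sketch of the Kalman filtering side: a two-state clock model (phase and frequency offsets driven by white noise) is simulated and filtered. The noise intensities are assumed for illustration and do not correspond to a specific oscillator.

```python
# Minimal two-state clock error model with a Kalman filter (assumed noise levels).
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])          # phase integrates frequency
Q = np.diag([1e-22, 1e-24])                    # process noise (assumed)
H = np.array([[1.0, 0.0]])                     # we observe phase only
R = np.array([[1e-18]])                        # measurement noise (assumed)

rng = np.random.default_rng(0)
x_true = np.zeros(2)
x_hat, P = np.zeros(2), np.eye(2) * 1e-16

for _ in range(100):
    # Simulate the clock and a phase measurement.
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]))
    # Kalman predict/update.
    x_hat = F @ x_hat
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + (K @ (z - H @ x_hat)).ravel()
    P = (np.eye(2) - K @ H) @ P
print("estimated phase/frequency offsets:", x_hat)
```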

  14. An integrated Bayesian model for estimating the long-term health effects of air pollution by fusing modelled and measured pollution data: A case study of nitrogen dioxide concentrations in Scotland.

    Science.gov (United States)

    Huang, Guowen; Lee, Duncan; Scott, Marian

    2015-01-01

    The long-term health effects of air pollution can be estimated using a spatio-temporal ecological study, where the disease data are counts of hospital admissions from populations in small areal units at yearly intervals. Spatially representative pollution concentrations for each areal unit are typically estimated by applying Kriging to data from a sparse monitoring network, or by computing averages over grid level concentrations from an atmospheric dispersion model. We propose a novel fusion model for estimating spatially aggregated pollution concentrations using both the modelled and monitored data, and relate these concentrations to respiratory disease in a new study in Scotland between 2007 and 2011. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Estimating network effect in geocenter motion: Theory

    Science.gov (United States)

    Zannat, Umma Jamila; Tregoning, Paul

    2017-10-01

    Geophysical models and their interpretations of several processes of interest, such as sea level rise, postseismic relaxation, and glacial isostatic adjustment, are intertwined with the need to realize the International Terrestrial Reference Frame. However, this realization needs to take into account the geocenter motion, that is, the motion of the center of figure of the Earth surface, due to, for example, deformation of the surface by earthquakes or hydrological loading effects. Usually, there is also a discrepancy, known as the network effect, between the theoretically convenient center of figure and the physically accessible center of network frames, because of unavoidable factors such as uneven station distribution, lack of stations in the oceans, disparity in the coverage between the two hemispheres, and the existence of tectonically deforming zones. Here we develop a method to estimate the magnitude of the network effect, that is, the error introduced by the incomplete sampling of the Earth surface, in measuring the geocenter motion, for a network of space geodetic stations of a fixed size N. For this purpose, we use, as our proposed estimate, the standard deviations of the changes in Helmert parameters measured by a random network of the same size N. We show that our estimate scales as 1/√N and give an explicit formula for it in terms of the vector spherical harmonics expansion of the displacement field. In a complementary paper we apply this formalism to coseismic displacements and elastic deformations due to surface water movements.

  16. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists; however, it is not always clear which is the appropriate method to choose. To this end, three approaches to estimation in the theta-logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...

  17. Estimation and uncertainty of reversible Markov models.

    Science.gov (United States)

    Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank

    2015-11-07

    Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is, therefore, crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices, with and without a given stationary vector, taking into account the need for a suitable prior distribution that preserves the meta-stable features of the observed process during posterior inference. All algorithms here are implemented in the PyEMMA software (http://pyemma.org) as of version 2.0.
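    For orientation, a minimal sketch of one well-known fixed-point iteration for the reversible maximum likelihood transition matrix given a count matrix C; this is illustrative only and is not the PyEMMA implementation:

```python
# Sketch of the classic fixed-point iteration for the reversible MLE of a
# transition matrix from a transition count matrix C (PyEMMA provides
# production-quality estimators; this is a teaching version).
import numpy as np

def reversible_mle(C, n_iter=1000):
    C = np.asarray(C, dtype=float)
    c_i = C.sum(axis=1)                      # outgoing counts per state
    X = C + C.T                              # symmetric initial guess
    for _ in range(n_iter):
        x_i = X.sum(axis=1)
        # Update rule: x_ij <- (c_ij + c_ji) / (c_i/x_i + c_j/x_j)
        denom = c_i[:, None] / x_i[:, None] + c_i[None, :] / x_i[None, :]
        X = (C + C.T) / denom
    X /= X.sum()
    T = X / X.sum(axis=1, keepdims=True)     # row-stochastic transition matrix
    pi = X.sum(axis=1)                       # stationary distribution
    return T, pi

C = np.array([[90, 10, 0], [8, 80, 12], [0, 15, 85]])
T, pi = reversible_mle(C)
print(np.allclose(pi[:, None] * T, (pi[:, None] * T).T))  # detailed balance holds
```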

  18. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom)]; Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, i.e., when in expectation each comparison set of a given cardinality occurs the same number of times, the mean squared error for a broad class of Thurstone choice models decreases with the cardinality of comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report an empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from popular sport competitions and online labor platforms.
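    For the pair-comparison special case mentioned above (Bradley-Terry), the maximum likelihood strengths can be computed with the classic minorization-maximization iteration; a hedged sketch with invented data:

```python
# Illustration for the Bradley-Terry special case: Hunter's MM algorithm for
# the maximum likelihood strength parameters. The win counts are made up.
import numpy as np

def bradley_terry_mm(wins, n_iter=200):
    """wins[i, j] = number of times item i beat item j."""
    wins = np.asarray(wins, dtype=float)
    n = wins + wins.T                       # comparisons per pair
    W = wins.sum(axis=1)                    # total wins per item
    w = np.ones(len(W))
    for _ in range(n_iter):
        denom = n / (w[:, None] + w[None, :])
        np.fill_diagonal(denom, 0.0)
        w = W / denom.sum(axis=1)
        w /= w.sum()                        # fix the scale (strengths are relative)
    return w

wins = np.array([[0, 7, 9], [3, 0, 6], [1, 4, 0]])
print(bradley_terry_mm(wins))               # estimated relative strengths
```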

  19. Effects of halogenated aromatics/aliphatics and nitrogen(N)-heterocyclic aromatics on estimating the persistence of future pharmaceutical compounds using a modified QSAR model.

    Science.gov (United States)

    Lim, Seung Joo; Fox, Peter

    2014-02-01

    The effects of halogenated aromatics/aliphatics and nitrogen(N)-heterocyclic aromatics on estimating the persistence of future pharmaceutical compounds were investigated using a modified half-life equation. The potential future pharmaceutical compounds investigated were approximately 2000 pharmaceutical drugs currently undergoing United States Food and Drug Administration (US FDA) testing. The EPI Suite (BIOWIN) model estimates the fate of compounds based on their biodegradability under aerobic conditions. While BIOWIN considers the biodegradability of a compound only, the half-life equation used in this study was modified to account for biodegradability, sorption and cometabolic oxidation. The persistence of the potential future pharmaceutical compounds could thus be estimated more accurately using the modified half-life equation, which considers sorption and cometabolic oxidation of halogenated aromatics/aliphatics and nitrogen(N)-heterocyclic aromatics in the sub-surface, while EPI Suite (BIOWIN) does not. Halogenated aliphatics were more persistent than halogenated aromatics in the sub-surface. In addition, in the sub-surface environment, the fate of organic chemicals was affected much more by halogenation than by nitrogen(N)-heterocyclic aromatics. © 2013.

  20. Estimating the effect of lay knowledge and prior contact with pulmonary TB patients, on health-belief model in a high-risk pulmonary TB transmission population.

    Science.gov (United States)

    Zein, Rizqy Amelia; Suhariadi, Fendy; Hendriani, Wiwin

    2017-01-01

    The research aimed to investigate the effect of lay knowledge of pulmonary tuberculosis (TB) and prior contact with pulmonary TB patients on a health-belief model (HBM), as well as to identify the social determinants that affect lay knowledge. A survey research design was used, in which participants filled in a questionnaire measuring HBM constructs and lay knowledge of pulmonary TB. Research participants were 500 residents of the Semampir, Asemrowo, Bubutan, Pabean Cantian, and Simokerto districts, where the risk of pulmonary TB transmission is higher than in other districts of Surabaya. Being female, being older, and having prior contact with pulmonary TB patients significantly increase the likelihood of having a higher level of lay knowledge. Lay knowledge is a substantial determinant of belief in the effectiveness of health behavior and in the personal health threat. Prior contact with pulmonary TB patients explains belief in the effectiveness of a health behavior, yet fails to explain participants' belief in the personal health threat. Health authorities should prioritize males and young people as the main target groups in a pulmonary TB awareness campaign. The campaign should reconstruct people's misconceptions about pulmonary TB and thereby reshape health-risk perception, rather than focusing solely on improving lay knowledge.

  1. Estimation of curve number by DAWAST model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tai Cheol; Park, Seung Ki; Moon, Jong Pil [Chungnam National University, Taejon (Korea, Republic of)]

    1997-10-31

    Determining the effective rainfall is one of the most important steps in estimating the flood hydrograph at the design stage. The SCS curve number (CN) method has frequently been used to estimate the effective rainfall of synthesized design flood hydrographs for hydraulic structures. However, caution is needed when applying the SCS-CN method, originally developed in the USA, to watersheds in Korea, because watershed characteristics and cropping patterns in Korea, especially paddy cultivation, differ considerably from those in the USA. A new CN method has therefore been introduced. The maximum storage capacity, defined here as U_max, can be calibrated from streamflow data and converted to a new CN-I for the driest soil moisture condition in the given watershed. Effective rainfall for the design flood hydrograph can then be estimated with curve numbers developed for watersheds in Korea. (author). 14 refs., 5 tabs., 3 figs.
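    For context, the standard SCS-CN effective-rainfall relation that the record builds on can be sketched as follows (the DAWAST calibration of U_max itself is not reproduced here):

```python
# Standard SCS curve number runoff relation: Q = (P - Ia)^2 / (P - Ia + S),
# with S = 25400/CN - 254 (mm) and Ia = 0.2 * S.
def effective_rainfall(P_mm, CN):
    """SCS-CN effective rainfall (runoff) Q in mm for storm rainfall P in mm."""
    S = 25400.0 / CN - 254.0          # potential maximum retention (mm)
    Ia = 0.2 * S                      # initial abstraction
    if P_mm <= Ia:
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

print(effective_rainfall(100.0, 75))  # e.g. ~41 mm of runoff for CN = 75
```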

  2. Sparse estimation of polynomial dynamical models

    NARCIS (Netherlands)

    Toth, R.; Hjalmarsson, H.; Rojas, C.R.; Kinnaert, M.

    2012-01-01

    In many practical situations, it is highly desirable to estimate an accurate mathematical model of a real system using as few parameters as possible. This can be motivated either by appeal to a parsimony principle (Occam's razor) or from the viewpoint of the utilization complexity in terms of

  3. A General Model for Estimating Macroevolutionary Landscapes.

    Science.gov (United States)

    Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef

    2018-03-01

    The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].

  4. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear-powered electrical generating plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + theta/(lambda+theta) exp[-((1/lambda)+(1/theta))t] for t > 0. Also, the steady-state availability is A(infinity) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions of those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
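    A minimal sketch of the plug-in estimation step described above, assuming lambda and theta denote the exponential means so that their MLEs are the sample means (data below are synthetic):

```python
# Plug-in MLE of availability for exponential time-to-failure (mean lambda)
# and time-to-repair (mean theta), using the formula quoted in the record.
import numpy as np

rng = np.random.default_rng(2)
X = rng.exponential(scale=120.0, size=50)   # observed times to failure (hours)
Y = rng.exponential(scale=8.0, size=50)     # observed times to repair (hours)

lam_hat, th_hat = X.mean(), Y.mean()        # maximum likelihood estimates

def availability(t, lam, th):
    ss = lam / (lam + th)                   # steady-state availability A(inf)
    return ss + (th / (lam + th)) * np.exp(-(1.0 / lam + 1.0 / th) * t)

print(f"A(24 h) ~= {availability(24.0, lam_hat, th_hat):.3f}, "
      f"A(inf) ~= {lam_hat / (lam_hat + th_hat):.3f}")
```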

  5. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
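    A minimal sketch of sparse estimation in the p >> n regime with the LASSO, using scikit-learn on synthetic data with a handful of truly active variables:

```python
# LASSO in the p >> n setting: n = 100 samples, p = 2000 variables, 5 of which
# actually enter the linear model; the penalty is chosen by cross-validation.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
n, p = 100, 2000
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]           # sparse ground truth
y = X @ beta + rng.standard_normal(n)

model = LassoCV(cv=5).fit(X, y)
print("nonzero coefficients:", np.flatnonzero(model.coef_)[:10])
```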

  6. A marginal structural model to estimate the causal effect of antidepressant medication treatment on viral suppression among homeless and marginally housed persons with HIV.

    Science.gov (United States)

    Tsai, Alexander C; Weiser, Sheri D; Petersen, Maya L; Ragland, Kathleen; Kushel, Margot B; Bangsberg, David R

    2010-12-01

    Depression strongly predicts nonadherence to human immunodeficiency virus (HIV) antiretroviral therapy, and adherence is essential to maintaining viral suppression. This suggests that pharmacologic treatment of depression may improve virologic outcomes. However, previous longitudinal observational analyses have inadequately adjusted for time-varying confounding by depression severity, which could yield biased estimates of treatment effect. Application of marginal structural modeling to longitudinal observational data can, under certain assumptions, approximate the findings of a randomized controlled trial. To determine whether antidepressant medication treatment increases the probability of HIV viral suppression. Community-based prospective cohort study with assessments conducted every 3 months. Community-based research field site in San Francisco, California. One hundred fifty-eight homeless and marginally housed persons with HIV who met baseline immunologic (CD4+ T-lymphocyte count, 13) inclusion criteria, observed from April 2002 through August 2007. Probability of achieving viral suppression to less than 50 copies/mL. Secondary outcomes of interest were the probability of being on an antiretroviral therapy regimen, 7-day self-reported percentage adherence to antiretroviral therapy, and the probability of reporting complete (100%) adherence. Marginal structural models estimated a 2.03-fold greater odds of achieving viral suppression (95% confidence interval [CI], 1.15-3.58; P = .02) resulting from antidepressant medication treatment. In addition, antidepressant medication use increased the probability of antiretroviral uptake (weighted odds ratio, 3.87; 95% CI, 1.98-7.58). The effect is likely attributable to improved adherence to a continuum of HIV care, including increased uptake of and adherence to antiretroviral therapy.
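    A heavily simplified, single-time-point sketch of the marginal structural model idea (the actual analysis weights repeated visits over time); all data and coefficients below are invented:

```python
# Sketch of inverse-probability-of-treatment weighting: weight each subject by
# the stabilized inverse probability of the treatment actually received given
# the confounder, then fit a weighted outcome model on the pseudo-population.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000
depress = rng.normal(size=n)                          # confounder (severity)
p_treat = 1 / (1 + np.exp(-(-0.5 + 1.0 * depress)))   # sicker -> more treatment
treat = rng.binomial(1, p_treat)
p_supp = 1 / (1 + np.exp(-(0.2 + 0.7 * treat - 0.8 * depress)))
suppressed = rng.binomial(1, p_supp)                  # outcome: viral suppression

# Treatment model for the weights (denominator conditions on the confounder)
denom = sm.Logit(treat, sm.add_constant(depress)).fit(disp=0).predict()
numer = treat.mean()                                  # stabilizing numerator
w = np.where(treat == 1, numer / denom, (1 - numer) / (1 - denom))

# Weighted outcome model: the treatment coefficient now approximates the
# marginal (causal) log-odds ratio, here with true value log(2.01) ~ 0.7.
msm = sm.GLM(suppressed, sm.add_constant(treat),
             family=sm.families.Binomial(), freq_weights=w).fit()
print("causal OR estimate:", np.exp(msm.params[1]))
```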

  7. Aerosol direct radiative effects over the northwest Atlantic, northwest Pacific, and North Indian Oceans: estimates based on in-situ chemical and optical measurements and chemical transport modeling

    Directory of Open Access Journals (Sweden)

    T. S. Bates

    2006-01-01

    Full Text Available The largest uncertainty in the radiative forcing of climate change over the industrial era is that due to aerosols, a substantial fraction of which is the uncertainty associated with scattering and absorption of shortwave (solar) radiation by anthropogenic aerosols in cloud-free conditions (IPCC, 2001). Quantifying and reducing the uncertainty in aerosol influences on climate is critical to understanding climate change over the industrial period and to improving predictions of future climate change for assumed emission scenarios. Measurements of aerosol properties during major field campaigns in several regions of the globe during the past decade are contributing to an enhanced understanding of atmospheric aerosols and their effects on light scattering and climate. The present study, which focuses on three regions downwind of major urban/population centers (the North Indian Ocean (NIO) during INDOEX, the Northwest Pacific Ocean (NWP) during ACE-Asia, and the Northwest Atlantic Ocean (NWA) during ICARTT), incorporates understanding gained from field observations of aerosol distributions and properties into calculations of perturbations in radiative fluxes due to these aerosols. This study evaluates the current state of observations and of two chemical transport models (STEM and MOZART). Measurements of burdens, extinction optical depth (AOD), and direct radiative effect of aerosols (DRE; the change in radiative flux due to total aerosols) are used as measurement-model check points to assess uncertainties. In-situ measured and remotely sensed aerosol properties for each region (mixing state, mass scattering efficiency, single scattering albedo, and angular scattering properties and their dependences on relative humidity) are used as input parameters to two radiative transfer models (GFDL and University of Michigan) to constrain estimates of aerosol radiative effects, with uncertainties in each step propagated through the analysis. Constraining the radiative

  8. Estimating Coastal Digital Elevation Model (DEM) Uncertainty

    Science.gov (United States)

    Amante, C.; Mesick, S.

    2017-12-01

    Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.

  9. Consistent Estimation of Partition Markov Models

    Directory of Open Access Journals (Sweden)

    Jesús E. García

    2017-04-01

    Full Text Available The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated. In order to answer these questions, we build a consistent strategy for model selection which consists of: given a size-n realization of the process, finding a model within the Partition Markov class with a minimal number of parts to represent the law of the process. From the strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, eventually, as n goes to infinity, L will be retrieved. We show an application to modeling internet navigation patterns.

  10. Los Alamos Waste Management Cost Estimation Model

    International Nuclear Information System (INIS)

    Matysiak, L.M.; Burns, M.L.

    1994-03-01

    This final report completes the Los Alamos Waste Management Cost Estimation Project, and includes the documentation of the waste management processes at Los Alamos National Laboratory (LANL) for hazardous, mixed, low-level radioactive solid and transuranic waste, development of the cost estimation model and a user reference manual. The ultimate goal of this effort was to develop an estimate of the life cycle costs for the aforementioned waste types. The Cost Estimation Model is a tool that can be used to calculate the costs of waste management at LANL for the aforementioned waste types, under several different scenarios. Each waste category at LANL is managed in a separate fashion, according to Department of Energy requirements and state and federal regulations. The cost of the waste management process for each waste category has not previously been well documented. In particular, the costs associated with the handling, treatment and storage of the waste have not been well understood. It is anticipated that greater knowledge of these costs will encourage waste generators at the Laboratory to apply waste minimization techniques to current operations. Expected benefits of waste minimization are a reduction in waste volume, decrease in liability and lower waste management costs

  11. The use of long-term observations in combination with modeling and their effect on the estimation of the North Sea storm surge climate

    Energy Technology Data Exchange (ETDEWEB)

    Aspelien, T.

    2006-07-01

    The aim of this PhD thesis is to design, implement and assess a method to combine long-term observations with multi-decadal model simulations. In this work a computationally cost-efficient nudging method, well suited for multi-decadal simulations, is chosen. First, the nudging method was tested for its sensitivity to different parameters. Then the long-term observations of sea level height from the UK tide gauge Aberdeen were combined with a multi-decadal hindcast for the North Sea. Compared to a control simulation, in which no observed values of sea level height were combined with the model, the nudging method generally improves the modeled water levels with respect to the observed values, especially for surge. The estimates of long-term fluctuations and of biases in extreme high-water values in the nudged simulation are generally improved considerably and are closer to the observations. The effect is largest in the German Bight and at the west coast of Denmark. It is concluded that the cost-efficient nudging method, in which external processes, such as external surges, are additionally taken into account, provides a considerable improvement in reproducing long-term variations and trends, especially for surge. Without additional data, e.g. observed values from tide gauges, taken into account, the meteorologically induced long-term variations in a hindcast are not fully captured. (orig.)
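    The nudging (Newtonian relaxation) idea can be sketched conceptually as follows; the relaxation coefficient G and the time step below are invented for illustration:

```python
# Conceptual sketch of nudging: the modelled sea level is relaxed towards the
# tide-gauge observation wherever and whenever an observation exists.
def nudged_step(eta_model, tendency, eta_obs, G, dt):
    """One time step of a nudged variable.

    eta_model : current modelled sea level at the gauge location (m)
    tendency  : physical model tendency d(eta)/dt (m/s)
    eta_obs   : observed sea level (m), or None if no observation at this step
    G         : relaxation coefficient (1/s), the inverse relaxation time scale
    """
    d_eta = tendency
    if eta_obs is not None:
        d_eta += G * (eta_obs - eta_model)   # relax towards the observation
    return eta_model + dt * d_eta

# e.g. a 10-minute step nudged with an assumed ~6 h relaxation time scale:
print(nudged_step(0.50, 0.0, eta_obs=0.62, G=1 / (6 * 3600), dt=600))
```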

  12. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher-fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used as the CAST software is unavailable. The main source of spacecraft dynamics error in the higher-fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. The signal generation model has characteristics (mean, variance and power spectral density) similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher-fidelity spacecraft dynamics modeling from the CAST software.
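    A hypothetical sketch of such a signal generation model: shape white noise in the frequency domain to a prescribed power spectral density, then rescale to the target mean and variance (the 1/f² PSD and all numbers are assumptions, not SMAP values):

```python
# Generate a random error signal with a prescribed mean, variance, and PSD by
# shaping white noise in the frequency domain.
import numpy as np

rng = np.random.default_rng(5)
n, dt = 4096, 0.1
freqs = np.fft.rfftfreq(n, dt)
target_psd = np.zeros_like(freqs)
target_psd[1:] = 1.0 / freqs[1:] ** 2            # assumed estimation-error PSD

white = np.fft.rfft(rng.standard_normal(n))
shaped = np.fft.irfft(white * np.sqrt(target_psd), n)

# Rescale to the estimation error's measured mean and standard deviation
target_mean, target_std = 0.0, 0.01              # assumed values
shaped = target_mean + target_std * (shaped - shaped.mean()) / shaped.std()
print(shaped.mean(), shaped.std())               # match by construction
```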

  13. The Use of Piecewise Growth Models to Estimate Learning Trajectories and RtI Instructional Effects in a Comparative Interrupted Time-Series Design

    Science.gov (United States)

    Zvoch, Keith

    2016-01-01

    Piecewise growth models (PGMs) were used to estimate and model changes in the preliteracy skill development of kindergartners in a moderately sized school district in the Pacific Northwest. PGMs were applied to interrupted time-series (ITS) data that arose within the context of a response-to-intervention (RtI) instructional framework. During the…

  14. Estimating the long-term effects of in vitro fertilization in Greece: an analysis based on a lifetime-investment model

    Directory of Open Access Journals (Sweden)

    Fragoulakis V

    2013-06-01

    Full Text Available Vassilis Fragoulakis, Nikolaos Maniadakis; National School of Public Health, Department of Health Services Management, Athens, Greece. Objective: To quantify the economic effects of a child conceived by in vitro fertilization (IVF) in terms of net tax revenue from the state's perspective in Greece. Methods: Based on previous international experience, a mathematical model was developed to assess the lifetime productivity of a single individual and his/her lifetime transactions with governmental agencies. The model distinguished among three periods in the economic life cycle of an individual: (1) early life, when the government primarily contributes resources through child tax credits, health care, and educational expenses; (2) employment, when individuals begin returning resources through taxes; and (3) retirement, when the government expends additional resources on pensions and health care. The cost of a live birth with IVF was based on the modification of a previously published model developed by the authors. All outcomes were discounted at a 3% discount rate. The data inputs – namely, the economic or demographic variables – were derived from the National Statistical Secretariat of Greece and other relevant sources. To deal with uncertainty, bias-corrected uncertainty intervals (UIs) were calculated based on 5000 Monte Carlo simulations. In addition, to examine the robustness of our results, other one-way sensitivity analyses were also employed. Results: The cost of IVF per birth was estimated at €17,015 (95% UI: €13,932–€20,200). The average projected income generated by an individual throughout his/her productive life was €258,070 (95% UI: €185,376–€339,831). In addition, his/her life tax contribution was estimated at €133,947 (95% UI: €100,126–€177,375), while the discounted governmental expenses for elderly and underage individuals were €67,624 (95% UI: €55,211–€83,930). Hence, the net present value of IVF was €60
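    The life-cycle logic of the model reduces to a discounted cash-flow calculation; a schematic sketch with invented placeholder amounts, discounted at the paper's 3% rate:

```python
# Schematic three-period life-cycle NPV from the state's perspective.
# All cash flows below are invented placeholders, not the paper's inputs.
DISCOUNT = 0.03

def npv(cash_flows):
    """Net present value of (year, amount) pairs."""
    return sum(a / (1 + DISCOUNT) ** y for y, a in cash_flows)

flows = (
    [(y, -4000) for y in range(0, 18)]      # childhood: education, health, credits
    + [(y, +9000) for y in range(22, 65)]   # employment: taxes paid
    + [(y, -7000) for y in range(65, 80)]   # retirement: pensions, health care
)
print(f"net present value per individual: EUR {npv(flows):,.0f}")
```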

  15. Study on the estimation of probabilistic effective dose. Committed effective dose from intake of marine products using Oceanic General Circulation Model

    International Nuclear Information System (INIS)

    Nakano, Masanao

    2007-01-01

    Worldwide environmental protection is required by the public. Long-term environmental assessment of releases from nuclear fuel cycle facilities to the aquatic environment is also becoming more important for utilizing nuclear energy more efficiently. Evaluation of long-term risk, not only in Japan but also in neighboring countries, is considered necessary for the development of the nuclear power industry. The author successfully simulated the distribution of radionuclides in seawater and seabed sediment produced by atmospheric nuclear tests using LAMER (Long-term Assessment ModEl for Radioactivity in the oceans). One part of LAMER calculates the advection-diffusion-scavenging processes for radionuclides in the oceans and the Japan Sea in cooperation with an Oceanic General Circulation Model (OGCM), and was validated. In the other part of LAMER, the author calculates the probabilistic effective dose, as suggested by the ICRP, from intake of marine products due to atmospheric nuclear tests, using the Monte Carlo method. Depending on the deviation of each parameter, the 95th percentile of the probabilistic effective dose was calculated to be about half of the 95th percentile of the deterministic effective dose in a pro forma calculation. The probabilistic assessment gives a realistic value for the dose assessment of a nuclear fuel cycle facility. (author)
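    The probabilistic step amounts to propagating parameter distributions through a dose model by Monte Carlo and reading off percentiles; a generic sketch with an invented, trivially simple dose model:

```python
# Generic Monte Carlo percentile estimation for a committed effective dose.
# The lognormal parameter distributions and the dose model are assumptions
# for illustration only, not LAMER's actual inputs.
import numpy as np

rng = np.random.default_rng(6)
n = 5000                                                       # Monte Carlo samples
intake = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=n)   # kg/y, assumed
conc = rng.lognormal(mean=np.log(0.2), sigma=0.8, size=n)      # Bq/kg, assumed
dose_coeff = 1.3e-8                                            # Sv/Bq, fixed here

dose = intake * conc * dose_coeff                              # Sv/y per person
print(f"median: {np.percentile(dose, 50):.2e} Sv/y, "
      f"95th percentile: {np.percentile(dose, 95):.2e} Sv/y")
```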

  16. Estimates of volume and magma input in crustal magmatic systems from zircon geochronology: the effect of modelling assumptions and system variables

    Directory of Open Access Journals (Sweden)

    Luca eCaricchi

    2016-04-01

    Full Text Available Magma fluxes in the Earth's crust play an important role in regulating the relationship between the frequency and magnitude of volcanic eruptions, the chemical evolution of magmatic systems and the distribution of geothermal energy and mineral resources on our planet. Therefore, quantifying magma productivity and the rate of magma transfer within the crust can provide valuable insights to characterise the long-term behaviour of volcanic systems and to unveil the link between the physical and chemical evolution of magmatic systems and their potential to generate resources. We performed thermal modelling to compute the temperature evolution of crustal magmatic intrusions with different final volumes assembled over a variety of timescales (i.e., at different magma fluxes). Using these results, we calculated synthetic populations of zircon ages assuming the number of zircons crystallising in a given time period is directly proportional to the volume of magma at a temperature within the zircon crystallisation range. The statistical analysis of the calculated populations of zircon ages shows that the mode, median and standard deviation of the populations vary coherently as a function of the rate of magma injection and the final volume of the crustal intrusions. Therefore, the statistical properties of the population of zircon ages can add useful constraints to quantify the rate of magma injection and the final volume of magmatic intrusions. Here, we explore the effect of different ranges of zircon saturation temperature, intrusion geometry, and wall rock temperature on the calculated distributions of zircon ages. Additionally, we determine the effect of undersampling on the variability of the mode, median and standard deviation of calculated populations of zircon ages, to estimate the minimum number of zircon analyses necessary to obtain meaningful estimates of magma flux and final intrusion volume.

  17. Estimation and prediction under local volatility jump-diffusion model

    Science.gov (United States)

    Kim, Namhyoung; Lee, Younhee

    2018-02-01

    Volatility is an important factor in operating a company and managing risk. In the portfolio optimization and risk hedging using the option, the value of the option is evaluated using the volatility model. Various attempts have been made to predict option value. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model and apply it using both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, stochastic volatility model, and local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.

  18. Estimation of the cost-effectiveness of HIV prevention portfolios for people who inject drugs in the United States: A model-based analysis.

    Directory of Open Access Journals (Sweden)

    Cora L Bernard

    2017-05-01

    Full Text Available The risks of HIV transmission associated with the opioid epidemic make cost-effective programs for people who inject drugs (PWID) a public health priority. Some of these programs have benefits beyond prevention of HIV, a critical consideration given that injection drug use is increasing across most United States demographic groups. To identify high-value HIV prevention program portfolios for US PWID, we consider combinations of four interventions with demonstrated efficacy: opioid agonist therapy (OAT), needle and syringe programs (NSPs), HIV testing and treatment (Test & Treat), and oral HIV pre-exposure prophylaxis (PrEP). We adapted an empirically calibrated dynamic compartmental model and used it to assess the discounted costs (in 2015 US dollars), health outcomes (HIV infections averted, change in HIV prevalence, and discounted quality-adjusted life years [QALYs]), and incremental cost-effectiveness ratios (ICERs) of the four prevention programs, considered singly and in combination over a 20-y time horizon. We obtained epidemiologic, economic, and health utility parameter estimates from the literature, previously published models, and expert opinion. We estimate that expansions of OAT, NSPs, and Test & Treat implemented singly up to 50% coverage levels can be cost-effective relative to the next highest coverage level (low, medium, and high at 40%, 45%, and 50%, respectively) and that OAT, which we assume to have immediate and direct health benefits for the individual, has the potential to be the highest-value investment, even under scenarios where it prevents fewer infections than other programs. Although a model-based analysis can provide only estimates of health outcomes, we project that, over 20 y, 50% coverage with OAT could avert up to 22,000 (95% CI: 5,200, 46,000) infections and cost US$18,000 (95% CI: US$14,000, US$24,000) per QALY gained, and 50% NSP coverage could avert up to 35,000 (95% CI: 8,900, 43,000) infections and cost US$25,000 (95% CI: US
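    The ranking of such portfolios rests on incremental cost-effectiveness ratios (ICERs) between successively more effective strategies; a minimal sketch with placeholder numbers (dominance checks omitted):

```python
# ICER = (extra discounted cost) / (extra discounted QALYs) between a strategy
# and the next less effective one. All numbers are placeholders, not results.
strategies = [                       # (name, discounted cost, discounted QALYs)
    ("status quo",  1.00e9, 100000.0),
    ("OAT 50%",     1.40e9, 122000.0),
    ("OAT+NSP 50%", 1.75e9, 131000.0),
]

strategies.sort(key=lambda s: s[2])  # order by effectiveness
for (n0, c0, q0), (n1, c1, q1) in zip(strategies, strategies[1:]):
    icer = (c1 - c0) / (q1 - q0)     # extra cost per extra QALY gained
    print(f"{n1} vs {n0}: ICER = ${icer:,.0f} per QALY")
```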

  19. Resource-estimation models and predicted discovery

    International Nuclear Information System (INIS)

    Hill, G.W.

    1982-01-01

    Resources have been estimated by predictive extrapolation from past discovery experience, by analogy with better explored regions, or by inference from evidence of depletion of targets for exploration. Changes in technology and new insights into geological mechanisms have occurred sufficiently often in the long run to form part of the pattern of mature discovery experience. The criterion that a meaningful resource estimate needs an objective measure of its precision or degree of uncertainty excludes 'estimates' based solely on expert opinion. This is illustrated by the development of error measures for several persuasive models of discovery and production of oil and gas in the USA, both annually and in terms of increasing exploration effort. Appropriate generalizations of the models resolve many points of controversy. This is illustrated using two USA data sets describing the discovery of oil and of U3O8; the latter set highlights an inadequacy of available official data. Review of the oil-discovery data set provides a warrant for adjusting the time-series prediction to a higher resource figure for USA petroleum. (author)

  20. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.

  1. Climate change trade measures : estimating industry effects

    Science.gov (United States)

    2009-06-01

    Estimating the potential effects of domestic emissions pricing for industries in the United States is complex. If the United States were to regulate greenhouse gas emissions, production costs could rise for certain industries and could cause output, ...

  2. Estimating Equilibrium Effects of Job Search Assistance

    DEFF Research Database (Denmark)

    Gautier, Pieter; Muller, Paul; van der Klaauw, Bas

    that the nonparticipants in the experiment regions find jobs more slowly after the introduction of the activation program (relative to workers in other regions). We then estimate an equilibrium search model. This model shows that a large-scale roll-out of the activation program decreases welfare, while a standard partial...... microeconometric cost-benefit analysis would conclude the opposite....

  3. Parameter estimation in fractional diffusion models

    CERN Document Server

    Kubilius, Kęstutis; Ralchenko, Kostiantyn

    2017-01-01

    This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is “white,” i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependences, i.e., the correlations of random noise in these processes are non-zero, and slowly or rapidly decrease with time. In particular, models of financial markets demonstrate various kinds of memory and usually this memory is modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides s...
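    As a side illustration (not taken from the book), fractional Brownian motion with Hurst index H can be simulated directly from its covariance function by the Cholesky method:

```python
# Exact simulation of fractional Brownian motion via Cholesky factorization of
# the fBm covariance cov(B_s, B_t) = 0.5 * (s^2H + t^2H - |t - s|^2H).
import numpy as np

def fbm_path(n, H, T=1.0, seed=0):
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # small jitter for stability
    rng = np.random.default_rng(seed)
    return t, L @ rng.standard_normal(n)

t, path = fbm_path(500, H=0.8)    # H > 0.5: persistent, long-memory increments
print(path[:5])
```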

  4. PARAMETER ESTIMATION IN BREAD BAKING MODEL

    Directory of Open Access Journals (Sweden)

    Hadiyanto Hadiyanto

    2012-05-01

    Full Text Available Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first, the heat and mass transfer related parameters, then the parameters related to product transformations, and finally the product quality parameters. There was fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted well the behavior under dynamic convective operation and under combined convective and microwave operation. It is expected that the agreement between the model and the baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels. Abstract: PARAMETER ESTIMATION IN A MODEL FOR THE BREAD BAKING PROCESS. Bread product quality strongly depends on the baking process used. A model developed with qualitative and quantitative methods was calibrated by experiments at a temperature of 200°C and in combination with microwave at 100 W. The model parameters were estimated in a stepwise procedure: first, the parameters of the heat and mass transfer model, then the parameters of the transformation model, and

  5. Modeling of the effect of tool wear per discharge estimation error on the depth of machined cavities in micro-EDM milling

    DEFF Research Database (Denmark)

    Puthumana, Govindan; Bissacco, Giuliano; Hansen, Hans Nørgaard

    2017-01-01

    In micro-EDM milling, real-time electrode wear compensation based on tool wear per discharge (TWD) estimation permits the direct control of the position of the tool electrode frontal surface. However, TWD estimation errors will cause errors in the tool electrode axial depth. A simulation tool...... is developed to determine the effects of errors in the initial estimation of TWD and their propagation with respect to the error on the depth of the cavity generated. Simulations were applied to micro-EDM milling of a slot of 5000 μm length and 50 μm depth and validated through slot milling experiments...... performed on a micro-EDM machine. Simulations and experimental results were found to be in good agreement, showing the effect of error amplification through the cavity depth....

  6. Coupling Hydrologic and Hydrodynamic Models to Estimate PMF

    Science.gov (United States)

    Felder, G.; Weingartner, R.

    2015-12-01

    Most sophisticated probable maximum flood (PMF) estimations derive the PMF from the probable maximum precipitation (PMP) by applying deterministic hydrologic models calibrated with observed data. This method is based on the assumption that the hydrological system is stationary, meaning that the system behaviour during the calibration period or the calibration event is presumed to be the same as it is during the PMF. However, as soon as a catchment-specific threshold is reached, the system is no longer stationary. At or beyond this threshold, retention areas, new flow paths, and changing runoff processes can strongly affect downstream peak discharge. These effects can be accounted for by coupling hydrologic and hydrodynamic models, a technique that is particularly promising when the expected peak discharge may considerably exceed the observed maximum discharge. In such cases, the coupling of hydrologic and hydraulic models has the potential to significantly increase the physical plausibility of PMF estimations. This procedure ensures both that the estimated extreme peak discharge does not exceed the physical limit based on riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered. Our study discusses the prospect of considering retention effects on PMF estimations by coupling hydrologic and hydrodynamic models. This method is tested by forcing PREVAH, a semi-distributed deterministic hydrological model, with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to externally force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). Finally, the PMF estimation results obtained using the coupled modelling approach are compared to the results obtained using ordinary hydrologic modelling.

  7. Determining input values for a simple parametric model to estimate ...

    African Journals Online (AJOL)

    Estimating soil evaporation (Es) is an important part of modelling vineyard evapotranspiration for irrigation purposes. Furthermore, quantification of possible soil texture and trellis effects is essential. Daily Es from six topsoils packed into lysimeters was measured under grapevines on slanting and vertical trellises, ...

  8. Revised models and genetic parameter estimates for production and ...

    African Journals Online (AJOL)

    Genetic parameters for production and reproduction traits in the Elsenburg Dormer sheep stud were estimated using records of 11743 lambs born between 1943 and 2002. An animal model with direct and maternal additive, maternal permanent and temporary environmental effects was fitted for traits considered traits of the ...

  9. Adaptive Estimation of Heteroscedastic Money Demand Model of Pakistan

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam

    2007-07-01

    Full Text Available For the problem of estimation of Money demand model of Pakistan, money supply (M1 shows heteroscedasticity of the unknown form. For estimation of such model we compare two adaptive estimators with ordinary least squares estimator and show the attractive performance of the adaptive estimators, namely, nonparametric kernel estimator and nearest neighbour regression estimator. These comparisons are made on the basis standard errors of the estimated coefficients, standard error of regression, Akaike Information Criteria (AIC value, and the Durban-Watson statistic for autocorrelation. We further show that nearest neighbour regression estimator performs better when comparing with the other nonparametric kernel estimator.

  10. Development on electromagnetic impedance function modeling and its estimation

    Energy Technology Data Exchange (ETDEWEB)

    Sutarno, D., E-mail: Sutarno@fi.itb.ac.id [Earth Physics and Complex System Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung (Indonesia)]

    2015-09-30

    Today electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration was forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, there are obviously important and difficult problems remaining to be solved concerning our ability to collect, process and interpret MT as well as CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, as well as some improvements in estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite-element numerical modeling of the impedances is developed based on the edge element method. In the CSAMT case, the efforts were focused on accommodating the non-plane-wave problem in the corresponding impedance functions. Concerning the estimation of MT and CSAMT impedance functions, research focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform, operating on the causal transfer functions, was used to deal with outliers (abnormal data) that are frequently superimposed on the normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models, while the full-solution-based modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied to all measurement zones, including near-, transition

  11. The problem of multicollinearity in horizontal solar radiation estimation models and a new model for Turkey

    International Nuclear Information System (INIS)

    Demirhan, Haydar

    2014-01-01

    Highlights: • Impacts of multicollinearity on solar radiation estimation models are discussed. • Accuracy of existing empirical models for Turkey is evaluated. • A new non-linear model for the estimation of average daily horizontal global solar radiation is proposed. • Estimation and prediction performance of the proposed and existing models are compared. - Abstract: Due to the considerable decrease in energy resources and increasing energy demand, solar energy is an appealing field of investment and research. There are various modelling strategies and particular models for the estimation of the amount of solar radiation reaching a particular point over the Earth. In this article, global solar radiation estimation models are taken into account. To emphasize the severity of the multicollinearity problem in solar radiation estimation models, some of the models developed for Turkey are revisited. It is observed that these models have been identified as accurate under certain multicollinearity structures, and when the multicollinearity is eliminated, the accuracy of these models is controversial. Thus, a reliable model that does not suffer from multicollinearity and gives precise estimates of global solar radiation for the whole region of Turkey is necessary. A new nonlinear model for the estimation of average daily horizontal solar radiation is proposed making use of the genetic programming technique. There is no multicollinearity problem in the new model, and its estimation accuracy is better than that of the revisited models in terms of numerous statistical performance measures. According to the proposed model, temperature, precipitation, altitude, longitude, and monthly average daily extraterrestrial horizontal solar radiation have a significant effect on the average daily global horizontal solar radiation. Relative humidity and soil temperature are not included in the model due to their high correlation with precipitation and temperature, respectively. While altitude has
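    The multicollinearity the paper is concerned with is typically diagnosed with variance inflation factors (VIFs); a quick sketch on synthetic data in which soil temperature is nearly collinear with temperature by construction:

```python
# Variance inflation factors as a multicollinearity diagnostic; VIF >> 10
# flags a problematic predictor. Data below are synthetic.
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools import add_constant

rng = np.random.default_rng(7)
n = 200
temperature = rng.normal(20, 5, n)
soil_temp = temperature + rng.normal(0, 0.5, n)    # nearly collinear by design
precipitation = rng.gamma(2.0, 10.0, n)

X = add_constant(np.column_stack([temperature, soil_temp, precipitation]))
for i, name in enumerate(["temperature", "soil_temp", "precipitation"], start=1):
    print(name, round(variance_inflation_factor(X, i), 1))
```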

  12. Perspectives on Modelling BIM-enabled Estimating Practices

    Directory of Open Access Journals (Sweden)

    Willy Sher

    2014-12-01

    Full Text Available BIM-enabled estimating processes do not replace or provide a substitute for the traditional approaches used in the architecture, engineering and construction industries. This paper explores the impact of BIM on these traditional processes.  It identifies differences between the approaches used with BIM and other conventional methods, and between the various construction professionals that prepare estimates. We interviewed 17 construction professionals from client organizations, contracting organizations, consulting practices and specialist-project firms. Our analyses highlight several logical relationships between estimating processes and BIM attributes. Estimators need to respond to the challenges BIM poses to traditional estimating practices. BIM-enabled estimating circumvents long-established conventions and traditional approaches, and focuses on data management.  Consideration needs to be given to the model data required for estimating, to the means by which these data may be harnessed when exported, to the means by which the integrity of model data are protected, to the creation and management of tools that work effectively and efficiently in multi-disciplinary settings, and to approaches that narrow the gap between virtual reality and actual reality.  Areas for future research are also identified in the paper.

  13. Advanced empirical estimate of information value for credit scoring models

    Directory of Open Access Journals (Sweden)

    Martin Řezáč

    2011-01-01

    Full Text Available Credit scoring is a term for a wide spectrum of predictive models and their underlying techniques that aid financial institutions in granting credit. These methods decide who will get credit, how much credit they should get, and what further strategies will enhance the profitability of the borrowers to the lenders. Many statistical tools are available for measuring the quality, in the sense of predictive power, of credit scoring models. Because it is impossible to use a scoring model effectively without knowing how good it is, quality indexes like the Gini coefficient, the Kolmogorov-Smirnov statistic and the Information value are used to assess the quality of a given credit scoring model. The paper deals primarily with the Information value, sometimes called divergence. Commonly it is computed by discretisation of the data into bins using deciles, in which case one constraint is required to be met: the number of cases has to be nonzero for all bins. If this constraint is not fulfilled, there are some practical procedures for preserving finite results. As an alternative to the empirical estimates, one can use kernel smoothing theory, which allows unknown densities to be estimated and, consequently, using some numerical method for integration, the Information value to be estimated. The main contribution of this paper is a proposal and description of the empirical estimate with supervised interval selection. This advanced estimate is based on the requirement to have at least k observations (k a positive integer) of scores of both good and bad clients in each considered interval. A simulation study shows that this estimate outperforms both the empirical estimate using deciles and the kernel estimate. Furthermore, it shows high dependency on the choice of the parameter k. If we choose too small a value, we obtain an overestimate of the Information value, and vice versa. The adjusted square root of the number of bad clients seems to be a reasonable compromise.
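    The common decile-based empirical estimate that the paper takes as its baseline can be sketched as follows (the supervised interval selection itself is not reproduced; the data are synthetic):

```python
# Empirical Information Value with decile binning:
# IV = sum over bins of (share_good - share_bad) * ln(share_good / share_bad).
import numpy as np

def information_value(score, bad, n_bins=10):
    edges = np.quantile(score, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, score, side="right") - 1, 0, n_bins - 1)
    iv = 0.0
    for b in range(n_bins):
        g = np.sum((idx == b) & (bad == 0)) / max(np.sum(bad == 0), 1)
        d = np.sum((idx == b) & (bad == 1)) / max(np.sum(bad == 1), 1)
        if g > 0 and d > 0:               # skip empty cells (the paper's concern)
            iv += (g - d) * np.log(g / d)
    return iv

rng = np.random.default_rng(8)
bad = rng.binomial(1, 0.2, 5000)
score = rng.normal(loc=np.where(bad == 1, -0.5, 0.5), scale=1.0)
print(round(information_value(score, bad), 3))
```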

  14. Application of a multiple scattering model to estimate optical depth, lidar ratio and ice crystal effective radius of cirrus clouds observed with lidar.

    Directory of Open Access Journals (Sweden)

    Gouveia Diego

    2018-01-01

    Full Text Available Lidar measurements of cirrus clouds are highly influenced by multiple scattering (MS. We therefore developed an iterative approach to correct elastic backscatter lidar signals for multiple scattering to obtain best estimates of single-scattering cloud optical depth and lidar ratio as well as of the ice crystal effective radius. The approach is based on the exploration of the effect of MS on the molecular backscatter signal returned from above cloud top.

  15. Application of a multiple scattering model to estimate optical depth, lidar ratio and ice crystal effective radius of cirrus clouds observed with lidar.

    Science.gov (United States)

    Gouveia, Diego; Baars, Holger; Seifert, Patric; Wandinger, Ulla; Barbosa, Henrique; Barja, Boris; Artaxo, Paulo; Lopes, Fabio; Landulfo, Eduardo; Ansmann, Albert

    2018-04-01

    Lidar measurements of cirrus clouds are highly influenced by multiple scattering (MS). We therefore developed an iterative approach to correct elastic backscatter lidar signals for multiple scattering to obtain best estimates of single-scattering cloud optical depth and lidar ratio as well as of the ice crystal effective radius. The approach is based on the exploration of the effect of MS on the molecular backscatter signal returned from above cloud top.

  16. Models for estimating photosynthesis parameters from in situ production profiles

    Science.gov (United States)

    Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

    2017-12-01

    The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of

  17. Estimation of available water capacity components of two-layered soils using crop model inversion: Effect of crop type and water regime

    Science.gov (United States)

    Sreelash, K.; Buis, Samuel; Sekhar, M.; Ruiz, Laurent; Kumar Tomer, Sat; Guérif, Martine

    2017-03-01

    Characterization of the soil water reservoir is critical for understanding the interactions between crops and their environment and the impacts of land use and environmental changes on the hydrology of agricultural catchments, especially in a tropical context. Recent studies have shown that inversion of crop models is a powerful tool for retrieving information on root zone properties. The increasing availability of remotely sensed soil and vegetation observations makes this approach well suited for large-scale applications. The potential of this methodology has, however, never been properly evaluated on extensive experimental datasets, and previous studies suggested that the quality of estimation of soil hydraulic properties may vary depending on agro-environmental situations. The objective of this study was to evaluate this approach on an extensive field experiment. The dataset covered four crops (sunflower, sorghum, turmeric, maize) grown on different soils over several years in South India. The components of AWC (available water capacity), namely soil water content at field capacity and wilting point, and the soil depth of two-layered soils, were estimated by inversion of the crop model STICS with the GLUE (generalized likelihood uncertainty estimation) approach using observations of surface soil moisture (SSM; typically from 0 to 10 cm deep) and leaf area index (LAI), which are attainable from radar remote sensing in tropical regions with frequent cloudy conditions. The results showed that the quality of parameter estimation largely depends on the hydric regime and its interaction with crop type. A mean relative absolute error of 5% for field capacity of the surface layer, 10% for field capacity of the root zone, 15% for wilting point of the surface layer and root zone, and 20% for soil depth can be obtained in favorable conditions. A few observations of SSM (during wet and dry soil moisture periods) and LAI (within water stress periods) were sufficient to significantly improve the estimation of AWC
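
    A minimal GLUE sketch, with a toy soil-moisture model standing in for STICS: sample parameters from uniform priors, keep "behavioral" sets above a likelihood threshold, and form likelihood-weighted estimates. The toy model, parameter ranges and threshold are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_soil_model(fc, depth, rain):
    """Hypothetical stand-in for a crop model: surface soil moisture
    from field capacity (fc, -) and soil depth (depth, mm)."""
    return np.minimum(np.cumsum(rain) * 30.0 / depth, fc)

rain = rng.uniform(0, 5, 40)
obs = toy_soil_model(0.30, 600.0, rain) + rng.normal(0, 0.01, 40)  # synthetic "truth"

# GLUE: Monte Carlo sampling from uniform priors
fc_s = rng.uniform(0.15, 0.45, 10000)
depth_s = rng.uniform(300, 1200, 10000)
nse = np.array([1 - np.sum((toy_soil_model(f, d, rain) - obs) ** 2)
                    / np.sum((obs - obs.mean()) ** 2)
                for f, d in zip(fc_s, depth_s)])

behavioral = nse > 0.7                        # behavioral threshold (assumed)
w = nse[behavioral] / nse[behavioral].sum()   # informal likelihood weights
print("field capacity estimate:", np.sum(w * fc_s[behavioral]))
print("soil depth estimate:", np.sum(w * depth_s[behavioral]))
```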

  18. ANFIS-Based Modeling for Photovoltaic Characteristics Estimation

    Directory of Open Access Journals (Sweden)

    Ziqiang Bi

    2016-09-01

    Full Text Available Due to the high cost of photovoltaic (PV) modules, an accurate performance estimation method is significantly valuable for studying the electrical characteristics of PV generation systems. Conventional analytical PV models are usually composed of nonlinear exponential functions, and a good number of unknown parameters must be identified before use. In this paper, an adaptive-network-based fuzzy inference system (ANFIS) based modeling method is proposed to predict the current-voltage characteristics of PV modules. The effectiveness of the proposed modeling method is evaluated through comparison with Villalva's model, a radial basis function neural network (RBFNN) based model and a support vector regression (SVR) based model. Simulation and experimental results confirm both the feasibility and the effectiveness of the proposed method.
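
    Villalva's model referenced above is a single-diode formulation whose I-V equation is implicit in the current. A hedged sketch of solving it numerically with a root finder; every parameter value below is illustrative, not a real module's datasheet.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical single-diode parameters (illustrative, not a real module)
Ipv, I0, a, Rs, Rp = 8.2, 2e-7, 1.3, 0.2, 300.0
Ns, k, q, T = 54, 1.380649e-23, 1.602176634e-19, 298.15
Vt = Ns * k * T / q                       # thermal voltage of 54 cells in series

def current(V):
    """Solve the implicit single-diode equation for the module current."""
    f = lambda I: (Ipv - I0 * (np.exp((V + Rs * I) / (a * Vt)) - 1.0)
                   - (V + Rs * I) / Rp - I)
    return brentq(f, -2.0, Ipv + 2.0)     # bracket that contains the root

volts = np.linspace(0.0, 32.0, 33)
amps = np.array([current(v) for v in volts])
print(f"Isc ~ {amps[0]:.2f} A, current near open circuit: {amps[-1]:.2f} A")
```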

  19. Modeling of the fatigue damage accumulation processes in the material of NPP design units under thermomechanical unstationary effects. Estimation of spent life and forecast of residual life

    International Nuclear Information System (INIS)

    Kiriushin, A.I.; Korotkikh, Yu.G.; Gorodov, G.F.

    2002-01-01

    Full text: The problems of estimating the spent life and forecasting the residual life of NPP equipment design units operated under unsteady thermal force loads are considered. These loads are, as a rule, irregular and cause rotation of the principal stress axes in the most loaded zones of structural elements and viscoelastic-plastic deformation of the material at stress concentrations. The existing engineering approaches to calculating damage accumulation in the material of structural units, with their advantages and disadvantages, are analyzed. For fatigue damage accumulation, a model is proposed which takes into account the irregular pattern of deformation, the multiaxiality of the stressed state, the rotation of the principal axes, and the nonlinear summation of damage under changes of loading mode. The model is based on the equations of damaged-medium mechanics, including the equations of viscoplastic deformation of the material and evolutionary equations of damage accumulation. Algorithms for spent-life estimation and residual-life forecasting of the monitored equipment and system zones are built on the basis of the given model from the known real loading history, which is determined by the real mode of NPP operation. The results of numerical experiments on the basis of the given model for various thermal force loading processes, and their comparison with experimental results, are presented. (author)

  20. The Impact of Statistical Leakage Models on Design Yield Estimation

    Directory of Open Access Journals (Sweden)

    Rouwaida Kanj

    2011-01-01

    Full Text Available Device mismatch and process variation models play a key role in determining the functionality and yield of sub-100 nm designs. Average characteristics are often of interest, such as the average leakage current or the average read delay. However, detecting rare functional fails is critical for memory design, and designers often seek techniques that enable accurate modeling of such events. Extremely leaky devices can inflict functionality fails. The plurality of leaky devices on a bitline increases the dimensionality of the yield estimation problem. Simplified models are possible by adopting approximations to the underlying sum of lognormals. The implications of such approximations for tail probabilities may in turn bias the yield estimate. We review different closed-form approximations and compare them against the CDF matching method, which is shown to be the most effective method for accurate statistical leakage modeling.
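
    A minimal sketch of the idea, assuming independent identically distributed lognormal leakage contributions: match the first two moments of the sum with a single lognormal (the Fenton-Wilkinson closed form) and compare a tail probability against Monte Carlo, which is where such approximations can bias a yield estimate.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(1)
n, mu, sigma = 64, 0.0, 0.8          # 64 leaky devices (hypothetical bitline)

# Monte Carlo reference for the total leakage: a sum of lognormals
total = rng.lognormal(mu, sigma, size=(200_000, n)).sum(axis=1)

# Fenton-Wilkinson: match mean and variance of the sum with one lognormal
m = n * np.exp(mu + sigma**2 / 2)
v = n * (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
s2 = np.log(1 + v / m**2)
mu_s = np.log(m) - s2 / 2

# Tail comparison: approximation error matters most where yield lives
q = np.quantile(total, 0.999)
print("MC tail probability:  0.001")
print("FW approximate tail: ", lognorm.sf(q, s=np.sqrt(s2), scale=np.exp(mu_s)))
```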

  1. Genomic breeding value estimation using nonparametric additive regression models

    Directory of Open Access Journals (Sweden)

    Solberg Trygve

    2009-01-01

    Full Text Available Abstract Genomic selection refers to the use of genomewide dense markers for breeding value estimation and subsequently for selection. The main challenge of genomic breeding value estimation is the estimation of many effects from a limited number of observations. Bayesian methods have been proposed to successfully cope with these challenges. As an alternative class of models, non- and semiparametric models were recently introduced. The present study investigated the ability of nonparametric additive regression models to predict genomic breeding values. The genotypes were modelled for each marker or pair of flanking markers (i.e., the predictors) separately. The nonparametric functions for the predictors were estimated simultaneously using additive model theory, applying a binomial kernel. The optimal degree of smoothing was determined by bootstrapping. A mutation-drift-balance simulation was carried out. The breeding values of the last generation (genotyped) were predicted using data from the next-to-last generation (genotyped and phenotyped). The results show moderate to high accuracies of the predicted breeding values. Determining a predictor-specific degree of smoothing increased the accuracy.

  2. AMEM-ADL Polymer Migration Estimation Model User's Guide

    Science.gov (United States)

    The user's guide of the Arthur D. Little Polymer Migration Estimation Model (AMEM) provides the information on how the model estimates the fraction of a chemical additive that diffuses through polymeric matrices.

  3. Benefit Estimation Model for Tourist Spaceflights

    Science.gov (United States)

    Goehlich, Robert A.

    2003-01-01

    It is believed that the only potential means for a significant reduction of the recurrent launch cost, which would in turn stimulate human space colonization, is to make the launcher reusable, to increase its reliability, and to make it suitable for new markets such as mass space tourism. But space projects with such long-range aspects are very difficult to finance, because even politicians would like to see a reasonable benefit during their term in office, so that they can explain the investment to the taxpayer. This forces planners to use benefit models instead of intuitive judgement to convince sceptical decision-makers to support new investments in space. Benefit models provide insights into complex relationships and force a better definition of goals. A new approach is introduced in the paper that makes it possible to estimate the benefits to be expected from a new space venture. The main objective of human space exploration is determined in this study to be to ``improve the quality of life''. This main objective is broken down into sub-objectives, which can be analysed with respect to different interest groups. Such interest groups are the operator of a space transportation system, the passenger, and the government. For example, the operator is strongly interested in profit, the passenger is mainly interested in amusement, and the government is primarily interested in self-esteem and prestige. This leads to different individual satisfaction levels, which can be used in the optimisation process for reusable launch vehicles.

  4. Health effects estimation for contaminated properties

    International Nuclear Information System (INIS)

    Marks, S.; Denham, D.H.; Cross, F.T.; Kennedy, W.E. Jr.

    1984-05-01

    As part of an overall remedial action program to evaluate the need for and institute actions designed to minimize health hazards from inactive tailings piles and from displaced tailings, methods for estimating health effects from tailings were developed and applied to the Salt Lake City area. 2 references, 2 tables

  5. Global parameter estimation for thermodynamic models of transcriptional regulation.

    Science.gov (United States)

    Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N

    2013-07-15

    Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high-quality data sets, but this difference was negligible on lower-quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for the evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
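
    CMA-ES is not shipped with SciPy, so the sketch below substitutes SciPy's differential evolution as the global strategy and Nelder-Mead as the local one, on a deliberately multimodal least-squares fit. It illustrates the local-trap issue such comparisons are about, not the paper's thermodynamic models; the test function and data are invented.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

rng = np.random.default_rng(2)
x = np.linspace(0, 4, 60)
y = np.sin(3.0 * x) * np.exp(-0.5 * x) + rng.normal(0, 0.05, x.size)

def sse(theta):
    """Sum of squared errors for a damped sinusoid; multimodal in freq."""
    freq, decay = theta
    return np.sum((np.sin(freq * x) * np.exp(-decay * x) - y) ** 2)

# Local search from a poor start can get trapped in a local optimum
local = minimize(sse, x0=[1.0, 1.0], method="Nelder-Mead")

# Global search over bounds is robust to the multimodal fitness surface
glob = differential_evolution(sse, bounds=[(0.1, 10), (0.01, 2)], seed=2)

print("local: ", local.x, local.fun)
print("global:", glob.x, glob.fun)   # should recover freq ~ 3, decay ~ 0.5
```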

  6. Analytical and numerical models for estimating the effect of exhaust ventilation on radon entry in houses with basements or crawl spaces

    International Nuclear Information System (INIS)

    Mowris, R.J.

    1986-08-01

    Mechanical exhaust ventilation systems are being installed in newer, energy-efficient houses and their operation can increase the indoor-outdoor pressure differences that drive soil gas and thus radon entry. This thesis presents simplified models for estimating the pressure driven flow of radon into houses with basements or crawl spaces, due to underpressures induced by indoor-outdoor temperature differences, wind, or exhaust ventilation. A two-dimensional finite difference model is presented and used to calculate the pressure field and soil gas flow rate into a basement situated in soil of uniform permeability. A simplified analytical model is compared to the finite difference model with generally very good agreement. Another simplified model is presented for houses with a crawl space. Literature on radon research is also reviewed to show why pressure driven flow of soil gas is considered to be the major source of radon entry in houses with higher-than-average indoor radon concentrations. Comparisons of measured vs. calculated indoor radon concentrations for a house with a basement showed the simplified basement model underpredicting on average by 25%. For a house with a crawl space the simplified crawl space model overpredicted by 23% when the crawl space vents are open and 48% when the crawl space vents are sealed
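
    A minimal sketch of the finite-difference idea, assuming a uniform-permeability soil block, a fixed basement underpressure, and illustrative geometry: relax Laplace's equation for the soil-gas pressure disturbance, then evaluate the Darcy flux at the floor. All dimensions and material constants are assumptions, not the thesis's configuration.

```python
import numpy as np

nx, nz, h = 40, 20, 0.25        # 10 m x 5 m soil block, 0.25 m grid spacing
p = np.zeros((nz, nx))          # pressure disturbance field (Pa), z downward
dP = -5.0                       # basement underpressure (Pa), e.g. exhaust fan on

for _ in range(5000):           # Jacobi relaxation of Laplace's equation
    p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                            p[1:-1, 2:] + p[1:-1, :-2])
    p[0, :] = 0.0               # soil surface at atmospheric pressure
    p[5, 0:8] = dP              # basement floor patch held at underpressure
    p[:, -1] = p[:, -2]         # zero-flux far-field boundary
    p[-1, :] = p[-2, :]         # zero-flux boundary at depth

k_soil, mu_air = 1e-11, 1.8e-5  # permeability (m^2), air viscosity (Pa s)
grad = (p[6, 0:8] - p[5, 0:8]) / h        # vertical pressure gradient (Pa/m)
q = k_soil / mu_air * grad                # Darcy flux toward the floor (m/s)
print("mean soil-gas entry velocity under the floor (m/s):", q.mean())
```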

  7. Estimating the effects of wages on obesity.

    Science.gov (United States)

    Kim, DaeHwan; Leigh, John Paul

    2010-05-01

    To estimate the effects of wages on obesity and body mass. Data on household heads, aged 20 to 65 years, with full-time jobs, were drawn from the Panel Study of Income Dynamics for 2003 to 2007. The Panel Study of Income Dynamics is a nationally representative sample. Instrumental variables (IV) for wages were created using knowledge of computer software and state legal minimum wages. Least squares (linear regression) with corrected standard errors were used to estimate the equations. Statistical tests revealed that both instruments were strong, and tests for over-identifying restrictions were favorable. Wages were found to be statistically significant predictors, suggesting that low wages increase obesity prevalence and body mass.
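
    A hedged sketch of the two-stage least squares logic on synthetic data; the instrument names echo the record but every number is invented. Note that second-stage standard errors from this manual route need correction, which is why the paper uses corrected standard errors.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
z1 = rng.normal(size=n)              # stand-in for software knowledge (instrument)
z2 = rng.normal(size=n)              # stand-in for state minimum wage (instrument)
u = rng.normal(size=n)               # unobserved confounder
wage = 1.0 + 0.8 * z1 + 0.5 * z2 + 0.7 * u + rng.normal(size=n)
bmi = 27.0 - 0.4 * wage + 1.2 * u + rng.normal(size=n)   # true effect: -0.4

# Stage 1: project the endogenous wage onto the instruments
Z = sm.add_constant(np.column_stack([z1, z2]))
wage_hat = sm.OLS(wage, Z).fit().fittedvalues

# Stage 2: regress BMI on the fitted (exogenous) part of wage
iv_fit = sm.OLS(bmi, sm.add_constant(wage_hat)).fit()
print("IV slope:   ", iv_fit.params[1])        # near -0.4

# Naive OLS is biased by the confounder u
print("naive slope:", sm.OLS(bmi, sm.add_constant(wage)).fit().params[1])
```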

  8. Quantitative Estimation for the Effectiveness of Automation

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun

    2012-01-01

    In advanced MCRs, various automation systems are applied to enhance human performance and reduce human errors in industrial fields. It is expected that automation provides greater efficiency, lower workload, and fewer human errors. However, these promises are not always fulfilled. As new types of events related to the application of imperfect and complex automation have occurred, it has become necessary to analyze the effects of automation systems on the performance of human operators. Therefore, we suggest a quantitative estimation method to analyze the effectiveness of automation systems according to the Level of Automation (LOA) classification, which has been developed over 30 years. The estimation of the effectiveness of automation is achieved by calculating the failure probability of human performance related to the cognitive activities

  9. Estimation Parameters And Modelling Zero Inflated Negative Binomial

    Directory of Open Access Journals (Sweden)

    Cindy Cahyaning Astuti

    2016-11-01

    Full Text Available Regression analysis is used to determine the relationship between one or several response variables (Y) and one or several predictor variables (X). A regression model between predictor variables and a Poisson-distributed response variable is called a Poisson regression model. Since Poisson regression requires equality between mean and variance, it is not appropriate for overdispersed data (variance higher than mean). The Poisson regression model is commonly used to analyze count data. With count data, it is common to encounter observations with a zero value, often with a large proportion of zeros in the response variable (zero inflation). Poisson regression can be used to analyze count data, but it cannot handle an excess of zero values in the response variable. An alternative model that is more suitable for overdispersed data and that can handle excess zeros in the response variable is the zero-inflated negative binomial (ZINB) model. In this research, ZINB is applied to the case of Tetanus Neonatorum in East Java. The aim of this research is to examine the likelihood function, to formulate an algorithm to estimate the parameters of ZINB, and to apply the ZINB model to the case of Tetanus Neonatorum in East Java. The maximum likelihood estimation (MLE) method is used to estimate the parameters of ZINB, and the likelihood function is maximized using the expectation-maximization (EM) algorithm. Test results of the ZINB regression model showed that the predictor variable with a partially significant effect in the negative binomial part is the percentage of pregnant women's visits and the percentage of births assisted by maternal health personnel, while the predictor variable with a partially significant effect in the zero-inflation part is the percentage of neonatal visits.
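
    statsmodels ships a ZINB estimator fit by maximum likelihood; a minimal sketch on synthetic zero-inflated counts follows. Treat the exact class and keyword names as assumptions about current statsmodels versions, and note this uses the library's optimizer rather than the paper's hand-rolled EM algorithm.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(4)
n = 3000
x = rng.normal(size=n)

# Synthetic ZINB data: a logit zero-inflation part plus an NB2 count part
p_zero = 1 / (1 + np.exp(-(-1.0 + 0.8 * x)))            # excess-zero probability
mu = np.exp(0.5 + 0.6 * x)                              # NB mean
nb = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))   # NB draws with mean mu
y = np.where(rng.uniform(size=n) < p_zero, 0, nb)

X = sm.add_constant(x)
model = ZeroInflatedNegativeBinomialP(y, X, exog_infl=X, p=2)
res = model.fit(maxiter=200)
print(res.summary())    # count-part and inflation-part coefficients
```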

  10. Hybrid Simulation Modeling to Estimate U.S. Energy Elasticities

    Science.gov (United States)

    Baylin-Stern, Adam C.

    This paper demonstrates how a U.S. application of CIMS, a technologically explicit and behaviourally realistic energy-economy simulation model which includes macro-economic feedbacks, can be used to derive estimates of elasticity of substitution (ESUB) and autonomous energy efficiency index (AEEI) parameters. The ability of economies to reduce greenhouse gas emissions depends on the potential for households and industry to decrease overall energy usage and move from higher- to lower-emissions fuels. Energy economists commonly refer to ESUB estimates to understand the degree of responsiveness of various sectors of an economy, and use such estimates to inform computable general equilibrium models used to study climate policies. Using CIMS, I have generated a set of future 'pseudo-data' based on a series of simulations in which I vary energy and capital input prices over a wide range. I then used this data set to estimate the parameters of transcendental logarithmic production functions using regression techniques. From the production function parameter estimates, I calculated an array of elasticity of substitution values between input pairs. Additionally, this paper demonstrates how CIMS can be used to calculate price-independent changes in energy efficiency in the form of the AEEI, by comparing energy consumption between technologically frozen and 'business as usual' simulations. The paper concludes with some ideas for model and methodological improvement, and how these might figure into future work in the estimation of ESUBs from CIMS. Keywords: Elasticity of substitution; hybrid energy-economy model; translog; autonomous energy efficiency index; rebound effect; fuel switching.

  11. [Effect of speech estimation on social anxiety].

    Science.gov (United States)

    Shirotsuki, Kentaro; Sasagawa, Satoko; Nomura, Shinobu

    2009-02-01

    This study investigates the effect of speech estimation on social anxiety to further our understanding of this characteristic of Social Anxiety Disorder (SAD). In the first study, we developed the Speech Estimation Scale (SES) to assess negative estimation before giving a speech, which has been reported to be the most feared social situation in SAD. Undergraduate students (n = 306) completed a set of questionnaires, which consisted of the Short Fear of Negative Evaluation Scale (SFNE), the Social Interaction Anxiety Scale (SIAS), the Social Phobia Scale (SPS), and the SES. Exploratory factor analysis showed an adequate one-factor structure with eight items. Further analysis indicated that the SES had good reliability and validity. In the second study, undergraduate students (n = 315) completed the SFNE, SIAS, SPS, SES, and the Self-reported Depression Scale (SDS). The results of path analysis showed that fear of negative evaluation from others (FNE) predicted social anxiety, and that speech estimation mediated the relationship between FNE and social anxiety. These results suggest that speech estimation might maintain SAD symptoms and could be used as a specific target for cognitive intervention in SAD.
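
    A sketch of the mediation structure the path analysis reports (FNE to speech estimation to social anxiety), using the simple product-of-coefficients method with a bootstrap interval; the data are synthetic, not the study's, and the effect sizes are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300
fne = rng.normal(size=n)                       # fear of negative evaluation
ses = 0.6 * fne + rng.normal(size=n)           # speech estimation (mediator)
anxiety = 0.3 * fne + 0.5 * ses + rng.normal(size=n)

def indirect(idx):
    """Product of coefficients a*b: FNE -> SES (a), SES -> anxiety (b)."""
    a = sm.OLS(ses[idx], sm.add_constant(fne[idx])).fit().params[1]
    b = sm.OLS(anxiety[idx], sm.add_constant(
            np.column_stack([fne[idx], ses[idx]]))).fit().params[2]
    return a * b

boot = [indirect(rng.integers(0, n, n)) for _ in range(2000)]
est = indirect(np.arange(n))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {est:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```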

  12. Dynamic Diffusion Estimation in Exponential Family Models

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    2013-01-01

    Vol. 20, No. 11 (2013), pp. 1114-1117 ISSN 1070-9908 R&D Projects: GA MŠk 7D12004; GA ČR GA13-13502S Keywords: diffusion estimation * distributed estimation * parameter estimation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.639, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0396518.pdf

  13. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    Autonomous unmanned aerial vehicle (UAV) systems depend on state estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and helps achieve the flight mission safely. One sensor configuration used in UAV state estimation is the Attitude Heading and Reference System (AHRS) with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
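
    A full AHRS EKF tracks quaternions and gyro biases; the sketch below reduces the idea to a single-angle Kalman filter fusing a rate gyro with an accelerometer tilt measurement, which is enough to show the predict-correct structure. Noise levels and motion are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, T = 0.01, 1000
true = np.cumsum(np.full(T, 0.2 * dt))             # true pitch angle (rad)
gyro = 0.2 + rng.normal(0, 0.05, T)                # rate gyro: 0.2 rad/s + noise
accel = true + rng.normal(0, 0.1, T)               # tilt angle from accelerometer

theta, P = 0.0, 1.0                                # state estimate and variance
Q, R = (0.05 * dt) ** 2, 0.1 ** 2                  # process / measurement noise
est = np.empty(T)
for k in range(T):
    # Predict: integrate the gyro rate (drifts without correction)
    theta += gyro[k] * dt
    P += Q
    # Update: correct the drift with the accelerometer tilt angle
    K = P / (P + R)
    theta += K * (accel[k] - theta)
    P *= (1 - K)
    est[k] = theta

print("RMS attitude error (rad):", np.sqrt(np.mean((est - true) ** 2)))
```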

  14. Estimating scaled treatment effects with multiple outcomes.

    Science.gov (United States)

    Kennedy, Edward H; Kangovi, Shreya; Mitra, Nandita

    2017-01-01

    In classical study designs, the aim is often to learn about the effects of a treatment or intervention on a single outcome; in many modern studies, however, data on multiple outcomes are collected and it is of interest to explore effects on multiple outcomes simultaneously. Such designs can be particularly useful in patient-centered research, where different outcomes might be more or less important to different patients. In this paper, we propose scaled effect measures (via potential outcomes) that translate effects on multiple outcomes to a common scale, using mean-variance and median-interquartile range based standardizations. We present efficient, nonparametric, doubly robust methods for estimating these scaled effects (and weighted average summary measures), and for testing the null hypothesis that treatment affects all outcomes equally. We also discuss methods for exploring how treatment effects depend on covariates (i.e., effect modification). In addition to describing efficiency theory for our estimands and the asymptotic behavior of our estimators, we illustrate the methods in a simulation study and a data analysis. Importantly, and in contrast to much of the literature concerning effects on multiple outcomes, our methods are nonparametric and can be used not only in randomized trials to yield increased efficiency, but also in observational studies with high-dimensional covariates to reduce confounding bias.
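
    Plug-in versions of the two standardizations mentioned (mean difference over SD, median difference over IQR) are easy to compute, as sketched below on synthetic randomized data; the paper's doubly robust estimators add confounding adjustment and efficiency on top of this idea.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400
treat = rng.integers(0, 2, n)
# Three outcomes on very different scales (all synthetic)
y = np.column_stack([
    10 + 2.0 * treat + rng.normal(0, 4, n),
    100 + 5.0 * treat + rng.normal(0, 20, n),
    1 + 0.3 * treat + rng.normal(0, 0.5, n),
])

for j in range(y.shape[1]):
    y1, y0 = y[treat == 1, j], y[treat == 0, j]
    # Mean-variance standardization (plug-in, no covariate adjustment)
    smd = (y1.mean() - y0.mean()) / y[:, j].std(ddof=1)
    # Median-IQR standardization
    iqr = np.subtract(*np.percentile(y[:, j], [75, 25]))
    mir = (np.median(y1) - np.median(y0)) / iqr
    print(f"outcome {j}: mean-SD scale {smd:.2f}, median-IQR scale {mir:.2f}")
```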

  15. Estimating the Multilevel Rasch Model: With the lme4 Package

    Directory of Open Access Journals (Sweden)

    Harold Doran

    2007-02-01

    Full Text Available Traditional Rasch estimation of the item and student parameters via marginal maximum likelihood, joint maximum likelihood or conditional maximum likelihood assumes that individuals in clustered settings are uncorrelated and that items within a test that share a grouping structure are also uncorrelated. These assumptions are often violated, particularly in educational testing situations, in which students are grouped into classrooms and many test items share a common grouping structure, such as a content strand or a reading passage. Consequently, one possible approach is to explicitly recognize the clustered nature of the data and directly incorporate random effects to account for the various dependencies. This article demonstrates how the multilevel Rasch model can be estimated using the functions in R for mixed-effects models with crossed or partially crossed random effects. We demonstrate how to model the following hierarchical data structures: (a) individuals clustered in similar settings (e.g., classrooms, schools), (b) items nested within a particular group (such as a content strand or a reading passage), and (c) how to estimate a teacher × content strand interaction.

  16. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  17. Performances of some estimators of linear model with ...

    African Journals Online (AJOL)

    The estimators are compared by examining the finite-sample properties of the estimators, namely: sum of biases, sum of absolute biases, sum of variances, and sum of the mean squared errors of the estimated parameters of the model. Results show that when the autocorrelation level is small (ρ=0.4), the MLGD estimator is best except when ...

  18. Dynamic models for estimating the effect of HAART on CD4 in observational studies: Application to the Aquitaine Cohort and the Swiss HIV Cohort Study.

    Science.gov (United States)

    Prague, Mélanie; Commenges, Daniel; Gran, Jon Michael; Ledergerber, Bruno; Young, Jim; Furrer, Hansjakob; Thiébaut, Rodolphe

    2017-03-01

    Highly active antiretroviral therapy (HAART) has proved efficient in increasing CD4 counts in many randomized clinical trials. Because randomized trials have some limitations (e.g., short duration, highly selected subjects), it is interesting to assess the effect of treatments using observational studies. This is challenging because treatment is started preferentially in subjects with severe conditions. This general problem has been addressed using marginal structural models (MSM) relying on the counterfactual formulation. Another approach to causality is based on dynamical models. We present three discrete-time dynamic models based on linear increments models (LIM): the first based on one difference equation for CD4 counts, the second with an equilibrium point, and the third based on a system of two difference equations, which allows jointly modeling CD4 counts and viral load. We also consider continuous-time models based on ordinary differential equations with non-linear mixed effects (ODE-NLME). These mechanistic models allow incorporating biological knowledge when available, which leads to increased statistical evidence for detecting treatment effects. Because inference in ODE-NLME is numerically challenging and requires specific methods and software, LIM are a valuable intermediate option in terms of consistency, precision, and complexity. We compare the different approaches in simulation and in an illustration on the ANRS CO3 Aquitaine Cohort and the Swiss HIV Cohort Study. © 2016, The International Biometric Society.

  19. On population size estimators in the Poisson mixture model.

    Science.gov (United States)

    Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua

    2013-09-01

    Estimating population sizes via capture-recapture experiments has enormous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. © 2013, The International Biometric Society.
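
    The Chao lower bound needs only the singleton and doubleton counts: N_hat = S + f1^2 / (2 f2). A sketch on a simulated heterogeneous capture experiment (the Poisson-gamma mixture below is an invented data-generating process):

```python
import numpy as np

rng = np.random.default_rng(8)
true_N = 500
# counts[i] = times individual i appears on the list (many are never seen)
counts = rng.poisson(rng.gamma(2.0, 0.4, true_N))   # heterogeneous capture rates

S = np.sum(counts > 0)          # distinct individuals observed
f1 = np.sum(counts == 1)        # seen exactly once (singletons)
f2 = np.sum(counts == 2)        # seen exactly twice (doubletons)

chao = S + f1**2 / (2 * f2)     # targets a lower bound, per the abstract
print(f"observed {S}, Chao lower bound {chao:.0f}, truth {true_N}")
```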

  20. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    Science.gov (United States)

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
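
    A typical classroom instance of the idea: fit the rate constant of a first-order reaction by embedding a numerical ODE solve inside a least-squares fit. Data below are synthetic.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def concentration(t, k, c0):
    """Numerically solve dc/dt = -k c (first-order kinetics), return c(t)."""
    sol = solve_ivp(lambda s, c: -k * c, (0, t.max()), [c0], t_eval=t)
    return sol.y[0]

t = np.linspace(0, 10, 15)
rng = np.random.default_rng(9)
data = 2.0 * np.exp(-0.35 * t) + rng.normal(0, 0.03, t.size)  # noisy observations

(k_hat, c0_hat), _ = curve_fit(concentration, t, data, p0=[0.1, 1.0])
print(f"estimated rate constant k = {k_hat:.3f}, c0 = {c0_hat:.2f}")
```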

  1. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  2. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  3. Efficient nonparametric estimation of causal mediation effects

    OpenAIRE

    Chan, K. C. G.; Imai, K.; Yam, S. C. P.; Zhang, Z.

    2016-01-01

    An essential goal of program evaluation and scientific research is the investigation of causal mechanisms. Over the past several decades, causal mediation analysis has been used in medical and social sciences to decompose the treatment effect into the natural direct and indirect effects. However, all of the existing mediation analysis methods rely on parametric modeling assumptions in one way or another, typically requiring researchers to specify multiple regression models involving the treat...

  4. Evaluation of black carbon estimations in global aerosol models

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2009-11-01

    Full Text Available We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations, aerosol absorption optical depth (AAOD) retrievals from AERONET and the Ozone Monitoring Instrument (OMI), and BC column estimations based on AERONET. In regions other than Asia, most models are biased high compared to surface concentration measurements. However, compared with (column) AAOD or BC burden retrievals, the models are generally biased low. The average ratio of model to retrieved AAOD is less than 0.7 in South American and 0.6 in African biomass burning regions; both of these regions lack surface concentration measurements. In Asia the average model to observed ratio is 0.7 for AAOD and 0.5 for BC surface concentrations. Compared with aircraft measurements over the Americas at latitudes between 0 and 50N, the average model is a factor of 8 larger than observed, and most models exceed the measured BC standard deviation in the mid to upper troposphere. At higher latitudes the average model to aircraft BC ratio is 0.4, and models underestimate the observed BC loading in the lower and middle troposphere associated with springtime Arctic haze. Low model bias for AAOD but overestimation of surface and upper-atmospheric BC concentrations at lower latitudes suggests that most models are underestimating BC absorption and should improve estimates of the refractive index, particle size, and optical effects of BC coating. Retrieval uncertainties and/or differences in model diagnostic treatment may also contribute to the model-measurement disparity. The largest AeroCom model diversity occurred in northern Eurasia and the remote Arctic, regions influenced by anthropogenic sources. Changing emissions, aging, removal, or optical properties within a single model

  5. Effects of in-situ and reanalysis climate data on estimation of cropland gross primary production using the Vegetation Photosynthesis Model

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Cui; Xiao, Xiangming; Wagle, Pradeep; Griffis, Timothy; Dong, Jinwei; Wu, Chaoyang; Qin, Yuanwei; Cook, David R.

    2015-11-01

    Satellite-based Production Efficiency Models (PEMs) often require meteorological reanalysis data such as the North America Regional Reanalysis (NARR) by the National Centers for Environmental Prediction (NCEP) as model inputs to simulate Gross Primary Production (GPP) at regional and global scales. This study first evaluated the accuracies of air temperature (T_NARR) and downward shortwave radiation (R_NARR) of the NARR by comparing them with in-situ meteorological measurements at 37 AmeriFlux non-crop eddy flux sites, then used one PEM, the Vegetation Photosynthesis Model (VPM), to simulate 8-day mean GPP (GPP_VPM) at seven AmeriFlux crop sites, and investigated the uncertainties in GPP_VPM from climate inputs as compared with eddy covariance-based GPP (GPP_EC). Results showed that T_NARR agreed well with in-situ measurements; R_NARR, however, was positively biased. An empirical linear correction was applied to R_NARR, and significantly reduced the relative error of R_NARR by ~25% for crop site-years. Overall, GPP_VPM calculated from the in-situ (GPP_VPM(EC)), original (GPP_VPM(NARR)) and adjusted NARR (GPP_VPM(adjNARR)) climate data tracked the seasonality of GPP_EC well, albeit with different degrees of bias. GPP_VPM(EC) showed a good match with GPP_EC for maize (Zea mays L.), but was slightly underestimated for soybean (Glycine max L.). Replacing the in-situ climate data with the NARR resulted in a significant overestimation of GPP_VPM(NARR) (18.4/29.6% for irrigated/rainfed maize and 12.7/12.5% for irrigated/rainfed soybean). GPP_VPM(adjNARR) showed a good agreement with GPP_VPM(EC) for both crops due to the reduction in the bias of R_NARR. The results imply that the bias of R_NARR introduced significant uncertainties into the PEM-based GPP estimates, suggesting that more accurate surface radiation datasets are needed to estimate primary production of terrestrial ecosystems at regional and global scales.

  6. Mathematical model of transmission network static state estimation

    Directory of Open Access Journals (Sweden)

    Ivanov Aleksandar

    2012-01-01

    Full Text Available In this paper the characteristics and capabilities of a power transmission network static state estimator are presented. The solution process for the mathematical model, which incorporates measurement errors and their processing, is developed. To evaluate the difference between the general state estimation model and the fast decoupled state estimation model, both models are applied to an example and the results so derived are compared.
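
    The core computation in static state estimation is a weighted least-squares solve of the (linearized) measurement model z = Hx + e, giving x_hat = (H' W H)^{-1} H' W z with W = R^{-1}. A toy sketch with an invented five-measurement, three-state system (not a real network):

```python
import numpy as np

# Toy linearized measurement model z = H x + e for a 3-state system
H = np.array([[1.0,  0.0,  0.0],
              [1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0],
              [0.0,  0.0,  1.0],
              [1.0,  0.0, -1.0]])            # five redundant measurements
R = np.diag([0.01, 0.02, 0.02, 0.01, 0.05])  # measurement error covariance

rng = np.random.default_rng(10)
x_true = np.array([1.00, 0.97, 0.95])
z = H @ x_true + rng.multivariate_normal(np.zeros(5), R)

# Weighted least-squares state estimate
W = np.linalg.inv(R)
G = H.T @ W @ H                              # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)

residual = z - H @ x_hat                     # basis for bad-data detection
print("estimate:", x_hat)
```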

  7. Estimation of effective thermal conductivity tensor from composite microstructure images

    International Nuclear Information System (INIS)

    Thomas, M; Boyard, N; Jarny, Y; Delaunay, D

    2008-01-01

    The determination of the effective thermal properties of inhomogeneous materials is a long-standing problem of continuing interest. The impressive number of methods developed to measure or estimate the thermal properties of composite materials clearly shows the importance attached to this knowledge. Homogenization models are a cheap way to determine or predict these properties. Many different approaches to homogenization have been developed, but the latest advances are credited to numerical methods. In this study, a new computational model is developed to estimate the 2D thermal conductivity tensor and the thermal principal directions of a pure carbon/epoxy unidirectional composite. This tool is based on real composite microstructure images.

  8. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Maity, Arnab; Carroll, Raymond J.

    2013-01-01

    PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus

  9. Estimating linear effects in ANOVA designs: the easy way.

    Science.gov (United States)

    Pinhas, Michal; Tzelgov, Joseph; Ganor-Stern, Dana

    2012-09-01

    Research in cognitive science has documented numerous phenomena that are approximated by linear relationships. In the domain of numerical cognition, the use of linear regression for estimating linear effects (e.g., distance and SNARC effects) became common following Fias, Brysbaert, Geypens, and d'Ydewalle's (1996) study on the SNARC effect. While their work has become the model for analyzing linear effects in the field, it requires statistical analysis of individual participants and does not provide measures of the proportions of variability accounted for (cf. Lorch & Myers, 1990). In the present methodological note, using both the distance and SNARC effects as examples, we demonstrate how linear effects can be estimated in a simple way within the framework of repeated measures analysis of variance. This method allows for estimating effect sizes in terms of both slope and proportions of variability accounted for. Finally, we show that our method can easily be extended to estimate linear interaction effects, not just linear effects calculated as main effects.
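
    A sketch of the core step the note proposes, on invented reaction-time data: apply a centered linear contrast across the ordered factor levels for each participant, then test the contrast scores against zero and report a variance-accounted-for measure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n_subj, levels = 30, np.array([1, 2, 3, 4])   # e.g. numerical distance levels
slope = 12.0                                   # true linear effect (ms per step)
rt = 500 + slope * levels + rng.normal(0, 40, size=(n_subj, levels.size))

# Centered linear contrast weights across the ordered factor levels
w = levels - levels.mean()
scores = rt @ w / (w @ w)        # per-participant slope estimate (ms per step)

# One-sample test of the contrast scores against zero
t, p = stats.ttest_1samp(scores, 0.0)
eta_sq = t**2 / (t**2 + n_subj - 1)   # proportion of variance accounted for
print(f"slope = {scores.mean():.1f} ms, t = {t:.2f}, p = {p:.4f}, "
      f"eta^2 = {eta_sq:.2f}")
```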

  10. A Probabilistic Cost Estimation Model for Unexploded Ordnance Removal

    National Research Council Canada - National Science Library

    Poppe, Peter

    1999-01-01

    ...) contaminated sites that the services must decontaminate. Existing models for estimating the cost of UXO removal often require a high level of expertise and provide only a point estimate for the costs...

  11. Methodology development for the radioecological monitoring effectiveness estimation

    International Nuclear Information System (INIS)

    Gusev, A.E.; Kozlov, A.A.; Lavrov, K.N.; Sobolev, I.A.; Tsyplyakova, T.P.

    1997-01-01

    A general model for estimating the effectiveness of programs assuring radiation and ecological protection of the public is described. A complex of purposes and criteria characterizing the composition of environment protection programs, and making it possible to estimate their effectiveness, is selected. An algorithm is considered for selecting the management decision that is optimal from the viewpoint of the cost of work connected with improving population protection. The place of radiation-ecological monitoring in the general problem of environmental pollution is determined. It is shown that the effectiveness of organizing the monitoring is closely connected with the radiation and ecological protection of the population

  12. Estimation of Stochastic Volatility Models by Nonparametric Filtering

    DEFF Research Database (Denmark)

    Kanaya, Shin; Kristensen, Dennis

    2016-01-01

    /estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases ... and variances due to the first-step estimation, but under regularity conditions we show that these vanish asymptotically and our estimators inherit the asymptotic properties of the infeasible estimators based on observations of the volatility process. A simulation study examines the finite-sample properties

  13. Volatility estimation using a rational GARCH model

    Directory of Open Access Journals (Sweden)

    Tetsuya Takaishi

    2018-03-01

    Full Text Available The rational GARCH (RGARCH) model has been proposed as an alternative GARCH model that captures the asymmetric property of volatility. In addition to the previously proposed RGARCH model, we propose an alternative RGARCH model called the RGARCH-Exp model that is more stable when dealing with outliers. We measure the performance of the volatility estimation by a loss function calculated using realized volatility as a proxy for true volatility and compare the RGARCH-type models with other asymmetric-type models such as the EGARCH and GJR models. We conduct empirical studies of six stocks on the Tokyo Stock Exchange and find that volatility estimation using the RGARCH-type models outperforms the GARCH model and is comparable to other asymmetric GARCH models.
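
    The RGARCH variants modify the conditional-variance recursion, but the estimation scaffolding is the same as for a plain GARCH(1,1). A self-contained Gaussian maximum-likelihood sketch on simulated returns, with a crude stationarity penalty; it is a baseline illustration, not the paper's RGARCH-Exp model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(12)
T = 3000
omega, alpha, beta = 0.05, 0.08, 0.90
r = np.empty(T)
s2 = omega / (1 - alpha - beta)
for t in range(T):                       # simulate GARCH(1,1) returns
    r[t] = np.sqrt(s2) * rng.normal()
    s2 = omega + alpha * r[t]**2 + beta * s2

def neg_loglik(theta):
    """Gaussian negative log-likelihood of GARCH(1,1) (constants dropped)."""
    w, a, b = theta
    if w <= 0 or a < 0 or b < 0 or a + b >= 1:
        return 1e10                      # crude stationarity penalty
    h = np.empty(T)
    h[0] = r.var()
    for t in range(1, T):
        h[t] = w + a * r[t-1]**2 + b * h[t-1]
    return 0.5 * np.sum(np.log(h) + r**2 / h)

res = minimize(neg_loglik, x0=[0.1, 0.1, 0.8], method="Nelder-Mead")
print("omega, alpha, beta =", res.x)     # should be near 0.05, 0.08, 0.90
```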

  14. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that accounts for the data. After a review of the general method, we present an implementation of the procedure in the framework of the R project, followed by some experiments using a Monte Carlo method.

  15. Bayesian estimation of dose rate effectiveness

    International Nuclear Information System (INIS)

    Arnish, J.J.; Groer, P.G.

    2000-01-01

    A Bayesian statistical method was used to quantify the effectiveness of high dose rate 137 Cs gamma radiation at inducing fatal mammary tumours and increasing the overall mortality rate in BALB/c female mice. The Bayesian approach considers both the temporal and dose dependence of radiation carcinogenesis and total mortality. This paper provides the first direct estimation of dose rate effectiveness using Bayesian statistics. This statistical approach provides a quantitative description of the uncertainty of the factor characterising the dose rate in terms of a probability density function. The results show that a fixed dose from 137 Cs gamma radiation delivered at a high dose rate is more effective at inducing fatal mammary tumours and increasing the overall mortality rate in BALB/c female mice than the same dose delivered at a low dose rate. (author)

  16. Health effects estimation code development for accident consequence analysis

    International Nuclear Information System (INIS)

    Togawa, O.; Homma, T.

    1992-01-01

    As part of a computer code system for nuclear reactor accident consequence analysis, two computer codes have been developed for estimating health effects expected to occur following an accident. Health effects models used in the codes are based on the models of NUREG/CR-4214 and are revised for the Japanese population on the basis of the data from the reassessment of the radiation dosimetry and information derived from epidemiological studies on atomic bomb survivors of Hiroshima and Nagasaki. The health effects models include early and continuing effects, late somatic effects and genetic effects. The values of some model parameters are revised for early mortality. The models are modified for predicting late somatic effects such as leukemia and various kinds of cancers. The models for genetic effects are the same as those of NUREG. In order to test the performance of one of these codes, it is applied to the U.S. and Japanese populations. This paper provides descriptions of health effects models used in the two codes and gives comparisons of the mortality risks from each type of cancer for the two populations. (author)

  17. [Estimation of the effect derived from wind erosion of soil and dust emission in Tianjin suburbs on the central district based on WEPS model].

    Science.gov (United States)

    Chen, Li; Han, Ting-Ting; Li, Tao; Ji, Ya-Qin; Bai, Zhi-Peng; Wang, Bin

    2012-07-01

    Due to the lack of a prediction model for wind erosion in China and the slow development of such models, this study aims to predict the wind erosion of soil and the resulting dust emission, and to develop a prediction model for wind erosion in Tianjin, by investigating the structure, parameter systems and the relationships among the parameter systems of wind erosion prediction models for typical areas, using the U.S. Wind Erosion Prediction System (WEPS) as reference. Based on remote sensing techniques and test data, a parameter system was established for the prediction model of wind erosion and dust emission, and a model was developed that is suitable for the prediction of wind erosion and dust emission in Tianjin. Tianjin was divided into 11 080 blocks with a resolution of 1 × 1 km², among which 7 778 dust-emitting blocks were selected. The parameters of the blocks were localized, including longitude, latitude, elevation and direction, etc. The database files of the blocks were localized, including the wind file, climate file, soil file and management file. The weps.run file was edited. Based on Microsoft Visual Studio 2008, secondary development was done in C++, and the dust fluxes of the 7 778 blocks were estimated, including creep and saltation fluxes, suspension fluxes and PM10 fluxes. Based on the parameters of wind tunnel experiments in Inner Mongolia, soil measurement data and climate data from the suburbs of Tianjin, the wind erosion modulus, wind erosion fluxes, dust emission modulus and dust release fluxes were calculated for the four seasons and the whole year in the suburbs of Tianjin. In 2009, the total creep and saltation fluxes, suspension fluxes and PM10 fluxes in the suburbs of Tianjin were 2.54 × 10^6 t, 1.25 × 10^7 t and 9.04 × 10^5 t, respectively, among which the parts directed toward the central district were 5.61 × 10^5 t, 2.89 × 10^6 t and 2.03 × 10^5 t, respectively.

  18. Lag space estimation in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...

  19. Revised nonstochastic health effects models

    International Nuclear Information System (INIS)

    Yaniv, S.S.; Scott, B.R.

    1991-01-01

    In 1989, the U.S. Nuclear Regulatory Commission (NRC) published a revision of the 1985 report, Health Effects Models for Nuclear Power Plant Accident Consequence Analysis, NUREG/CR-4214, that included models for early occurring and continuing nonstochastic effects, cancers and thyroid nodules, and genetic effects. This paper discusses specific models for lethality from early occurring and continuing effects. For brevity, hematopoietic-syndrome lethality is called hematopoietic death, pulmonary-syndrome lethality is called pulmonary death, and gastrointestinal-syndrome lethality is called gastrointestinal death. Two-parameter Weibull risk functions are recommended for estimating the risk of hematopoietic, pulmonary, or gastrointestinal death. The risks are obtained indirectly by using hazard functions; as a result, this type of approach has been called hazard-function modeling and the models generated are called hazard-function models. In the 1989 NUREG/CR-4214 report, changes were made in the parameter values for a number of effects, and the models used to estimate hematopoietic and pulmonary deaths were substantially revised. Upper and lower estimates of model parameters are provided for all early health effects models. In this paper, we discuss the 1989 models for hematopoietic and pulmonary deaths, highlighting the differences between the 1989 and 1985 models. In addition, we give the reasons why the 1985 models were modified
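
    A sketch of the two-parameter Weibull hazard form these models use: risk = 1 - exp(-H(D)) with cumulative hazard H(D) = ln(2) * (D/D50)^V, so risk is 0.5 at the median lethal dose D50 and V controls the steepness. The parameter values below are illustrative only, not the NUREG/CR-4214 estimates.

```python
import numpy as np

def weibull_risk(dose, d50, shape):
    """Two-parameter Weibull hazard-function model:
    H = ln(2) * (dose/d50)**shape, risk = 1 - exp(-H) (risk = 0.5 at d50)."""
    H = np.log(2.0) * (dose / d50) ** shape
    return 1.0 - np.exp(-H)

doses = np.array([1.0, 2.0, 3.0, 4.0, 6.0])   # Gy, illustrative grid
# Illustrative (not NUREG/CR-4214) parameter values:
print("hematopoietic:", weibull_risk(doses, d50=3.8, shape=5.0))
print("pulmonary:    ", weibull_risk(doses, d50=10.0, shape=7.0))
```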

  20. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithm (GA) optimization procedure for the estimation of such parameters. The Genetic Algorithm's search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. One possible use of this information is to create and update an archive with the set of best solutions found at each generation, and then to analyze the evolution of the statistics of the archive over successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution, with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as in most optimization procedures, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which have little influence on the model outputs. In this sense, besides estimating the parameter values efficiently, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output. The

  1. Extreme Quantile Estimation in Binary Response Models

    Science.gov (United States)

    1990-03-01


  2. Robust Estimation and Forecasting of the Capital Asset Pricing Model

    NARCIS (Netherlands)

    G. Bian (Guorui); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2013-01-01

    textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more

  3. Robust Estimation and Forecasting of the Capital Asset Pricing Model

    NARCIS (Netherlands)

    G. Bian (Guorui); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2010-01-01

    textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more

  4. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2009-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  5. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2010-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  6. Performances Of Estimators Of Linear Models With Autocorrelated ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite are quite similar to their properties when the sample size is infinite, although ...

  7. Crash data modeling with a generalized estimator.

    Science.gov (United States)

    Ye, Zhirui; Xu, Yueru; Lord, Dominique

    2018-05-11

    The investigation of relationships between traffic crashes and relevant factors is important in traffic safety management. Various methods have been developed for modeling crash data. In real-world scenarios, crash data often display the characteristics of over-dispersion. However, on occasion, some crash datasets have exhibited under-dispersion, especially in cases where the data are conditioned upon the mean. The commonly used models (such as the Poisson and the NB regression models) have limitations in coping with various degrees of dispersion. In light of this, a generalized event count (GEC) model, which can handle over-, equi-, and under-dispersed data, is proposed in this study. This model was first applied to case studies using data from Toronto, characterized by over-dispersion, and then to crash data from railway-highway crossings in Korea, characterized by under-dispersion. The results from the GEC model were compared with those from the negative binomial and the hyper-Poisson models. The case studies show that the proposed model provides good performance for crash data characterized by over- and under-dispersion. Moreover, the proposed model simplifies the modeling process and the prediction of crash data. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Adult head CT scans: the uncertainties of effective dose estimates

    International Nuclear Information System (INIS)

    Gregory, Kent J.; Bibbo, Giovanni; Pattison, John E.

    2008-01-01

    sizes and positions within patients, and advances in CT scanner design that have not been taken into account by the effective dose estimation methods. The analysis excludes uncertainties due to variation in patient head size and the size of the model heads. For each of the four dose estimation methods analysed, the smallest and largest uncertainties (stated at the 95% confidence interval) were: 20-31% (Nagel), 14-28% (ImpaCT), 20-36% (Wellhoefer) and 21-32% (DLP). In each case, the smallest dose estimate uncertainties apply when the CT Dose Index for the scanner has been measured. In general, each of the four methods provides reasonable estimates of effective dose from head CT scans, with the ImpaCT method having marginally smaller uncertainties. This uncertainty analysis method may be applied to other types of CT scans, such as chest, abdomen and pelvis studies, and may reveal where improvements can be made to reduce the uncertainty of those effective dose estimates. As identified in the BEIR VII report (2006), improvement in the uncertainty of effective dose estimates for individuals is expected to lead to a greater understanding of the hazards posed by diagnostic radiation exposures. (author)

  9. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

    Full Text Available The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct-current load. In order to estimate the parameters, wind speed data were registered at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed in the PSIM software, and the variables registered from that simulation were used to estimate the parameters. The wind generation system model, together with the estimated parameters, is an excellent representation of the detailed model, yet offers greater flexibility than the model programmed in PSIM.

  10. Applicability of models to estimate traffic noise for urban roads.

    Science.gov (United States)

    Melo, Ricardo A; Pimentel, Roberto L; Lacerda, Diego M; Silva, Wekisley M

    2015-01-01

    Traffic noise is a highly relevant environmental impact in cities, and models to estimate traffic noise can be useful tools to guide mitigation measures. In this paper, the applicability of models to estimate noise levels produced by a continuous flow of vehicles on urban roads is investigated. The aim is to identify which models are more appropriate for estimating traffic noise in urban areas, since several of the available models were conceived to estimate noise from highway traffic. First, measurements of traffic noise, vehicle count and speed were carried out on five arterial urban roads of a Brazilian city. Together with geometric measurements of lane width and the distance from the noise meter to the lanes, these data were input into several models to estimate traffic noise. The predicted noise levels were then compared to the respective measured counterparts for each road investigated. In addition, a chart showing the mean differences in noise between estimations and measurements is presented to evaluate the overall performance of the models. Measured Leq values varied from 69 to 79 dB(A) for traffic flows varying from 1618 to 5220 vehicles/h. Mean noise level differences between estimations and measurements for all urban roads investigated ranged from -3.5 to 5.5 dB(A). According to the results, deficiencies of some models are discussed, while other models are identified as applicable to noise estimation on urban roads in a condition of continuous flow. Key issues in applying such models to urban roads are highlighted.
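
    None of the evaluated models is specified in the abstract. The sketch below illustrates the comparison procedure with a simplified, hypothetical log-flow noise model; the coefficients, speeds and distances are invented, and only the flow range and measured Leq range come from the paper.

        import numpy as np

        def leq_estimate(q, v, d, a=41.0, b=10.0, c=33.0):
            """Hypothetical highway-style model: Leq in dB(A) grows with
            log10(flow q, veh/h) and log10(speed v, km/h), and decays with
            log10(distance d, m). Coefficients are illustrative only."""
            return (a + b * np.log10(q) + c * np.log10(v / 50.0)
                    - 10.0 * np.log10(d / 10.0))

        q = np.array([1618, 2500, 3400, 4100, 5220])        # veh/h (paper's range)
        v = np.array([50.0, 55.0, 48.0, 60.0, 52.0])        # km/h, hypothetical
        d = np.array([12.0, 10.0, 15.0, 9.0, 11.0])         # m, hypothetical
        measured = np.array([69.0, 72.5, 74.0, 76.0, 79.0]) # dB(A), hypothetical

        estimated = leq_estimate(q, v, d)
        print("mean difference (est - meas):",
              round(np.mean(estimated - measured), 1), "dB(A)")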

  11. Risk estimates for the health effects of alpha radiation

    International Nuclear Information System (INIS)

    Thomas, D.C.; McNeill, K.G.

    1981-09-01

    This report provides risk estimates for various health effects of alpha radiation. Human and animal data have been used to characterize the shapes of dose-response relations and the effects of various modifying factors, but quantitative risk estimates are based solely on human data: for lung cancer, on miners in the Colorado plateau, Czechoslovakia, Sweden, Ontario and Newfoundland; for bone and head cancers, on radium dial painters and radium-injected patients. Slopes of dose-response relations for lung cancer show a tendency to decrease with increasing dose. Linear extrapolation is unlikely to underestimate the excess risk at low doses by more than a factor of 1.5. Under the linear cell-killing model, our best estimate

  12. Time improvement of photoelectric effect calculation for absorbed dose estimation

    International Nuclear Information System (INIS)

    Massa, J M; Wainschenker, R S; Doorn, J H; Caselli, E E

    2007-01-01

    Ionizing radiation therapy is a very useful tool in cancer treatment, and determining the absorbed dose in human tissue is essential to an effective treatment. A mathematical model based on affected areas is the most suitable tool to estimate the absorbed dose. Lately, Monte Carlo based techniques have become the most reliable, but they are computationally expensive, so programs that calculate absorbed dose must trade estimation quality against computing time. This paper describes an optimized method for calculating the photoelectron polar angle in the photoelectric effect, which is significant for estimating the energy deposited in human tissue. In the case studies, the time cost reduction nearly reached 86%, meaning that the time needed for the calculation is approximately 1/7th of that of the non-optimized approach, while keeping precision unchanged.
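
    The paper's specific optimization is not detailed in the abstract; a standard way to obtain this kind of speed-up is to replace a per-photon numerical solve with a precomputed inverse-CDF lookup table. A hedged sketch follows (the angular density is a simple stand-in, not the actual photoelectron angular distribution):

        import numpy as np

        # Stand-in angular density f(theta) ~ sin(theta) * cos(theta)**2 on [0, pi].
        theta = np.linspace(0.0, np.pi, 2048)
        pdf = np.sin(theta) * np.cos(theta) ** 2
        cdf = np.cumsum(pdf)
        cdf /= cdf[-1]                      # normalized, nondecreasing

        def sample_polar_angles(n, rng=np.random.default_rng()):
            """Table-based inverse-CDF sampling: O(1) interpolation per
            photon instead of solving for the angle numerically."""
            u = rng.random(n)
            return np.interp(u, cdf, theta)

        angles = sample_polar_angles(1_000_000)
        print("mean polar angle (rad):", round(angles.mean(), 3))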

  13. Lagrangian speckle model and tissue-motion estimation--theory.

    Science.gov (United States)

    Maurice, R L; Bertrand, M

    1999-07-01

    It is known that when a tissue is subjected to movements such as rotation, shearing, scaling, etc., the resulting changes in speckle patterns act as a noise source, often responsible for most of the displacement-estimate variance. From a modeling point of view, these changes can be thought of as resulting from two mechanisms: one is the motion of the speckles and the other the alteration of their morphology. In this paper, we propose a new tissue-motion estimator to counteract these speckle decorrelation effects. The estimator is based on a Lagrangian description of the speckle motion, which allows us to follow local characteristics of the speckle field as if they were a material property. This leads to an analytical description of the decorrelation that enables the derivation of an appropriate inverse filter for speckle restoration. The filter is appropriate for a linear geometrical transformation (LT) of the scattering function, i.e., a constant-strain region of interest (ROI). As the LT itself is a parameter of the filter, the tissue-motion estimator can be formulated as a nonlinear minimization problem, seeking the best match between the pre-motion image and a restored-speckle post-motion image. The method is tested using simulated radio-frequency (RF) images of tissue undergoing axial shear.
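
    As a toy, one-dimensional analogue of the matching step (not the authors' full Lagrangian speckle filter), the sketch below recovers an axial scaling factor by minimizing the sum of squared differences between a simulated pre-motion RF line and its warped post-motion counterpart.

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 1.0, 512)
        pre = rng.standard_normal(512)             # stand-in RF speckle line
        true_scale = 1.02                          # 2% simulated axial strain
        post = np.interp(x * true_scale, x, pre)   # warped (compressed) line

        def ssd(scale):
            # Warp the pre-motion line by the candidate scale and compare.
            return np.sum((np.interp(x * scale, x, pre) - post) ** 2)

        res = minimize_scalar(ssd, bounds=(0.95, 1.05), method="bounded")
        print("estimated scale:", round(res.x, 4))   # ~1.02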

  14. Use of econometric models to estimate expenditure shares.

    Science.gov (United States)

    Trogdon, Justin G; Finkelstein, Eric A; Hoerger, Thomas J

    2008-08-01

    To investigate the use of regression models to calculate disease-specific shares of medical expenditures. Medical Expenditure Panel Survey (MEPS), 2000-2003. Theoretical investigation and secondary data analysis. Condition files used to define the presence of 10 medical conditions. Incremental effects of conditions on expenditures, expressed as a fraction of total expenditures, cannot generally be interpreted as shares. When the presence of one condition increases treatment costs for another condition, summing condition-specific shares leads to double-counting of expenditures. Condition-specific shares generated from multiplicative models should not be summed. We provide an algorithm that allows estimates based on these models to be interpreted as shares and summed across conditions.
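
    The authors' exact algorithm is not given in the abstract. The sketch below shows the generic problem it addresses: raw incremental expenditures can sum to more than the total attributable amount, so they are rescaled before being read as shares (all figures hypothetical).

        import numpy as np

        total_expenditure = 1_000.0
        # Incremental expenditures attributed to each condition by a regression
        # model; because of interactions, their raw sum double-counts dollars.
        incremental = np.array([300.0, 250.0, 200.0])   # hypothetical
        jointly_attributable = 600.0                    # hypothetical model output

        # Proportional rescaling: shares now sum to the joint attributable
        # fraction (0.6) instead of the double-counted 0.75.
        shares = (incremental / incremental.sum()
                  * jointly_attributable / total_expenditure)
        print(shares, shares.sum())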

  15. Conditional density estimation using fuzzy GARCH models

    NARCIS (Netherlands)

    Almeida, R.J.; Bastürk, N.; Kaymak, U.; Costa Sousa, da J.M.; Kruse, R.; Berthold, M.R.; Moewes, C.; Gil, M.A.; Grzegorzewski, P.; Hryniewicz, O.

    2013-01-01

    Time series data exhibit complex behavior, including non-linearity and path-dependency. This paper proposes a flexible fuzzy GARCH model that can capture different properties of data, such as skewness, fat tails and multimodality, in one single model. Furthermore, additional information and

  16. Integrated traffic conflict model for estimating crash modification factors.

    Science.gov (United States)

    Shahdah, Usama; Saccomanno, Frank; Persaud, Bhagwant

    2014-10-01

    Crash modification factors (CMFs) for road safety treatments are usually obtained through observational models based on reported crashes. Observational Bayesian before-and-after methods have been applied to obtain more precise estimates of CMFs by accounting for the regression-to-the-mean bias inherent in naive methods. However, sufficient crash data reported over an extended period of time are needed to provide reliable estimates of treatment effects, a requirement that can be a challenge for certain types of treatment. In addition, these studies require that sites analyzed actually receive the treatment to which the CMF pertains. Another key issue with observational approaches is that they are not causal in nature, and as such, cannot provide a sound "behavioral" rationale for the treatment effect. Surrogate safety measures based on high risk vehicle interactions and traffic conflicts have been proposed to address this issue by providing a more "causal perspective" on lack of safety for different road and traffic conditions. The traffic conflict approach has been criticized, however, for lacking a formal link to observed and verified crashes, a difficulty that this paper attempts to resolve by presenting and investigating an alternative approach for estimating CMFs using simulated conflicts that are linked formally to observed crashes. The integrated CMF estimates are compared to estimates from an empirical Bayes (EB) crash-based before-and-after analysis for the same sample of treatment sites. The treatment considered involves changing left turn signal priority at Toronto signalized intersections from permissive to protected-permissive. The results are promising in that the proposed integrated method yields CMFs that closely match those obtained from the crash-based EB before-and-after analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.
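
    As a hedged reminder of the crash-based benchmark used for comparison, the sketch below computes an empirical Bayes expected after-period count and a simple CMF; odds-ratio variance corrections and the simulated-conflict linkage itself are omitted, and all inputs are invented.

        import numpy as np

        def eb_cmf(mu_before, phi, observed_before, ratio_after_before,
                   observed_after):
            """Hauer-style EB before-after sketch.
            mu_before: SPF-predicted before-period crashes per site
            phi:       negative binomial overdispersion parameter
            ratio_after_before: SPF ratio adjusting for duration/traffic."""
            w = 1.0 / (1.0 + mu_before / phi)                # shrinkage weight
            eb_before = w * mu_before + (1.0 - w) * observed_before
            expected_after = eb_before * ratio_after_before  # untreated expectation
            return observed_after.sum() / expected_after.sum()

        cmf = eb_cmf(mu_before=np.array([5.2, 3.1]), phi=2.0,
                     observed_before=np.array([7, 2]),
                     ratio_after_before=1.1,
                     observed_after=np.array([4, 2]))
        print("CMF estimate:", round(cmf, 2))    # <1 implies a safety benefit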

  17. LBM estimation of thermal conductivity in meso-scale modelling

    International Nuclear Information System (INIS)

    Grucelski, A

    2016-01-01

    There is growing engineering interest in more rigorous prediction of effective transport coefficients for multicomponent, geometrically complex materials. We present the main assumptions and constituents of a meso-scale model for simulating coal or biomass devolatilisation with the lattice Boltzmann method. Estimated values of the thermal conductivity coefficient of coal (solids), pyrolytic gases and the air matrix are presented for a non-steady state, accounting for chemical reactions in the fluid flow and heat transfer. (paper)

  18. A Descriptive Evaluation of Automated Software Cost-Estimation Models,

    Science.gov (United States)

    1986-10-01

    … (Version 1.03D), PCOC (Version 7.01), PRICE S, SLIM (Version 1.1), SoftCost (Version 5.1), SPQR/20 (Version 1.1) and WICOMO (Version 1.3). These … produce detailed GANTT and PERT charts. SPQR/20 is based on a cost model developed at ITT. In addition to cost, schedule, and staffing estimates, it … cases and test runs required, and the effectiveness of pre-test and test activities. SPQR/20 also predicts enhancement and maintenance activities.

  19. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty occurs when a model is selected based on one data set and subsequently applied for statistical inference, because the "correct" model is not selected with certainty. When the selection and inference are based on the same dataset, additional problems arise due to the correlation of the two stages (selection and inference). In this paper, model selection uncertainty is considered and model averaging is proposed. The proposal is related to the James-Stein theory of estimating three or more parameters from independent normal observations. We suggest that a model averaging scheme taking the selection procedure into account can be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
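
    As a generic illustration of model averaging (smoothed-AIC weights rather than the authors' James-Stein-based scheme), the sketch below combines the estimates of three candidate models; all numbers are hypothetical.

        import numpy as np

        # Per-model estimates of the same mean vector, and each model's AIC.
        estimates = np.array([[1.9, 0.2, 3.1],
                              [2.1, 0.0, 3.0],
                              [2.0, 0.1, 2.8]])
        aic = np.array([102.3, 101.1, 104.9])

        # Smoothed AIC weights: w_m proportional to exp(-AIC_m / 2).
        w = np.exp(-0.5 * (aic - aic.min()))
        w /= w.sum()

        averaged = w @ estimates     # acknowledges selection uncertainty
        print("weights:", np.round(w, 3), "averaged estimate:", averaged)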

  20. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are obtained by modeling a remote-sensing system: the remote-sensed data are simulated by adding Gaussian noise to the concentration values generated by the transport model. Model parameters are then estimated from the simulated data using a least-squares batch processor. The required resolution, sensor array size, and number and location of sensor readings can be determined from the accuracies of the resulting parameter estimates.
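
    A hedged sketch of the batch least-squares step, using a plain two-dimensional instantaneous-release diffusion solution as a stand-in for the paper's shear-diffusion model; all numbers are hypothetical.

        import numpy as np
        from scipy.optimize import least_squares

        def concentration(params, x, y, t):
            # Instantaneous point release of mass M, advected at speed u,
            # spreading with diffusivities Dx, Dy (shear terms omitted).
            M, u, Dx, Dy = params
            return (M / (4.0 * np.pi * t * np.sqrt(Dx * Dy))
                    * np.exp(-(x - u * t) ** 2 / (4.0 * Dx * t)
                             - y ** 2 / (4.0 * Dy * t)))

        rng = np.random.default_rng(3)
        x, y = np.meshgrid(np.linspace(-50, 150, 20), np.linspace(-50, 50, 20))
        t = 3600.0
        truth = (1e4, 0.02, 0.5, 0.2)
        data = concentration(truth, x, y, t) + rng.normal(0.0, 0.05, x.shape)

        fit = least_squares(lambda p: (concentration(p, x, y, t) - data).ravel(),
                            x0=(5e3, 0.01, 1.0, 1.0),
                            bounds=([0.0, -1.0, 1e-4, 1e-4],
                                    [1e6, 1.0, 10.0, 10.0]))
        print("estimated (M, u, Dx, Dy):", np.round(fit.x, 3))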

  1. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume that both the number of covariates in the model and the number of candidate variables can increase with the number of observations, and that the number of candidate variables is possibly larger than the number of observations. We show that the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency) and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
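
    A minimal adaLASSO sketch: a first-stage ridge fit supplies the adaptive weights, and the second-stage weighted lasso is obtained by rescaling the covariates (tuning constants are illustrative, not from the paper).

        import numpy as np
        from sklearn.linear_model import Lasso, Ridge

        rng = np.random.default_rng(7)
        n, p = 200, 50
        X = rng.standard_normal((n, p))
        beta = np.zeros(p)
        beta[:3] = [1.5, -2.0, 1.0]                 # sparse truth
        y = X @ beta + rng.standard_normal(n)

        # Stage 1: initial estimate (ridge) -> adaptive penalty weights.
        init = Ridge(alpha=1.0).fit(X, y).coef_
        w = 1.0 / (np.abs(init) + 1e-6)

        # Stage 2: lasso on covariates scaled by 1/w equals a lasso with
        # per-coefficient penalties lambda * w_j, i.e. the adaptive lasso.
        fit = Lasso(alpha=0.05).fit(X / w, y)
        coef = fit.coef_ / w
        print("selected variables:", np.flatnonzero(coef != 0))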

  2. Optimal covariance selection for estimation using graphical models

    OpenAIRE

    Vichik, Sergey; Oshman, Yaakov

    2011-01-01

    We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...

  3. Temporal rainfall estimation using input data reduction and model inversion

    Science.gov (United States)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts, there is a need to understand the uncertainties associated with temporal rainfall and model parameters. Estimating temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of the rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall for poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm, and a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimating the wavelet approximation coefficients of lower-order decomposition structures produced the most realistic temporal rainfall distributions, and these rainfall estimates all simulated streamflow superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a

  4. Estimating Lead (Pb) Bioavailability In A Mouse Model

    Science.gov (United States)

    Children are exposed to Pb through ingestion of Pb-contaminated soil. Soil Pb bioavailability is estimated using animal models or with chemically defined in vitro assays that measure bioaccessibility. However, bioavailability estimates in a large animal model (e.g., swine) can be...

  5. Estimating Dynamic Equilibrium Models using Macro and Financial Data

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; Posch, Olaf; van der Wel, Michel

    We show that including financial market data at daily frequency, along with macro series at standard lower frequency, facilitates statistical inference on structural parameters in dynamic equilibrium models. Our continuous-time formulation conveniently accounts for the difference in observation...... of the estimators and estimate the model using 20 years of U.S. macro and financial data....

  6. mathematical models for estimating radio channels utilization

    African Journals Online (AJOL)

    2017-08-08

    Mathematical models for radio channels utilization assessment by real-time flows transfer in … data transmission networks application having dynamic topology … Journal of Applied Mathematics and Statistics, 56(2): 85–90.

  7. Linear Regression Models for Estimating True Subsurface ...

    Indian Academy of Sciences (India)

    The objective is to minimize the processing time and computer memory required to carry out inversion … to the mainland by two long bridges … In this approach, the model converges when the squared sum of the differences …

  8. A matlab framework for estimation of NLME models using stochastic differential equations: applications for estimation of insulin secretion rates.

    Science.gov (United States)

    Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V

    2007-10-01

    The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model that is able to handle serially correlated residuals, which typically arise from structural mis-specification of the true underlying model. The use of SDEs also opens up new tools for model development and easily allows unknown inputs and parameters to be tracked over time. An algorithm for maximum likelihood estimation of the model has previously been proposed, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two applications which focus on the ability of the model to estimate unknown inputs, facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application, the model is extended to also give an estimate of the time-varying liver extraction based on both C-peptide and insulin measurements.

  9. GLUE Based Uncertainty Estimation of Urban Drainage Modeling Using Weather Radar Precipitation Estimates

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2011-01-01

    Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model to simulate the runoff from a small catchment in Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate...
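
    The GLUE recipe itself is easy to state; below is a hedged toy version with a one-parameter recession "model" standing in for the urban drainage model, and a Nash-Sutcliffe informal likelihood.

        import numpy as np

        rng = np.random.default_rng(11)
        t = np.linspace(0.0, 5.0, 50)
        obs = 2.0 * np.exp(-1.3 * t) + rng.normal(0.0, 0.05, t.size)

        def model(k):
            return 2.0 * np.exp(-k * t)          # toy runoff recession

        def nse(sim):                            # informal likelihood measure
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        # 1. Monte Carlo sampling of the parameter from a prior range.
        k_samples = rng.uniform(0.2, 3.0, 5000)
        scores = np.array([nse(model(k)) for k in k_samples])

        # 2. Retain "behavioural" runs above a likelihood threshold.
        behavioural = scores > 0.7
        sims = np.array([model(k) for k in k_samples[behavioural]])

        # 3. Uncertainty band from the behavioural ensemble (GLUE proper
        #    uses likelihood-weighted quantiles; plain percentiles here).
        lower, upper = np.percentile(sims, [5, 95], axis=0)
        print("behavioural runs:", behavioural.sum())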

  10. Ballistic model to estimate microsprinkler droplet distribution

    Directory of Open Access Journals (Sweden)

    Conceição Marco Antônio Fonseca

    2003-01-01

    Full Text Available Experimental determination of microsprinkler droplet diameters is difficult and time-consuming; it can, however, be achieved using ballistic models. The present study aimed to compare simulated and measured values of microsprinkler droplet diameters. Experimental measurements were made using the flour method, and simulations used the ballistic model adopted by the SIRIAS computational software. Drop diameters quantified in the experiment varied between 0.30 mm and 1.30 mm, while the simulated diameters varied between 0.28 mm and 1.06 mm. The greatest differences between simulated and measured values were registered at the largest radial distance from the emitter. The model's performance in simulating the microsprinkler drop distribution was classified as excellent.

  11. Estimation of some stochastic models used in reliability engineering

    International Nuclear Information System (INIS)

    Huovinen, T.

    1989-04-01

    This work studies the estimation of some stochastic models used in reliability engineering, where continuous probability distributions are used as models for the lifetime of technical components. We consider the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. The maximum likelihood method is used to estimate the distributions from observed data, which may be either complete or censored. We consider models based on homogeneous Poisson processes, such as the gamma-Poisson and lognormal-Poisson models, for the analysis of failure intensity, and a beta-binomial model for the analysis of failure probability. The parameters of three of the models are estimated by the method of moments, and in the case of the gamma-Poisson and beta-binomial models also by the maximum likelihood method. Many mathematical and statistical problems that arise in reliability engineering can be solved with point processes; here we consider the statistical analysis of non-homogeneous Poisson processes to describe the failure behavior of a set of components with a Weibull intensity function, using maximum likelihood to estimate the parameters of the Weibull model. A common cause failure can seriously reduce the reliability of a system; we therefore consider a binomial failure rate (BFR) model, an application of marked point processes for modelling common cause failures in a system. The parameters of the binomial failure rate model are estimated with the maximum likelihood method.
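
    For the complete-data case, the Weibull maximum likelihood step is a one-liner in SciPy; a minimal sketch with simulated lifetimes (censored data require writing out the likelihood explicitly).

        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(5)
        lifetimes = weibull_min.rvs(c=1.8, scale=1000.0, size=200,
                                    random_state=rng)

        # ML fit with the location pinned at zero (two-parameter Weibull).
        shape, loc, scale = weibull_min.fit(lifetimes, floc=0)
        print(f"shape ~ {shape:.2f}, scale ~ {scale:.0f} hours")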

  12. Effect of process variables on the Drucker-Prager cap model and residual stress distribution of tablets estimated by the finite element method.

    Science.gov (United States)

    Hayashi, Yoshihiro; Otoguro, Saori; Miura, Takahiro; Onuki, Yoshinori; Obata, Yasuko; Takayama, Kozo

    2014-01-01

    A multivariate statistical technique was applied to clarify the causal correlation between variables in the manufacturing process and the residual stress distribution of tablets. Theophylline tablets were prepared according to a Box-Behnken design using the wet granulation method. Water amount (X1), kneading time (X2), lubricant-mixing time (X3), and compression force (X4) were selected as design variables. The Drucker-Prager cap (DPC) model was selected for modeling the mechanical behavior of pharmaceutical powders. Simulation parameters, such as Young's modulus, Poisson ratio, internal friction angle, plastic deformation parameters, and the initial density of the powder, were measured. Multiple regression analysis demonstrated that the simulation parameters were significantly affected by the process variables. The constructed DPC models were fed into a finite element method (FEM) analysis, and the mechanical behavior of pharmaceutical powders during the tableting process was analyzed. The analysis revealed that the residual stress distribution of tablets increased with increasing X4, and that an interaction between X2 and X3 also had an effect on the shear and x-axial residual stress of tablets. Bayesian network analysis revealed causal relationships between the process variables, simulation parameters, residual stress distribution, and pharmaceutical responses of the tablets. These results demonstrate the potential of the FEM as a tool to help improve our understanding of the residual stress of tablets and to optimize process variables, which not only affect tablet characteristics but also carry the risk of causing tableting problems.

  13. Cokriging model for estimation of water table elevation

    International Nuclear Information System (INIS)

    Hoeksema, R.J.; Clapp, R.B.; Thomas, A.L.; Hunley, A.E.; Farrow, N.D.; Dearstone, K.C.

    1989-01-01

    In geological settings where the water table is a subdued replica of the ground surface, cokriging can be used to estimate the water table elevation at unsampled locations on the basis of values of water table elevation and ground surface elevation measured at wells and at points along flowing streams. The ground surface elevation at the estimation point must also be determined. In the proposed method, separate models are generated for the spatial variability of the water table and ground surface elevation and for the dependence between these variables. After the models have been validated, cokriging or minimum variance unbiased estimation is used to obtain the estimated water table elevations and their estimation variances. For the Pits and Trenches area (formerly a liquid radioactive waste disposal facility) near Oak Ridge National Laboratory, water table estimation along a linear section, both with and without the inclusion of ground surface elevation as a statistical predictor, illustrates the advantages of the cokriging model

  14. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and Miner's rule. A threshold model is used for degradation modeling and failure criteria determination, with the time-dependent accumulated damage assumed linearly proportional to the time-dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull...
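
    A hedged sketch of the damage-accumulation core: rainflow counting is assumed already done, and the Basquin-type S-N curve below is illustrative, not the paper's solder-joint model.

        import numpy as np

        def cycles_to_failure(stress_amplitude, C=1e12, m=3.0):
            """Illustrative Basquin-type S-N curve: N = C * S**(-m)."""
            return C * stress_amplitude ** (-m)

        # Rainflow output: (stress amplitude, cycle count) pairs, hypothetical.
        rainflow_bins = [(40.0, 1200), (60.0, 300), (90.0, 25)]

        # Miner's rule: damage is the sum of cycle ratios; failure at D >= 1.
        damage = sum(n / cycles_to_failure(s) for s, n in rainflow_bins)
        print(f"damage per load block: {damage:.3e}")
        print(f"blocks to failure (D = 1): {1.0 / damage:.0f}")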

  15. Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.

  16. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent and dependent variables. When the dependent variable is categorical, a logistic regression model is used to model the odds; when its categories are ordered, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation is needed to infer population values from a sample. The purpose of this research is to estimate the parameters of a GWOLR model using R software. The estimation uses data on the number of dengue fever patients in Semarang City, with 144 villages as the observation units. The result is a local GWOLR model for each village, giving the probability of each category of dengue fever patient counts.

  17. A decision model to estimate the cost-effectiveness of intensity modulated radiation therapy (IMRT) compared to three dimensional conformal radiation therapy (3DCRT) in patients receiving radiotherapy to the prostate bed

    International Nuclear Information System (INIS)

    Carter, Hannah E.; Martin, Andrew; Schofield, Deborah; Duchesne, Gillian; Haworth, Annette; Hornby, Colin; Sidhom, Mark; Jackson, Michael

    2014-01-01

    Background: Intensity modulated radiation therapy (IMRT) is a radiation therapy technology that facilitates the delivery of an improved dose distribution with less dose to surrounding critical structures. This study estimates the longer term effectiveness and cost-effectiveness of IMRT in patients post radical prostatectomy. Methods: A Markov decision model was developed to calculate the incremental quality adjusted life years (QALYs) and costs of IMRT compared with three dimensional conformal radiation therapy (3DCRT). Costs were estimated from the perspective of the Australian health care system. Results: IMRT was both more effective and less costly than 3DCRT over 20 years, with an additional 20 QALYs gained and over $1.1 million saved per 1000 patients treated. This result was robust to plausible levels of uncertainty. Conclusions: IMRT was estimated to have a modest long term advantage over 3DCRT in terms of both improved effectiveness and reduced cost. This result was reliant on clinical judgement and interpretation of the existing literature, but provides quantitative guidance on the cost effectiveness of IMRT whilst long term trial evidence is awaited
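
    A hedged miniature of the Markov cohort mechanics behind such a model: three states, annual cycles, discounting; the transition probabilities, costs and utilities below are invented for illustration, not the study's inputs.

        import numpy as np

        # States: 0 = disease-free, 1 = recurrence, 2 = dead (hypothetical).
        P = np.array([[0.92, 0.05, 0.03],
                      [0.00, 0.85, 0.15],
                      [0.00, 0.00, 1.00]])
        utility = np.array([0.85, 0.60, 0.0])   # QALY weight per state-year
        cost = np.array([500.0, 8000.0, 0.0])   # annual cost per state

        state = np.array([1.0, 0.0, 0.0])       # cohort starts disease-free
        qalys, costs, discount = 0.0, 0.0, 0.05
        for year in range(20):
            df = 1.0 / (1.0 + discount) ** year
            qalys += df * state @ utility
            costs += df * state @ cost
            state = state @ P                   # advance the cohort one cycle

        print(f"discounted QALYs: {qalys:.2f}, discounted cost: {costs:,.0f}")
        # Running one such model per arm (IMRT vs. 3DCRT) and differencing
        # yields the incremental QALYs and costs reported above.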

  18. NONLINEAR PLANT PIECEWISE-CONTINUOUS MODEL MATRIX PARAMETERS ESTIMATION

    Directory of Open Access Journals (Sweden)

    Roman L. Leibov

    2017-09-01

    Full Text Available This paper presents a technique for estimating the matrix parameters of a nonlinear plant piecewise-continuous model using nonlinear model time responses and a random search method. One application area of piecewise-continuous models is identified. The results of applying the proposed approach to form a piecewise-continuous model of an aircraft turbofan engine are presented.

  19. Extreme gust wind estimation using mesoscale modeling

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Kruger, Andries

    2014-01-01

    , surface turbulence characteristics. In this study, we follow a theory that is different from the local gust concept as described above. In this theory, the gust at the surface is non-local; it is produced by the deflection of air parcels flowing in the boundary layer and brought down to the surface...... from the Danish site Høvsøre help us to understand the limitation of the traditional method. Good agreement was found between the extreme gust atlases for South Africa and the existing map made from a limited number of measurements across the country. Our study supports the non-local gust theory. While...... through turbulent eddies. This process is modeled using the mesoscale Weather Forecasting and Research (WRF) model. The gust at the surface is calculated as the largest winds over a layer where the averaged turbulence kinetic energy is greater than the averaged buoyancy force. The experiments have been...

  20. Model-based estimation for dynamic cardiac studies using ECT

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.; Fessler, J.A.; Hero, A.O.

    1994-01-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed

  1. Model-based estimation for dynamic cardiac studies using ECT.

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.
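
    The joint boundary-plus-kinetics estimator is specific to the paper, but the underlying ML machinery for Poisson projection data is the familiar ML-EM iteration; a hedged sketch on a toy system matrix:

        import numpy as np

        rng = np.random.default_rng(2)
        A = rng.uniform(0.0, 1.0, (60, 20))    # toy projection (system) matrix
        lam_true = rng.uniform(1.0, 10.0, 20)  # true emission intensities
        y = rng.poisson(A @ lam_true)          # noisy projection data

        lam = np.ones(20)                      # flat initialization
        sens = A.sum(axis=0)                   # sensitivity term A^T 1
        for _ in range(200):
            ratio = y / np.maximum(A @ lam, 1e-12)
            lam *= (A.T @ ratio) / sens        # multiplicative ML-EM update

        rel_err = np.linalg.norm(lam - lam_true) / np.linalg.norm(lam_true)
        print("relative error:", round(rel_err, 3))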

  2. Annual effective dose due to residential radon progeny in Sweden: Evaluations based on current risk projections models and on risk estimates from a nation-wide Swedish epidemiological study

    Energy Technology Data Exchange (ETDEWEB)

    Doi, M [National Inst. of Radiological Sciences, Chiba (Japan); Lagarde, F [Karolinska Inst., Stockholm (Sweden). Inst. of Environmental Medicine; Falk, R; Swedjemark, G A [Swedish Radiation Protection Inst., Stockholm (Sweden)

    1996-12-01

    Effective dose per unit radon progeny exposure for the Swedish population in 1992 is estimated both by current risk projection models and by a risk model based on the Swedish epidemiological study of radon and lung cancer. The resulting values range from 1.29-3.00 mSv/WLM and 2.58-5.99 mSv/WLM, respectively. Assuming a radon concentration of 100 Bq/m³, an equilibrium factor of 0.4 and an occupancy factor of 0.6 in Swedish houses, the annual effective dose for the Swedish population is estimated to be 0.43-1.98 mSv/year, which should be compared to the value of 1.9 mSv/year according to the UNSCEAR 1993 report. 27 refs, tabs, figs.
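
    The exposure arithmetic behind these numbers can be reproduced directly, assuming the standard conversions 1 WL = 3700 Bq/m³ equilibrium-equivalent radon concentration and 1 WLM = 1 WL sustained for 170 h:

        radon = 100.0            # Bq/m^3
        F = 0.4                  # equilibrium factor
        hours = 0.6 * 8760       # occupancy factor x hours per year

        eec = radon * F                                   # 40 Bq/m^3
        wlm_per_year = (eec / 3700.0) * (hours / 170.0)   # ~0.33 WLM/yr
        print(f"exposure: {wlm_per_year:.2f} WLM/yr")
        print(f"dose range: {wlm_per_year * 1.29:.2f} -"
              f" {wlm_per_year * 5.99:.2f} mSv/yr")
        # ~0.43 - 2.0 mSv/yr, consistent with the 0.43 - 1.98 mSv/yr above.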

  3. Annual effective dose due to residential radon progeny in Sweden: Evaluations based on current risk projections models and on risk estimates from a nation-wide Swedish epidemiological study

    International Nuclear Information System (INIS)

    Doi, M.; Lagarde, F.

    1996-12-01

    Effective dose per unit radon progeny exposure for the Swedish population in 1992 is estimated both by current risk projection models and by a risk model based on the Swedish epidemiological study of radon and lung cancer. The resulting values range from 1.29-3.00 mSv/WLM and 2.58-5.99 mSv/WLM, respectively. Assuming a radon concentration of 100 Bq/m³, an equilibrium factor of 0.4 and an occupancy factor of 0.6 in Swedish houses, the annual effective dose for the Swedish population is estimated to be 0.43-1.98 mSv/year, which should be compared to the value of 1.9 mSv/year according to the UNSCEAR 1993 report. 27 refs, tabs, figs

  4. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small, and a fully automatic optimization is therefore possible for estimating all … the parameter values are optimal for simulation or prediction. The data originate from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series...

  5. Robust Models for Operator Workload Estimation

    Science.gov (United States)

    2015-03-01

    … piloted aircraft (RPA) simultaneously, a vast improvement in resource utilization compared to existing operations that require several operators per … into distinct cognitive channels (visual, auditory, spatial, etc.), based on our ability to multitask effectively as long as no one channel is …

  6. Modeling reactive transport with particle tracking and kernel estimators

    Science.gov (United States)

    Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-04-01

    Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth systems. The Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species; for a well-mixed system, the reactions are controlled by the classical thermodynamic rate coefficient, and each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate this system, an infinite number of particles would be required, which is computationally unfeasible; on the other hand, a finite number of particles leads to a poorly mixed system which is limited by diffusion. Recent works have used this effect to model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect should in most cases be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle's mass that expand its region of influence, hence providing a wider region for chemical reactions over time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in diluted systems should be modeled based on alternative conceptual models and not on a limited number of particles.
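
    A hedged sketch of the concentration-recovery idea: a Gaussian KDE spreads each particle's mass over a kernel, so the estimated concentration field (and hence reaction rates computed from it) approaches the well-mixed solution even with modest particle numbers.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(4)
        particles = rng.normal(0.0, 1.0, 2000)   # plume particle positions

        # Naive binned concentration: noisy at this particle count.
        hist, edges = np.histogram(particles, bins=40, range=(-4, 4),
                                   density=True)

        # KDE concentration: smooth estimate at arbitrary points.
        kde = gaussian_kde(particles)
        x = np.linspace(-4.0, 4.0, 200)
        conc = kde(x)
        print("KDE peak concentration:", round(conc.max(), 3))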

  7. Estimation of effective dose during hysterosalpingography procedures

    International Nuclear Information System (INIS)

    Alzimamil, K.; Babikir, E.; Alkhorayef, M.; Sulieman, A.; Alsafi, K.; Omer, H.

    2014-08-01

    Hysterosalpingography (HSG) is the most frequently used diagnostic tool for evaluating the endometrial cavity and fallopian tubes using conventional x-ray or fluoroscopy. Determining patient radiation dose values from x-ray examinations provides useful guidance on where best to concentrate efforts on patient dose reduction in order to optimize the protection of patients. The aims of this study were to measure patients' entrance surface air kerma (ESAK) and effective doses, and to compare practices between different hospitals in Sudan. ESAK was measured for patients using calibrated thermoluminescent dosimeters (TLDs, GR-200A). Effective doses were estimated using National Radiological Protection Board (NRPB) software. This study was conducted in five radiological departments: two teaching hospitals (A and D), two private hospitals (B and C) and one university hospital (E). The mean ESD was 20.1 mGy, 28.9 mGy, 13.6 mGy, 58.65 mGy, 35.7, 22.4 and 19.6 mGy for hospitals A, B, C, D and E, respectively. The mean effective dose was 2.4 mSv, 3.5 mSv, 1.6 mSv, 7.1 mSv and 4.3 mSv in the same order. The study showed wide variations in the ESDs, with three of the hospitals having values above internationally reported values. The number of x-ray images, fluoroscopy time, operator skill, x-ray machine type and the clinical complexity of the procedures were shown to be major contributors to the variations reported. The results demonstrated the need for standardization of technique throughout the hospitals, and suggest that the procedures need to be optimized. Local DRLs were proposed for the entire procedure. (author)

  8. Kriging with mixed effects models

    Directory of Open Access Journals (Sweden)

    Alessio Pollice

    2007-10-01

    Full Text Available In this paper, the effectiveness of mixed effects models for estimation and prediction in spatial statistics for continuous data is reviewed in both the classical and Bayesian frameworks. A case study on agricultural data is also provided.

  9. Model improves oil field operating cost estimates

    International Nuclear Information System (INIS)

    Glaeser, J.L.

    1996-01-01

    A detailed operating cost model that forecasts operating cost profiles toward the end of a field's life should be constructed for testing depletion strategies and plans for major oil fields. Developing a good understanding of future operating cost trends is important: incorrectly forecasting the trend can result in bad decision-making regarding investments and reservoir operating strategies. Recent projects show that significant operating expense reductions can be made in the latter stages of field depletion without significantly reducing the expected ultimate recoverable reserves. Predicting future operating cost trends is especially important for operators who are currently producing a field and must forecast the economic limit of the property. For reasons presented in this article, it is usually not correct to assume either that operating expense stays fixed in dollar terms throughout the lifetime of a field, or that operating costs stay fixed on a dollar-per-barrel basis

  10. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  11. Fundamental Frequency and Model Order Estimation Using Spatial Filtering

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    In signal processing applications of harmonic-structured signals, estimates of the fundamental frequency and number of harmonics are often necessary. In real scenarios, a desired signal is contaminated by different levels of noise and interferers, which complicate the estimation of the signal parameters. In this paper, we present an estimation procedure for harmonic-structured signals in situations with strong interference using spatial filtering, or beamforming. We jointly estimate the fundamental frequency and the constrained model order through the output of the beamformers. Besides that, we extend this procedure to account for inharmonicity using unconstrained model order estimation. The simulations show that beamforming improves the performance of the joint estimates of fundamental frequency and the number of harmonics at low signal-to-interference (SIR) levels, and an experiment...

  12. Estimating varying coefficients for partial differential equation models.

    Science.gov (United States)

    Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

    2017-09-01

    Partial differential equations (PDEs) are used to model complex dynamical systems in multiple dimensions, and their parameters often have important scientific interpretations. In some applications, PDE parameters are not constant but can change depending on the values of covariates, a feature that we call varying coefficients. We propose a parameter cascading method to estimate varying coefficients in PDE models from noisy data. Our estimates of the varying coefficients are shown to be consistent and asymptotically normally distributed. The performance of our method is evaluated by a simulation study and by an empirical study estimating three varying coefficients in a PDE model arising from LIDAR data. © 2017, The International Biometric Society.

  13. Estimation of Aging Effects on LOHS for CANDU-6

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Yong Ki; Moon, Bok Ja; Kim, Seoung Rae [Nuclear Engineering Service and Solution Co. Ltd., Daejeon (Korea, Republic of)

    2014-05-15

    To evaluate Wolsong Unit 1's capacity to respond to a large-scale natural disaster exceeding the design basis, a loss of heat sink (LOHS) accident accompanied by a loss of all electric power is simulated as a beyond-design-basis accident. The analysis considers the effects of plant aging on the consequences of the LOHS accident. Various components of the primary heat transport system (PHTS) age over time; some of the important aging effects in a CANDU reactor are pressure tube (PT) diametral creep, steam generator (SG) U-tube fouling, increased feeder roughness, and feeder orifice degradation. These effects result in higher inlet header temperatures, reduced flows in some fuel channels, and higher void fractions at fuel channel outlets. Fresh and aged models are established for the analysis, where the fresh model is the circuit model simulating conditions at retubing and the aged model reflects the aged condition at 11 EFPY after retubing. The CATHENA thermal-hydraulic computer code[1] is used for the analysis of the system behavior under LOHS conditions for both the fresh and aged models. Decay heat removal is one of the most important factors in mitigating this accident, and the major aging effect on decay heat removal is the reduced heat transfer efficiency of the steam generators. Thus, the channel failure time cannot be conservatively estimated unless the aged model is applied in the analysis of this accident.

  14. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
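
    The scaling statistic is straightforward to compute from any relationship matrix; a minimal sketch with a small hypothetical genomic relationship matrix:

        import numpy as np

        def dk_statistic(K):
            """D_k = average self-relationship minus average relationship
            (as defined above); K is an n x n relationship matrix."""
            return np.mean(np.diag(K)) - np.mean(K)

        # Toy relationship matrix for four individuals (hypothetical).
        K = np.array([[1.02, 0.10, 0.05, 0.00],
                      [0.10, 0.98, 0.20, 0.03],
                      [0.05, 0.20, 1.05, 0.08],
                      [0.00, 0.03, 0.08, 0.95]])

        sigma2_hat = 2.4    # variance component estimated with this K
        print("comparable genetic variance:",
              round(dk_statistic(K) * sigma2_hat, 3))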

  15. Information matrix estimation procedures for cognitive diagnostic models.

    Science.gov (United States)

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
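
    The sandwich form itself is generic; a hedged numpy sketch, assuming per-observation score vectors at the MLE and the observed information are already available (both arrays here are synthetic placeholders):

        import numpy as np

        def sandwich_covariance(scores, observed_info):
            """A^{-1} B A^{-1} with A the observed information and B the
            empirical cross-product of the per-observation scores."""
            A_inv = np.linalg.inv(observed_info)
            B = scores.T @ scores
            return A_inv @ B @ A_inv

        rng = np.random.default_rng(6)
        scores = 0.1 * rng.standard_normal((500, 3))     # hypothetical
        observed_info = scores.T @ scores + np.eye(3)    # hypothetical, p.d.
        se = np.sqrt(np.diag(sandwich_covariance(scores, observed_info)))
        print("sandwich standard errors:", np.round(se, 4))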

  16. Ecotoxicological effects extrapolation models

    Energy Technology Data Exchange (ETDEWEB)

    Suter, G.W. II

    1996-09-01

    One of the central problems of ecological risk assessment is modeling the relationship between test endpoints (numerical summaries of the results of toxicity tests) and assessment endpoints (formal expressions of the properties of the environment that are to be protected). For example, one may wish to estimate the reduction in species richness of fishes in a stream reach exposed to an effluent and have only a fathead minnow 96 hr LC50 as an effects metric. The problem is to extrapolate from what is known (the fathead minnow LC50) to what matters to the decision maker, the loss of fish species. Models used for this purpose may be termed Effects Extrapolation Models (EEMs) or Activity-Activity Relationships (AARs), by analogy to Structure-Activity Relationships (SARs). These models have been previously reviewed in Ch. 7 and 9 of … and by an OECD workshop. This paper updates those reviews and attempts to further clarify the issues involved in the development and use of EEMs. Although there is some overlap, this paper does not repeat those reviews and the reader is referred to the previous reviews for a more complete historical perspective and for treatment of additional extrapolation issues.

  17. Estimating cardiovascular disease incidence from prevalence: a spreadsheet based model

    Directory of Open Access Journals (Sweden)

    Xue Feng Hu

    2017-01-01

    Full Text Available Abstract Background Disease incidence and prevalence are both core indicators of population health, but incidence is generally not as readily accessible as prevalence. Cohort studies and electronic health record systems are the two major ways to estimate disease incidence; the former is time-consuming and expensive, and the latter is not available in most developing countries. Alternatively, mathematical models can be used to estimate disease incidence from prevalence. Methods We proposed and validated a method to estimate the age-standardized incidence of cardiovascular disease (CVD), with prevalence data from successive surveys and mortality data from empirical studies. Hallett's method, designed for estimating HIV infections in Africa, was modified to estimate the incidence of myocardial infarction (MI) in the U.S. population and the incidence of heart disease in the Canadian population. Results Model-derived estimates were in close agreement with observed incidence from cohort studies and population surveillance systems. The method correctly captured the trend in incidence given sufficient waves of cross-sectional surveys, and the estimated MI decline rate in the U.S. population was in accordance with the literature. The method was superior to a closed-cohort approach for estimating trends in population cardiovascular disease incidence. Conclusion It is possible to estimate CVD incidence accurately at the population level from cross-sectional prevalence data. The method has the potential to be used for age- and sex-specific incidence estimates, or to be expanded to other chronic conditions.
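
    A crude sketch of the bookkeeping involved (not Hallett's full method): follow a closed cohort between two surveys, remove prevalent cases that die, and attribute the remaining growth in prevalence to incident cases. The survival fractions are hypothetical stand-ins for the mortality inputs.

        def incidence_from_prevalence(p1, p2, years, surv_diseased, surv_free):
            """p1, p2: prevalence at surveys 1 and 2; surv_*: fraction of
            the diseased / disease-free groups surviving the interval.
            Returns an approximate mean annual incidence risk."""
            diseased_t2 = p1 * surv_diseased              # surviving old cases
            alive_t2 = diseased_t2 + (1.0 - p1) * surv_free
            new_cases = p2 * alive_t2 - diseased_t2       # incident survivors
            risk = new_cases / ((1.0 - p1) * surv_free)
            return risk / years                           # per-year approximation

        print(incidence_from_prevalence(p1=0.05, p2=0.07, years=5,
                                        surv_diseased=0.85, surv_free=0.97))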

  18. Estimation of site effects in Beijing City

    International Nuclear Information System (INIS)

    Ding, Z.; Chen, Y.T.; Panza, G.F.

    2002-01-01

    For the realistic modeling of the seismic ground motion in lateral heterogeneous anelastic media, the database of 3-D geophysical structures for Beijing City has been built up to model the seismic ground motion in the City, caused by the 1976 Tangshan and the 1998 Zhangbei earthquakes. The hybrid method, which combines the modal summation and the finite difference algorithms, is used in the simulation. The modeling of the seismic ground motion for both the Tangshan and the Zhangbei earthquakes shows that the thick Quaternary sedimentary cover amplifies the peak values and increases the duration of the seismic ground motion in the northwest part of the City. Therefore the thickness of the Quaternary sediments in Beijing City is the key factor that controls the local ground effects, and four zones are defined on the basis of the different thicknesses of the Quaternary sediments. The response spectra for each zone are computed, indicating that peak spectral values as high as 0.1g are compatible with past seismicity and can be well exceeded if an event similar to the 1697 Sanhe-Pinggu occurs. (author)

  19. Estimation of Site Effects in Beijing City

    Science.gov (United States)

    Ding, Z.; Chen, Y. T.; Panza, G. F.

    For the realistic modeling of the seismic ground motion in lateral heterogeneous anelastic media, the database of 3-D geophysical structures for Beijing City has been built up to model the seismic ground motion in the City, caused by the 1976 Tangshan and the 1998 Zhangbei earthquakes. The hybrid method, which combines the modal summation and the finite-difference algorithms, is used in the simulation. The modeling of the seismic ground motion, for both the Tangshan and the Zhangbei earthquakes, shows that the thick Quaternary sedimentary cover amplifies the peak values and increases the duration of the seismic ground motion in the northwestern part of the City. Therefore the thickness of the Quaternary sediments in Beijing City is the key factor controlling the local ground effects. Four zones are defined on the basis of the different thicknesses of the Quaternary sediments. The response spectra for each zone are computed, indicating that peak spectral values as high as 0.1 g are compatible with past seismicity and can be well exceeded if an event similar to the 1697 Sanhe-Pinggu occurs.

  20. Asymptotics for Estimating Equations in Hidden Markov Models

    DEFF Research Database (Denmark)

    Hansen, Jørgen Vinsløv; Jensen, Jens Ledet

    Results on asymptotic normality for the maximum likelihood estimate in hidden Markov models are extended in two directions. The stationarity assumption is relaxed, which allows for a covariate process influencing the hidden Markov process. Furthermore a class of estimating equations is considered...

  1. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  2. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  3. Person Appearance Modeling and Orientation Estimation using Spherical Harmonics

    NARCIS (Netherlands)

    Liem, M.C.; Gavrila, D.M.

    2013-01-01

    We present a novel approach for the joint estimation of a person's overall body orientation, 3D shape and texture, from overlapping cameras. Overall body orientation (i.e. rotation around torso major axis) is estimated by minimizing the difference between a learned texture model in a canonical

  4. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)

    We present a Bayesian method for estimating small area parameters under an inverse Gaussian model. The method is extended to estimate small area parameters for finite populations. The Gibbs sampler is proposed as a mechanism for implementing the Bayesian paradigm. We illustrate the method by application to ...

  5. Performances of estimators of linear auto-correlated error model ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) compares favourably with the Generalized least Squares (GLS) estimators in ...

  6. Nonparametric volatility density estimation for discrete time models

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2005-01-01

    We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed

  7. E-model MOS Estimate Improvement through Jitter Buffer Packet Loss Modelling

    Directory of Open Access Journals (Sweden)

    Adrian Kovac

    2011-01-01

    Full Text Available Proposed article analyses dependence of MOS as a voice call quality (QoS measure estimated through ITU-T E-model under real network conditions with jitter. In this paper, a method of jitter effect is proposed. Jitter as voice packet time uncertainty appears as increased packet loss caused by jitter memory buffer under- or overflow. Jitter buffer behaviour at receiver’s side is modelled as Pareto/D/1/K system with Pareto-distributed packet interarrival times and its performance is experimentally evaluated by using statistic tools. Jitter buffer stochastic model is then incorporated into E-model in an additive manner accounting for network jitter effects via excess packet loss complementing measured network packet loss. Proposed modification of E-model input parameter adds two degrees of freedom in modelling: network jitter and jitter buffer size.
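
The queueing description above lends itself to direct simulation. Below is a minimal, hedged Python sketch of a Pareto/D/1/K system: Pareto-distributed interarrival times feed a single-server buffer with deterministic playout time and capacity K, and the overflow fraction approximates the excess packet loss that would be added to the measured network loss in the modified E-model. All numerical values (shape, scale, playout period, buffer size) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def pareto_interarrivals(n, shape=2.5, scale=0.018):
    """Pareto(shape) interarrival times in seconds, minimum value = scale."""
    # numpy's pareto() draws the Lomax form; shift by 1 and rescale
    return scale * (1.0 + rng.pareto(shape, n))

def simulate_buffer(n_packets=100_000, playout=0.020, K=10):
    """Event-driven Pareto/D/1/K queue; returns the dropped-packet fraction."""
    arrivals = np.cumsum(pareto_interarrivals(n_packets))
    departures = []          # scheduled departure times of buffered packets
    dropped = 0
    for t in arrivals:
        departures = [d for d in departures if d > t]   # drain the buffer
        if len(departures) >= K:
            dropped += 1     # overflow -> excess loss added to network loss
            continue
        start = departures[-1] if departures else t
        departures.append(max(start, t) + playout)
    return dropped / n_packets

print(f"excess packet loss from jitter buffer: {simulate_buffer():.4%}")
```

The estimated excess loss would then simply be added to the measured packet loss before evaluating the E-model's equipment impairment factor.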

  8. Parameter Estimates in Differential Equation Models for Population Growth

    Science.gov (United States)

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
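
Since the article's code is in Mathematica, a Python equivalent may be useful. The sketch below fits the logistic model dy/dt = r y (1 - y/K) to a small made-up data set by nonlinear least squares rather than the gradient search discussed in the article; the observations and starting values are invented for illustration.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

# Hypothetical observations: time (days) and population size
t_obs = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
y_obs = np.array([2.0, 4.5, 9.8, 19.0, 31.0, 40.5, 45.2])

def logistic(y, t, r, K):
    """Logistic growth right-hand side: dy/dt = r*y*(1 - y/K)."""
    return r * y * (1 - y / K)

def residuals(params):
    r, K, y0 = params
    y_fit = odeint(logistic, y0, t_obs, args=(r, K)).ravel()
    return y_fit - y_obs

fit = least_squares(residuals, x0=[1.0, 50.0, 2.0])
r, K, y0 = fit.x
print(f"r = {r:.3f}, K = {K:.2f}, y0 = {y0:.2f}")
```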

  9. Review Genetic prediction models and heritability estimates for ...

    African Journals Online (AJOL)

    edward

    2015-05-09

    May 9, 2015 ... Instead, through stepwise inclusion of type traits in the PH model, the ... Great Britain uses a bivariate animal model for all breeds ... (Štípková, 2012) and then applying linear models to the combined datasets with the ... multivariate analyses, it is difficult to use indicator traits to estimate longevity early in life ...

  10. Parameter estimation of electricity spot models from futures prices

    NARCIS (Netherlands)

    Aihara, ShinIchi; Bagchi, Arunabha; Imreizeeq, E.S.N.; Walter, E.

    We consider a slight perturbation of the Schwartz-Smith model for the electricity futures prices and the resulting modified spot model. Using the martingale property of the modified price under the risk neutral measure, we derive the arbitrage free model for the spot and futures prices. We estimate

  11. Estimating the Competitive Storage Model with Trending Commodity Prices

    OpenAIRE

    Gouel , Christophe; LEGRAND , Nicolas

    2017-01-01

    We present a method to estimate jointly the parameters of a standard commodity storage model and the parameters characterizing the trend in commodity prices. This procedure allows the influence of a possible trend to be removed without restricting the model specification, and allows model and trend selection based on statistical criteria. The trend is modeled deterministically using linear or cubic spline functions of time. The results show that storage models with trend are always preferred ...

  12. Development of simple kinetic models and parameter estimation for ...

    African Journals Online (AJOL)

    PANCHIGA

    2016-09-28

    Sep 28, 2016 ... estimation for simulation of recombinant human serum albumin ... and recombinant protein production by P. pastoris without requiring complex models. Key words: ... SDS-PAGE and showed the same molecular size as ...

  13. COPS model estimates of LLEA availability near selected reactor sites

    International Nuclear Information System (INIS)

    Berkbigler, K.P.

    1979-11-01

    The COPS computer model has been used to estimate local law enforcement agency (LLEA) officer availability in the neighborhood of selected nuclear reactor sites. The results of these analyses are presented in both graphic and tabular form in this report.

  14. Censored rainfall modelling for estimation of fine-scale extremes

    Science.gov (United States)

    Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro

    2018-01-01

    Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have had a tendency to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5-, 15-, and 60-minute resolutions, and considerations for censor selection are discussed.

  15. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    Empirical model for estimating the surface roughness of machined ... as well as surface finish is one of the most critical quality measure in mechanical products. ... various cutting speed have been developed using regression analysis software.

  16. Applicability of genetic algorithms to parameter estimation of economic models

    Directory of Open Access Journals (Sweden)

    Marcel Ševela

    2004-01-01

    Full Text Available The paper concentrates on the capability of genetic algorithms for parameter estimation of non-linear economic models. We test the ability of genetic algorithms to estimate the parameters of a demand function for durable goods, and simultaneously search for the genetic-algorithm settings that maximize the effectiveness of the computation. Genetic algorithms combine deterministic iterative computation methods with stochastic methods. In the genetic algorithm approach, each candidate solution is represented by one individual, and the lives of all generations of individuals are governed by a few algorithm parameters. Our simulations resulted in an optimal mutation rate of 15% of all bits in chromosomes and an optimal elitism rate of 20%. We could not determine an optimal generation size, because generation size was positively correlated with the effectiveness of the genetic algorithm over the whole range under research, although its impact is decreasing. The genetic algorithm used was most sensitive to the mutation rate, then to the generation size; the sensitivity to the elitism rate was not as strong.
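
A minimal binary-encoded genetic algorithm along these lines, using the reported 15% bit-mutation and 20% elitism rates, is sketched below. The demand function is stood in for by a made-up exponential model; population size, generation count, and tournament selection are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target: fit y = a*exp(-b*x) to data; the GA searches (a, b) in [0, 10]^2
x = np.linspace(0, 5, 30)
y = 3.2 * np.exp(-0.7 * x) + rng.normal(0, 0.05, x.size)

BITS, POP, GENS = 16, 60, 120
MUT_RATE, ELITE = 0.15, 0.20   # rates reported as optimal in the article

def decode(bits):
    """Map each 16-bit block of the chromosome to a real parameter in [0, 10]."""
    vals = bits.reshape(-1, BITS) @ (2.0 ** np.arange(BITS)[::-1])
    return 10.0 * vals / (2**BITS - 1)

def fitness(bits):
    a, b = decode(bits)
    return -np.sum((a * np.exp(-b * x) - y) ** 2)

pop = rng.integers(0, 2, (POP, 2 * BITS))
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    elite = pop[order[: int(ELITE * POP)]]          # elitism: keep the best 20%
    children = []
    while len(children) < POP - len(elite):
        # tournament selection: better-ranked of two random picks, twice
        i, j = rng.integers(0, POP, 2)
        p1 = pop[order[min(i, j)]]
        k, l = rng.integers(0, POP, 2)
        p2 = pop[order[min(k, l)]]
        cut = rng.integers(1, 2 * BITS)             # single-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        flip = rng.random(2 * BITS) < MUT_RATE      # 15% bit-flip mutation
        child = np.where(flip, 1 - child, child)
        children.append(child)
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("estimated (a, b):", decode(best))
```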

  17. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

    We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exp...

  18. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    Science.gov (United States)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through the models' marginal likelihoods and prior probabilities. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computational burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment on a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: marginal likelihoods repeatedly estimated by TIE have significantly less variability than those estimated by the other estimators. In addition, the SG surrogates are efficient in facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
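
The arithmetic and harmonic mean estimators named above are easy to state concretely. The toy below uses a conjugate normal model whose marginal likelihood is known in closed form, so both estimators can be checked against the truth; the stabilized and thermodynamic variants are omitted for brevity, and the whole setup is illustrative rather than the study's groundwater model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Toy model: y ~ N(theta, 1), prior theta ~ N(0, 1); evidence is analytic
y = 1.3
evidence = stats.norm.pdf(y, loc=0.0, scale=np.sqrt(2.0))   # y ~ N(0, 2)

n = 100_000
# Arithmetic mean estimator (AME): average likelihood over prior draws
theta_prior = rng.normal(0.0, 1.0, n)
ame = stats.norm.pdf(y, theta_prior, 1.0).mean()

# Harmonic mean estimator (HME): harmonic mean of likelihood over posterior
post_mean, post_sd = y / 2.0, np.sqrt(0.5)                  # conjugate posterior
theta_post = rng.normal(post_mean, post_sd, n)
hme = 1.0 / (1.0 / stats.norm.pdf(y, theta_post, 1.0)).mean()

print(f"analytic: {evidence:.5f}  AME: {ame:.5f}  HME: {hme:.5f}")
```

The HME's notoriously high variance on harder problems is one motivation for the stabilized variant and for thermodynamic integration.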

  19. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.

  20. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin

    2017-12-16

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
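
As a concrete baseline for the residual-variance problem discussed in these two entries, the classical first-order difference-based (Rice-type) estimator is sketched below; the paper's estimator, which fits a regression to several difference-based estimators, is more elaborate. The test function and noise level are made up.

```python
import numpy as np

def rice_variance(y):
    """First-order difference-based estimator of the residual variance:
    sigma^2 ~ sum (y_{i+1} - y_i)^2 / (2 (n - 1))."""
    d = np.diff(np.asarray(y, dtype=float))
    return np.sum(d**2) / (2.0 * d.size)

# Partially linear toy: smooth trend plus noise with known variance 0.04
rng = np.random.default_rng(8)
x = np.linspace(0, 1, 500)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)
print(f"estimated sigma^2: {rice_variance(y):.4f} (true 0.0400)")
```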

  1. Estimating Effects of Species Interactions on Populations of Endangered Species.

    Science.gov (United States)

    Roth, Tobias; Bühler, Christoph; Amrhein, Valentin

    2016-04-01

    Global change causes community composition to change considerably through time, with ever-new combinations of interacting species. To study the consequences of newly established species interactions, one available source of data could be observational surveys from biodiversity monitoring. However, approaches using observational data would need to account for niche differences between species and for imperfect detection of individuals. To estimate population sizes of interacting species, we extended N-mixture models that were developed to estimate true population sizes in single species. Simulations revealed that our model is able to disentangle direct effects of dominant on subordinate species from indirect effects of dominant species on detection probability of subordinate species. For illustration, we applied our model to data from a Swiss amphibian monitoring program and showed that sizes of expanding water frog populations were negatively related to population sizes of endangered yellow-bellied toads and common midwife toads and partly of natterjack toads. Unlike other studies that analyzed presence and absence of species, our model suggests that the spread of water frogs in Central Europe is one of the reasons for the decline of endangered toad species. Thus, studying population impacts of dominant species on population sizes of endangered species using data from biodiversity monitoring programs should help to inform conservation policy and to decide whether competing species should be subject to population management.

  2. A metabolism-based whole lake eutrophication model to estimate the magnitude and time scales of the effects of restoration in Upper Klamath Lake, south-central Oregon

    Science.gov (United States)

    Wherry, Susan A.; Wood, Tamara M.

    2018-04-27

    A whole lake eutrophication (WLE) model approach for phosphorus and cyanobacterial biomass in Upper Klamath Lake, south-central Oregon, is presented here. The model is a successor to a previous model developed to inform a Total Maximum Daily Load (TMDL) for phosphorus in the lake, but is based on net primary production (NPP), which can be calculated from dissolved oxygen, rather than scaling up a small-scale description of cyanobacterial growth and respiration rates. This phase 3 WLE model is a refinement of the proof-of-concept developed in phase 2, which was the first attempt to use NPP to simulate cyanobacteria in the TMDL model. The calibration of the calculated NPP WLE model was successful, with performance metrics indicating a good fit to calibration data, and the calculated NPP WLE model was able to simulate mid-season bloom decreases, a feature that previous models could not reproduce. In order to use the model to simulate future scenarios based on phosphorus load reduction, a multivariate regression model was created to simulate NPP as a function of the model state variables (phosphorus and chlorophyll a) and measured meteorological and temperature model inputs. The NPP time series was split into a low- and high-frequency component using wavelet analysis, and regression models were fit to the components separately, with moderate success. The regression models for NPP were incorporated in the WLE model, referred to as the “scenario” WLE (SWLE), and the fit statistics for phosphorus during the calibration period were mostly unchanged. The fit statistics for chlorophyll a, however, were degraded. These statistics are still an improvement over prior models, and indicate that the SWLE is appropriate for long-term predictions even though it misses some of the seasonal variations in chlorophyll a. The complete whole lake SWLE model, with multivariate regression to predict NPP, was used to make long-term simulations of the response to 10-, 20-, and 40-percent

  3. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories.

  4. Estimating the effect of treatment rate changes when treatment benefits are heterogeneous: antibiotics and otitis media.

    Science.gov (United States)

    Park, Tae-Ryong; Brooks, John M; Chrischilles, Elizabeth A; Bergus, George

    2008-01-01

    Contrast methods to assess the health effects of a treatment rate change when treatment benefits are heterogeneous across patients. Antibiotic prescribing for children with otitis media (OM) in Iowa Medicaid is the empirical example. Instrumental variable (IV) and linear probability model (LPM) are used to estimate the effect of antibiotic treatments on cure probabilities for children with OM in Iowa Medicaid. Local area physician supply per capita is the instrument in the IV models. Estimates are contrasted in terms of their ability to make inferences for patients whose treatment choices may be affected by a change in population treatment rates. The instrument was positively related to the probability of being prescribed an antibiotic. LPM estimates showed a positive effect of antibiotics on OM patient cure probability while IV estimates showed no relationship between antibiotics and patient cure probability. Linear probability model estimation yields the average effects of the treatment on patients that were treated. IV estimation yields the average effects for patients whose treatment choices were affected by the instrument. As antibiotic treatment effects are heterogeneous across OM patients, our estimates from these approaches are aligned with clinical evidence and theory. The average estimate for treated patients (higher severity) from the LPM model is greater than estimates for patients whose treatment choices are affected by the instrument (lower severity) from the IV models. Based on our IV estimates it appears that lowering antibiotic use in OM patients in Iowa Medicaid did not result in lost cures.
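
To make the LPM-versus-IV contrast concrete, the sketch below simulates a hypothetical unobserved-severity confounder and compares a naive linear probability fit with a manual two-stage least squares (2SLS) estimate using a supply-like instrument. Variable names and effect sizes are invented for illustration, and the nonlinear data-generating process means neither estimator recovers the structural coefficient exactly; the point is the direction of the divergence.

```python
import numpy as np

def two_stage_least_squares(y, x, z):
    """Minimal 2SLS: endogenous regressor x, instrument z, intercept only."""
    n = y.size
    Z = np.column_stack([np.ones(n), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # first stage
    X2 = np.column_stack([np.ones(n), x_hat])
    beta = np.linalg.lstsq(X2, y, rcond=None)[0]       # second stage
    return beta[1]

rng = np.random.default_rng(3)
n = 20_000
supply = rng.normal(size=n)                  # instrument: physician supply
severity = rng.normal(size=n)                # unobserved confounder
antibiotic = (0.5 * supply + severity + rng.normal(size=n) > 0).astype(float)
cure = (0.3 * antibiotic - severity + rng.normal(size=n) > 0).astype(float)

ols = np.polyfit(antibiotic, cure, 1)[0]     # naive LPM slope, confounded
iv = two_stage_least_squares(cure, antibiotic, supply)
print(f"LPM estimate: {ols:.3f}   IV (2SLS) estimate: {iv:.3f}")
```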

  5. Two-Level Designs to Estimate All Main Effects and Two-Factor Interactions

    NARCIS (Netherlands)

    Eendebak, P.T.; Schoen, E.D.

    2017-01-01

    We study the design of two-level experiments with N runs and n factors large enough to estimate the interaction model, which contains all the main effects and all the two-factor interactions. Yet, an effect hierarchy assumption suggests that main effect estimation should be given more prominence

  6. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
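
The difference between the full and simple statistical likelihood models can be written compactly. The sketch below, under the assumption of Gaussian errors with standard deviation sigma and AR(1) coefficient phi, evaluates both log-likelihoods for a given series of simulation errors; it mirrors the structure described above rather than the Ecomag implementation itself, and the residuals are made up.

```python
import numpy as np

def ar1_loglik(errors, sigma, phi):
    """Full model: errors follow a stationary Gaussian AR(1) process."""
    e = np.asarray(errors, dtype=float)
    var0 = sigma**2 / (1.0 - phi**2)               # stationary variance
    ll = -0.5 * (np.log(2 * np.pi * var0) + e[0] ** 2 / var0)
    resid = e[1:] - phi * e[:-1]                   # one-step innovations
    ll -= 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + resid**2 / sigma**2)
    return ll

def iid_loglik(errors, sigma):
    """Simple model: errors independent N(0, sigma^2), no AR part."""
    e = np.asarray(errors, dtype=float)
    return -0.5 * np.sum(np.log(2 * np.pi * sigma**2) + e**2 / sigma**2)

e = np.array([0.3, 0.25, 0.1, -0.05, -0.2, -0.1])  # made-up residuals
print(ar1_loglik(e, sigma=0.15, phi=0.6), iid_loglik(e, sigma=0.2))
```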

  7. Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution

    Directory of Open Access Journals (Sweden)

    Emmanuel Kidando

    2017-01-01

    Full Text Available Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. The literature indicates that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of the travel time distribution to an unbounded lognormal mixture. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC) sampling technique was employed to estimate the parameters’ posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in modeling to account for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of onset and offset of congestion periods.
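
For readers who want to try the idea, scikit-learn's truncated Dirichlet-process mixture offers a quick stand-in; note that it uses variational inference rather than the MCMC sampling described above, and the travel times below are simulated, not the freeway corridor data. Working in log space makes the lognormal states Gaussian.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(4)
# Hypothetical corridor travel times (minutes): free-flow and congested states
tt = np.concatenate([rng.lognormal(2.0, 0.10, 600),    # ~7.4 min free flow
                     rng.lognormal(2.6, 0.15, 400)])   # ~13.5 min congested

# Truncated Dirichlet process mixture (stick-breaking), capped at 6 components
dpmm = BayesianGaussianMixture(
    n_components=6,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(np.log(tt).reshape(-1, 1))   # lognormal states -> Gaussian in log space

active = dpmm.weights_ > 0.01      # components the stick-breaking prior kept
print("effective number of states:", active.sum())
print("state means (min):", np.exp(dpmm.means_.ravel()[active]))
```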

  8. On estimating the effective diffusive properties of hardened cement pastes

    International Nuclear Information System (INIS)

    Stora, E.; Bary, B.; Stora, E.; He, Qi-Chang

    2008-01-01

    The effective diffusion coefficients of hardened cement pastes can vary by a few orders of magnitude. The paper aims at building a homogenization model to estimate these macroscopic diffusivities and capture such strong variations. For this purpose, a three-scale description of the paste is proposed, relying mainly on the fact that the initial cement grains hydrate, forming a complex microstructure with a multi-scale pore structure. In particular, porosity is found to be well connected at a fine scale. However, only a few homogenization schemes are shown to be adequate to account for such connectivity. Among them, the mixed composite spheres assemblage estimate (Stora, E., He, Q.-C., Bary, B.: J. Appl. Phys. 100(8), 084910, 2006a) seems to be the only one that always complies with rigorous bounds and is consequently employed to predict the effects of this fine porosity on the material's effective diffusivities. The proposed model provides predictions in good agreement with experimental results and is consistent with the numerous measurements of critical pore diameters issued from mercury intrusion porosimetry tests. The evolution of the effective diffusivities of cement pastes subjected to leaching is also assessed by adopting a simplified scenario of the decalcification process. (authors)
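
As a point of reference for homogenization estimates of this kind, the classical Maxwell (Maxwell-Garnett) approximation for spherical inclusions is easy to state. It is not the mixed composite spheres assemblage estimate used in the paper, and it notably cannot represent a well-connected fine porosity, which is precisely why more elaborate schemes are needed; the numbers below are purely illustrative.

```python
def maxwell_effective_diffusivity(D_matrix, D_incl, f):
    """Maxwell(-Garnett) estimate of the effective diffusivity of a matrix
    containing a volume fraction f of spherical inclusions."""
    num = D_incl + 2 * D_matrix + 2 * f * (D_incl - D_matrix)
    den = D_incl + 2 * D_matrix - f * (D_incl - D_matrix)
    return D_matrix * num / den

# Illustrative numbers only: water-filled pores in a dense solid matrix (m^2/s)
print(maxwell_effective_diffusivity(D_matrix=1e-12, D_incl=2e-9, f=0.2))
```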

  9. Estimating model parameters in nonautonomous chaotic systems using synchronization

    International Nuclear Information System (INIS)

    Yang, Xiaoli; Xu, Wei; Sun, Zhongkui

    2007-01-01

    In this Letter, a technique is presented for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. The technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems, and then some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are analytically derived to ensure precise evaluation of the unknown parameters and identical synchronization between the experimental system concerned and its corresponding receiver. Examples are presented employing a new parametrically excited 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of the noise strength in simulation.

  10. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located 50°03‧N, 12°40‧E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and with the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  11. Estimation of the Thurstonian model for the 2-AC protocol

    DEFF Research Database (Denmark)

    Christensen, Rune Haubo Bojesen; Lee, Hye-Seong; Brockhoff, Per B.

    2012-01-01

    The 2-AC protocol is a 2-AFC protocol with a “no-difference” option and is technically identical to the paired preference test with a “no-preference” option. The Thurstonian model for the 2-AC protocol is parameterized by δ and a decision parameter τ, the estimates of which can be obtained by fairly simple well-known methods. In this paper we describe how standard errors of the parameters can be obtained and how exact power computations can be performed. We also show how the Thurstonian model for the 2-AC protocol is closely related to a statistical model known as a cumulative probit model. This relationship makes it possible to extract estimates and standard errors of δ and τ from general statistical software, and furthermore, it makes it possible to combine standard regression modelling with the Thurstonian model for the 2-AC protocol. A model for replicated 2-AC data is proposed using cumulative...
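
Under the standard Thurstonian parameterization (difference variable normal with mean δ and standard deviation √2, and a “no-difference” band of half-width τ), the maximum likelihood estimates take a few lines of optimization. The counts below are invented; standard errors, as discussed in the paper, could be added by inverting the observed information at the optimum.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical 2-AC counts: (prefer A, no difference, prefer B)
counts = np.array([39, 17, 44])

def nll(params):
    """Negative log-likelihood of the Thurstonian 2-AC model."""
    delta, tau = params
    if tau <= 0:
        return np.inf
    s = np.sqrt(2.0)                     # sd of the difference of two N(., 1)
    pA = norm.cdf((-tau - delta) / s)
    pN = norm.cdf((tau - delta) / s) - pA
    pB = 1.0 - pA - pN
    return -np.sum(counts * np.log([pA, pN, pB]))

fit = minimize(nll, x0=[0.0, 0.1], method="Nelder-Mead")
delta_hat, tau_hat = fit.x
print(f"delta = {delta_hat:.3f}, tau = {tau_hat:.3f}")
```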

  12. Bayesian analysis for uncertainty estimation of a canopy transpiration model

    Science.gov (United States)

    Samanta, S.; Mackay, D. S.; Clayton, M. K.; Kruger, E. L.; Ewers, B. E.

    2007-04-01

    A Bayesian approach was used to fit a conceptual transpiration model to half-hourly transpiration rates for a sugar maple (Acer saccharum) stand collected over a 5-month period and probabilistically estimate its parameter and prediction uncertainties. The model used the Penman-Monteith equation with the Jarvis model for canopy conductance. This deterministic model was extended by adding a normally distributed error term. This extension enabled using Markov chain Monte Carlo simulations to sample the posterior parameter distributions. The residuals revealed approximate conformance to the assumption of normally distributed errors. However, minor systematic structures in the residuals at fine timescales suggested model changes that would potentially improve the modeling of transpiration. Results also indicated considerable uncertainties in the parameter and transpiration estimates. This simple methodology of uncertainty analysis would facilitate the deductive step during the development cycle of deterministic conceptual models by accounting for these uncertainties while drawing inferences from data.

  13. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased, which can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  14. [Using log-binomial model for estimating the prevalence ratio].

    Science.gov (United States)

    Ye, Rong; Gao, Yan-hui; Yang, Yi; Chen, Yue

    2010-05-01

    To estimate prevalence ratios using a log-binomial model with or without continuous covariates. Prevalence ratios for individuals' attitude towards smoking-ban legislation associated with smoking status, estimated using a log-binomial model, were compared with odds ratios estimated by a logistic regression model. In the log-binomial modeling, the maximum likelihood method was used when there were no continuous covariates, and the COPY approach was used if the model did not converge, for example due to the presence of continuous covariates. We examined the association between individuals' attitude towards smoking-ban legislation and smoking status in men and women. Prevalence ratio and odds ratio estimation provided similar results for the association in women, since smoking was not common. In men, however, the odds ratio estimates were markedly larger than the prevalence ratios due to the higher prevalence of the outcome. The log-binomial model did not converge when age was included as a continuous covariate, and the COPY method was used to deal with this situation. All analyses were performed in SAS. The prevalence ratio seemed to measure the association better than the odds ratio when the prevalence is high. SAS programs were provided to calculate prevalence ratios with or without continuous covariates in log-binomial regression analysis.
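
The same model is available outside SAS. A minimal sketch with Python's statsmodels is shown below on made-up counts: a binomial family with a log link yields coefficients whose exponentials are prevalence ratios. As in the article, convergence can fail once continuous covariates are added, in which case workarounds such as the COPY method are needed.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: attitude toward a smoking ban (1 = supportive) by smoking
df = pd.DataFrame({
    "supportive": np.r_[np.ones(120), np.zeros(80),    # 200 smokers: 60%
                        np.ones(300), np.zeros(100)],  # 400 non-smokers: 75%
    "smoker":     np.r_[np.ones(200), np.zeros(400)],
})
X = sm.add_constant(df[["smoker"]])

# Log-binomial model: binomial family with a log link gives prevalence ratios
fit = sm.GLM(df["supportive"], X,
             family=sm.families.Binomial(link=sm.families.links.Log())).fit()
print("prevalence ratio (smokers vs non):", np.exp(fit.params["smoker"]))
```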

  15. Bayesian parameter estimation for stochastic models of biological cell migration

    Science.gov (United States)

    Dieterich, Peter; Preuss, Roland

    2013-08-01

    Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors or the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically stochastic models are applied where parameters are extracted by fitting models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure directly relying on the positions of the cell's trajectory and the covariance matrix of the positions. It is shown that the covariance is identical with the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data showing a reliable parameter estimation from single cell paths.
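
For the simplest of the models mentioned, Brownian motion with drift, the parameters can indeed be read off the path itself rather than an MSD fit. The sketch below simulates a made-up 1-D path and recovers drift and diffusion from the increments; the fractional Brownian case and the full covariance-matrix machinery of the paper are beyond this toy.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a 1-D cell path: Brownian motion with drift (assumed ground truth)
dt, n = 0.5, 400                 # minutes per step, number of steps
v_true, D_true = 0.8, 1.5        # drift (um/min), diffusion (um^2/min)
steps = v_true * dt + np.sqrt(2 * D_true * dt) * rng.normal(size=n)
x = np.concatenate([[0.0], np.cumsum(steps)])

# Estimate directly from the increments of the path (not from the MSD curve)
inc = np.diff(x)
v_hat = inc.mean() / dt
D_hat = inc.var(ddof=1) / (2 * dt)
print(f"drift: {v_hat:.2f} (true {v_true}), D: {D_hat:.2f} (true {D_true})")
```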

  16. On the Estimation of Standard Errors in Cognitive Diagnosis Models

    Science.gov (United States)

    Philipp, Michel; Strobl, Carolin; de la Torre, Jimmy; Zeileis, Achim

    2018-01-01

    Cognitive diagnosis models (CDMs) are an increasingly popular method to assess mastery or nonmastery of a set of fine-grained abilities in educational or psychological assessments. Several inference techniques are available to quantify the uncertainty of model parameter estimates, to compare different versions of CDMs, or to check model…

  17. Estimation of pure autoregressive vector models for revenue series ...

    African Journals Online (AJOL)

    This paper aims at applying multivariate approach to Box and Jenkins univariate time series modeling to three vector series. General Autoregressive Vector Models with time varying coefficients are estimated. The first vector is a response vector, while others are predictor vectors. By matrix expansion each vector, whether ...

  18. Vacuum expectation values for four-fermion operators. Model estimates

    International Nuclear Information System (INIS)

    Zhitnitskij, A.R.

    1985-01-01

    Some simple models (a system with a heavy quark, the rarefied instanton gas) are used to investigate the problem of factorizability. Characteristics of the vacuum fluctuations responsible for saturation of the four-fermion vacuum expectation values which are known phenomenologically are discussed. A qualitative agreement between the model and phenomenological estimates has been noted.

  19. Vacuum expectation values of four-fermion operators. Model estimates

    International Nuclear Information System (INIS)

    Zhitnitskii, A.R.

    1985-01-01

    Simple models (a system with a heavy quark, a rarefied instanton gas) are used to study problems of factorizability. A discussion is given of the characteristics of the vacuum fluctuations responsible for saturation of the phenomenologically known four-fermion vacuum expectation values. Qualitative agreement between the model and phenomenological estimates is observed

  20. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.

  1. Simplification of an MCNP model designed for dose rate estimation

    Science.gov (United States)

    Laptev, Alexander; Perry, Robert

    2017-09-01

    A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  2. Simplification of an MCNP model designed for dose rate estimation

    Directory of Open Access Journals (Sweden)

    Laptev Alexander

    2017-01-01

    Full Text Available A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  3. Improved air ventilation rate estimation based on a statistical model

    International Nuclear Information System (INIS)

    Brabec, M.; Jilek, K.

    2004-01-01

    A new approach to air ventilation rate estimation from CO measurement data is presented. The approach is based on a state-space dynamic statistical model, allowing for quick and efficient estimation. Underlying computations are based on Kalman filtering, whose practical software implementation is rather easy. The key property is the flexibility of the model, allowing various artificial regimens of CO level manipulation to be treated. The model is semi-parametric in nature and can efficiently handle time-varying ventilation rate. This is a major advantage, compared to some of the methods which are currently in practical use. After a formal introduction of the statistical model, its performance is demonstrated on real data from routine measurements. It is shown how the approach can be utilized in a more complex situation of major practical relevance, when time-varying air ventilation rate and radon entry rate are to be estimated simultaneously from concurrent radon and CO measurements
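
A scalar Kalman filter of the kind described is compact. In the hedged sketch below, the ventilation (air-exchange) rate is a random-walk state observed through successive log-concentration differences of a synthetic tracer decay; the noise levels, tuning constants, and the mid-series step change are all invented, simply to show that such a filter tracks a time-varying rate.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic tracer decay with a ventilation rate that changes halfway through
dt, n = 1 / 60, 240                    # hours per sample, 4 h of data
lam_true = np.r_[np.full(n // 2, 0.5), np.full(n // 2, 2.0)]  # air changes/h
C = 50 * np.exp(-np.cumsum(lam_true) * dt) * np.exp(rng.normal(0, 0.01, n))

# Observation: y_k = log(C_{k-1}) - log(C_k) = lam_k * dt + noise (linear!)
y = -np.diff(np.log(C), prepend=np.log(50.0))

# Scalar Kalman filter: lam follows a random walk, measurement matrix H = dt
lam_hat, P = 1.0, 1.0                  # initial state and variance
Q, R = 1e-3, 2e-4                      # process / measurement noise (tuned)
est = []
for yk in y:
    P += Q                             # predict (random-walk state)
    K = P * dt / (dt * dt * P + R)     # Kalman gain
    lam_hat += K * (yk - dt * lam_hat) # update with the innovation
    P *= (1 - K * dt)
    est.append(lam_hat)

print("estimated rate, first/second half:",
      np.mean(est[: n // 2]).round(2), np.mean(est[n // 2:]).round(2))
```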

  4. Evaluation of Rock Stress Estimation by the Kaiser effect

    International Nuclear Information System (INIS)

    Lehtonen, A.

    2005-11-01

    The knowledge of in situ stress is a key input parameter in many rock mechanics analyses. Information on stress allows the definition of boundary conditions for various modelling and engineering tasks. Presently, the estimation of stresses in bedrock is one of the most difficult, time-consuming and expensive rock mechanical investigations. In addition, the methods used today have not evolved significantly in many years. This brings out a demand for novel, more economical and practical methods for stress estimation. In this study, one such method, the Kaiser effect, based on the acoustic emission of core samples, has been evaluated. It can be described as a 'memory' in rock that is indicated by a change in the acoustic emission emitted during a uniaxial loading test. The most tempting feature of this method is the ability to estimate the in situ stress state from core specimens in laboratory conditions. This yields considerable cost savings compared to laborious borehole measurements. The Kaiser effect has been studied for decades as a means of determining in situ stresses, without any major success. However, recent studies in Australia and China have been promising and have made estimation of the stress tensor possible from differently oriented core samples. The aim of this work has been to develop a similar estimation method in Finland (including both equipment and data reduction), and to test it on samples obtained from Olkiluoto, Eurajoki. The developed measuring system proved to work well. The quality of the obtained data varied, but the data were still interpretable. The results obtained from these tests were compared with results of previous overcoring measurements, and they showed quite good correlation. Thus, the results were promising, but the method still needs further development and more testing before a final decision on its feasibility can be made. (orig.)

  5. Spatial Distribution of Hydrologic Ecosystem Service Estimates: Comparing Two Models

    Science.gov (United States)

    Dennedy-Frank, P. J.; Ghile, Y.; Gorelick, S.; Logsdon, R. A.; Chaubey, I.; Ziv, G.

    2014-12-01

    We compare estimates of the spatial distribution of water quantity provided (annual water yield) from two ecohydrologic models: the widely-used Soil and Water Assessment Tool (SWAT) and the much simpler water models from the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) toolbox. These two models differ significantly in terms of complexity, timescale of operation, effort, and data required for calibration, and so are often used in different management contexts. We compare two study sites in the US: the Wildcat Creek Watershed (2083 km2) in Indiana, a largely agricultural watershed in a cold aseasonal climate, and the Upper Upatoi Creek Watershed (876 km2) in Georgia, a mostly forested watershed in a temperate aseasonal climate. We evaluate (1) quantitative estimates of water yield to explore how well each model represents this process, and (2) ranked estimates of water yield to indicate how useful the models are for management purposes where other social and financial factors may play significant roles. The SWAT and InVEST models provide very similar estimates of the water yield of individual subbasins in the Wildcat Creek Watershed (Pearson r = 0.92, slope = 0.89), and a similar ranking of the relative water yield of those subbasins (Spearman r = 0.86). However, the two models provide relatively different estimates of the water yield of individual subbasins in the Upper Upatoi Watershed (Pearson r = 0.25, slope = 0.14), and very different ranking of the relative water yield of those subbasins (Spearman r = -0.10). The Upper Upatoi watershed has a significant baseflow contribution due to its sandy, well-drained soils. InVEST's simple seasonality terms, which assume no change in storage over the time of the model run, may not accurately estimate water yield processes when baseflow provides such a strong contribution. Our results suggest that InVEST users take care in situations where storage changes are significant.
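
The agreement metrics used above are reproduced in a few lines below on made-up subbasin yields: Pearson correlation and regression slope speak to quantitative agreement, while Spearman rank correlation speaks to whether the two models would prioritize the same subbasins for management.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subbasin annual water yield (mm) from two models
swat = np.array([412.0, 388, 455, 300, 520, 610, 345, 480])
invest = np.array([405.0, 400, 430, 310, 540, 590, 360, 470])

pearson_r = stats.pearsonr(swat, invest)[0]    # magnitude agreement
spearman_r = stats.spearmanr(swat, invest)[0]  # ranking agreement
slope = np.polyfit(swat, invest, 1)[0]
print(f"Pearson r = {pearson_r:.2f}, slope = {slope:.2f}, "
      f"Spearman r = {spearman_r:.2f}")
```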

  6. A financial planning model for estimating hospital debt capacity.

    Science.gov (United States)

    Hopkins, D S; Heath, D; Levin, P J

    1982-01-01

    A computer-based financial planning model was formulated to measure the impact of a major capital improvement project on the fiscal health of Stanford University Hospital. The model had to be responsive to many variables and easy to use, so as to allow for the testing of numerous alternatives. Special efforts were made to identify the key variables that needed to be presented in the model and to include all known links between capital investment, debt, and hospital operating expenses. Growth in the number of patient days of care was singled out as a major source of uncertainty that would have profound effects on the hospital's finances. Therefore this variable was subjected to special scrutiny in terms of efforts to gauge expected demographic trends and market forces. In addition, alternative base runs of the model were made under three distinct patient-demand assumptions. Use of the model enabled planners at the Stanford University Hospital (a) to determine that a proposed modernization plan was financially feasible under a reasonable (that is, not unduly optimistic) set of assumptions and (b) to examine the major sources of risk. Other than patient demand, these sources were found to be gross revenues per patient, operating costs, and future limitations on government reimbursement programs. When the likely financial consequences of these risks were estimated, both separately and in combination, it was determined that even if two or more assumptions took a somewhat more negative turn than was expected, the hospital would be able to offset adverse consequences by a relatively minor reduction in operating costs.

  7. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

    The Portuguese labor force survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities for analysing and estimating unemployment and its spatial distribution across any region. The survey selects, according to a pre-established sampling criterion, a certain number of dwellings across the nation and records the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sample sizes in small areas, tend to produce fairly large sampling variations; therefore model-based methods, which tend to

  8. Parameter Estimation in Stochastic Grey-Box Models

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2004-01-01

    An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool and proves to have better performance both in terms of quality of estimates for nonlinear systems with significant diffusion and in terms of reproducibility. In particular, the new tool provides more accurate and more consistent estimates of the parameters of the diffusion term.

  9. Avoiding Boundary Estimates in Hierarchical Linear Models through Weakly Informative Priors

    Science.gov (United States)

    Chung, Yeojin; Rabe-Hesketh, Sophia; Gelman, Andrew; Dorie, Vincent; Liu, Jinchen

    2012-01-01

    Hierarchical or multilevel linear models are widely used for longitudinal or cross-sectional data on students nested in classes and schools, and are particularly important for estimating treatment effects in cluster-randomized trials, multi-site trials, and meta-analyses. The models can allow for variation in treatment effects, as well as…

  10. Best estimate radiation heat transfer model developed for TRAC-BD1

    International Nuclear Information System (INIS)

    Spore, J.W.; Giles, M.M.; Shumway, R.W.

    1981-01-01

    A best estimate radiation heat transfer model for analysis of BWR fuel bundles has been developed and compared with 8 x 8 fuel bundle data. The model includes surface-to-surface and surface-to-two-phase fluid radiation heat transfer. A simple method of correcting for anisotropic reflection effects has been included in the model

  11. A Bayesian framework for parameter estimation in dynamical models.

    Directory of Open Access Journals (Sweden)

    Flávio Codeço Coelho

    Full Text Available Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
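
The framework described is model-agnostic, but its core loop can be illustrated with a small random-walk Metropolis sampler wrapped around an SIR simulator. The data, priors, noise level, and proposal scale below are all invented for the sketch; a production analysis would rely on a dedicated, well-tested inference package rather than this hand-rolled sampler.

```python
import numpy as np
from scipy.integrate import odeint

rng = np.random.default_rng(7)

def sir(y, t, beta, gamma):
    """Classic SIR dynamics on population fractions."""
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

def simulate(beta, gamma, t):
    """Return the infectious fraction I(t) for given rates."""
    return odeint(sir, [0.99, 0.01, 0.0], t, args=(beta, gamma))[:, 1]

# Hypothetical observations: infectious fraction plus Gaussian noise
t = np.arange(0, 20.0, 1.0)
obs = simulate(1.5, 0.5, t) + rng.normal(0, 0.005, t.size)

def log_post(theta):
    beta, gamma = theta
    if beta <= 0 or gamma <= 0:          # flat priors on the positive axis
        return -np.inf
    resid = obs - simulate(beta, gamma, t)
    return -0.5 * np.sum(resid**2) / 0.005**2

# Random-walk Metropolis
theta, lp = np.array([1.0, 1.0]), log_post(np.array([1.0, 1.0]))
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[1000:])               # discard burn-in
print("posterior means (beta, gamma):", chain.mean(axis=0).round(2))
```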

  12. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.

  13. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  14. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used in the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions to the operation of grid-connected power converters. This paper describes a quasi-passive method for estimating the line impedance of the distribution electricity network. The method uses a model-based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi

  15. Model Year 2017 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.
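
    The Guide's yearly-cost arithmetic is simple enough to sketch; the annual mileage, fuel price and 55/45 city/highway weighting below are assumptions for illustration, not EPA figures.

```python
# Sketch of the Guide's yearly-cost arithmetic; the mileage, price and
# 55/45 city/highway split are assumptions for illustration.
def annual_fuel_cost(city_mpg, hwy_mpg, miles=15_000, fuel_price=3.50):
    combined_mpg = 1 / (0.55 / city_mpg + 0.45 / hwy_mpg)  # harmonic blend
    return miles / combined_mpg * fuel_price

print(f"${annual_fuel_cost(24, 32):,.0f} per year")
```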

  16. Model Year 2012 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2011-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  17. Model Year 2013 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2012-12-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  18. Model Year 2011 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2010-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  19. Model Year 2018 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2017-12-07

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  20. Models of economic geography: dynamics, estimation and policy evaluation

    OpenAIRE

    Knaap, Thijs

    2004-01-01

    In this thesis we look at economic geography models from a number of angles. We started by placing the theory in a context of preceding theories, both earlier work on spatial economics and other children of the monopolistic competition ‘revolution.’ Next, we looked at the theoretical properties of these models, especially when we allow firms to have different demand functions for intermediate goods. We estimated the model using a dataset on US states, and computed a number of counterfactuals....

  1. System Estimation of Panel Data Models under Long-Range Dependence

    DEFF Research Database (Denmark)

    Ergemen, Yunus Emre

    A general dynamic panel data model is considered that incorporates individual and interactive fixed effects allowing for contemporaneous correlation in model innovations. The model accommodates general stationary or nonstationary long-range dependence through interactive fixed effects...... and innovations, removing the necessity to perform a priori unit-root or stationarity testing. Moreover, persistence in innovations and interactive fixed effects allows for cointegration; innovations can also have vector-autoregressive dynamics; deterministic trends can be featured. Estimations are performed...

  2. Input-output model for MACCS nuclear accident impacts estimation

    Energy Technology Data Exchange (ETDEWEB)

    Outkin, Alexander V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bixler, Nathan E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vargas, Vanessa N [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-27

    Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
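
    As a hedged illustration of the Input-Output mechanics behind such loss estimates (the technical coefficients and demand shock below are invented, not REAcct data), total sectoral output x solves x = Ax + d, so losses follow from the Leontief inverse:

```python
# Hedged sketch of the Input-Output mechanics (a made-up 3-sector economy,
# not REAcct data): total output x solves x = A x + d, so x = (I - A)^-1 d.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],        # inter-industry coefficients
              [0.15, 0.10, 0.10],
              [0.05, 0.25, 0.05]])
d_base = np.array([100.0, 80.0, 60.0])   # final demand by sector ($M)

leontief = np.linalg.inv(np.eye(3) - A)
x_base = leontief @ d_base

# Suppose an evacuation removes 40% of sector-2 final demand for a period:
d_event = d_base * np.array([1.0, 0.6, 1.0])
loss = (x_base - leontief @ d_event).sum()
print(f"total gross-output loss: ${loss:.1f}M")   # stand-in for the GDP-loss step
```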

  3. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.

  4. A single model procedure for estimating tank calibration equations

    International Nuclear Information System (INIS)

    Liebetrau, A.M.

    1997-10-01

    A fundamental component of any accountability system for nuclear materials is a tank calibration equation that relates the height of liquid in a tank to its volume. Tank volume calibration equations are typically determined from pairs of height and volume measurements taken in a series of calibration runs. After raw calibration data are standardized to a fixed set of reference conditions, the calibration equation is typically fit by dividing the data into several segments--corresponding to regions in the tank--and independently fitting the data for each segment. The estimates obtained for individual segments must then be combined to obtain an estimate of the entire calibration function. This process is tedious and time-consuming. Moreover, uncertainty estimates may be misleading because it is difficult to properly model run-to-run variability and between-segment correlation. In this paper, the authors describe a model whose parameters can be estimated simultaneously for all segments of the calibration data, thereby eliminating the need for segment-by-segment estimation. The essence of the proposed model is to define a suitable polynomial to fit to each segment and then extend its definition to the domain of the entire calibration function, so that it (the entire calibration function) can be expressed as the sum of these extended polynomials. The model provides defensible estimates of between-run variability and yields a proper treatment of between-segment correlations. A portable software package, called TANCS, has been developed to facilitate the acquisition, standardization, and analysis of tank calibration data. The TANCS package was used for the calculations in an example presented to illustrate the unified modeling approach described in this paper. With TANCS, a trial calibration function can be estimated and evaluated in a matter of minutes.
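
    A hedged sketch of the single-model idea (not the TANCS implementation): by using a basis that is polynomial within each segment yet defined over the whole height range, one least-squares solve fits the entire calibration function at once. The knots, data and polynomial degree below are invented for illustration.

```python
# Hedged sketch of the single-model idea (not the TANCS implementation):
# a basis that is quadratic within each segment yet defined over the whole
# height range lets one least-squares solve fit all segments at once.
import numpy as np

rng = np.random.default_rng(3)
h = np.sort(rng.uniform(0, 300, 120))                  # liquid heights (cm)
V = 5.0 * h + 0.004 * h**2 + rng.normal(0, 2, h.size)  # synthetic volumes (L)
knots = [100.0, 220.0]                                 # assumed segment bounds

def basis(h, knots):
    cols = [np.ones_like(h), h, h**2]
    cols += [np.clip(h - k, 0, None) ** 2 for k in knots]  # truncated powers
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(basis(h, knots), V, rcond=None)
print(basis(np.array([150.0]), knots) @ beta)          # volume at h = 150 cm
```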

  5. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that the method of evaluating the geometric mean suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the method using the geometric mean. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
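
    A toy sketch of the thermodynamic (power-posterior) idea under stated assumptions: for y_i ~ N(mu, 1) with prior mu ~ N(0, 1), each tempered posterior is Gaussian in closed form, so the identity log Z = ∫₀¹ E_β[log L] dβ can be checked against the exact marginal likelihood. Real applications replace the closed-form expectations with MCMC runs at each heating coefficient.

```python
# Toy sketch under stated assumptions: y_i ~ N(mu, 1), prior mu ~ N(0, 1).
# The tempered posteriors are Gaussian in closed form, so thermodynamic
# integration can be checked against the exact marginal likelihood.
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(0.7, 1.0, size=25)
n, S, SS = y.size, y.sum(), (y**2).sum()

def expected_loglik(beta):
    prec = 1.0 + beta * n                  # tempered posterior precision
    m, v = beta * S / prec, 1.0 / prec     # tempered posterior mean, variance
    e_sq = SS - 2 * m * S + n * (m**2 + v)
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * e_sq

betas = np.linspace(0.0, 1.0, 41)          # the "heating coefficient" schedule
vals = np.array([expected_loglik(b) for b in betas])
logZ_ti = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(betas))  # trapezoid rule

logZ_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(1 + n)
              - 0.5 * (SS - S**2 / (1 + n)))
print(logZ_ti, logZ_exact)                 # the two agree closely
```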

  6. Vision-based stress estimation model for steel frame structures with rigid links

    Science.gov (United States)

    Park, Hyo Seon; Park, Jun Su; Oh, Byung Kwan

    2017-07-01

    This paper presents a stress estimation model for the safety evaluation of steel frame structures with rigid links using a vision-based monitoring system. In this model, the deformed shape of a structure under external loads is estimated via displacements measured by a motion capture system (MCS), which is a non-contact displacement measurement device. During the estimation of the deformed shape, the effective lengths of the rigid link ranges in the frame structure are identified. The radius of the curvature of the structural member to be monitored is calculated using the estimated deformed shape and is employed to estimate stress. Using MCS in the presented model, the safety of a structure can be assessed gauge-freely. In addition, because the stress is directly extracted from the radius of the curvature obtained from the measured deformed shape, information on the loadings and boundary conditions of the structure is not required. Furthermore, the model, which includes the identification of the effective lengths of the rigid links, can consider the influences of the stiffness of the connection and support on the deformation in the stress estimation. To verify the applicability of the presented model, static loading tests for a steel frame specimen were conducted. By comparing the stress estimated by the model with the measured stress, the validity of the model was confirmed.
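
    The core curvature-to-stress step can be sketched as follows (illustrative numbers, not the paper's experiment): fit the measured deflections to get the deformed shape, differentiate twice for curvature, and convert to bending stress via sigma = E·c·kappa, where c is the distance from the neutral axis to the monitored fiber.

```python
# Illustrative numbers, not the paper's experiment: fit the measured
# deflections, differentiate twice for curvature, then sigma = E * c * kappa.
import numpy as np

x = np.linspace(0.0, 3.0, 7)                   # marker positions (m)
w = 1e-3 * np.array([0.0, 0.9, 1.6, 2.0, 1.9, 1.3, 0.0])  # deflections (m)

coeffs = np.polyfit(x, w, 4)                   # smooth deformed-shape estimate
kappa = np.polyval(np.polyder(coeffs, 2), x)   # curvature ~ w''(x) (1/m)

E, c = 210e9, 0.15                             # steel modulus (Pa), fiber distance (m)
print(E * c * kappa / 1e6)                     # bending stress (MPa) along member
```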

  7. Efficient and robust estimation for longitudinal mixed models for binary data

    DEFF Research Database (Denmark)

    Holst, René

    2009-01-01

    This paper proposes a longitudinal mixed model for binary data. The model extends the classical Poisson trick, in which a binomial regression is fitted by switching to a Poisson framework. A recent estimating equations method for generalized linear longitudinal mixed models, called GEEP, is used...... as a vehicle for fitting the conditional Poisson regressions, given a latent process of serial correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson-type estimating...... equations, using second moments only. Random effects are predicted by BLUPs. The method provides a computationally efficient and robust approach to the estimation of longitudinal clustered binary data and accommodates linear and non-linear models. A simulation study is used for validation and finally...

  8. Correcting the bias of empirical frequency parameter estimators in codon models.

    Directory of Open Access Journals (Sweden)

    Sergei Kosakovsky Pond

    2010-07-01

    Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators is biased, and that this bias has an adverse effect on goodness of fit and estimates of substitution rates. We propose a "corrected" empirical estimator that begins with observed nucleotide counts, but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the -style estimators.
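
    A simplified sketch of the stop-codon adjustment under discussion (this renormalization is the textbook construction, not the authors' exact corrected estimator, and the position-specific frequency table is invented):

```python
# Simplified sketch of the stop-codon adjustment: build codon frequencies
# from position-specific nucleotide frequencies, drop stop codons, and
# renormalize. The paper's corrected estimator goes further; the frequency
# table below is invented.
from itertools import product

import numpy as np

nts, stops = "ACGT", {"TAA", "TAG", "TGA"}
f = np.array([[0.30, 0.20, 0.30, 0.20],   # position 1 nucleotide frequencies
              [0.25, 0.25, 0.25, 0.25],   # position 2
              [0.20, 0.30, 0.20, 0.30]])  # position 3

freqs = {c: np.prod([f[p, nts.index(nt)] for p, nt in enumerate(c)])
         for c in map("".join, product(nts, repeat=3)) if c not in stops}
total = sum(freqs.values())
freqs = {c: p / total for c, p in freqs.items()}  # renormalize over 61 codons
print(len(freqs), round(freqs["ATG"], 5))
```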

  9. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm to a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.

  10. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demand of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the existing error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation remains unknown. This study summarizes three equivalent circuit lithium-ion battery models, namely, the Thevenin, PNGV, and DP models. The model parameters are identified through a hybrid pulse power characterization test. The three models are evaluated, and SOC estimation conducted with the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation between the standard deviation and normalized RMSE of the model error and those of the SOC estimation error are studied and compared; these quantities exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: the dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using the Kalman filter.

  11. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require an up-to-date base of parameters for the models of generating units, including the models of synchronous generators. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) into the generator voltage regulation channel. The parameters were estimated by minimizing an objective function defined as the mean square error of deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used to minimize the objective function. The paper also describes the filter system used for filtering the noisy measurement waveforms and gives calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  12. Estimating and Forecasting Generalized Fractional Long Memory Stochastic Volatility Models

    Directory of Open Access Journals (Sweden)

    Shelton Peiris

    2017-12-01

    This paper considers a flexible class of time series models generated by Gegenbauer polynomials incorporating long memory in the stochastic volatility (SV) components in order to develop the General Long Memory SV (GLMSV) model. We examine the corresponding statistical properties of this model, discuss spectral likelihood estimation and investigate the finite sample properties via Monte Carlo experiments. We provide empirical evidence by applying the GLMSV model to three exchange rate return series and conjecture that the results of out-of-sample forecasts adequately support the use of the GLMSV model in certain financial applications.

  13. Model-Based Estimation of Ankle Joint Stiffness.

    Science.gov (United States)

    Misgeld, Berno J E; Zhang, Tony; Lüken, Markus J; Leonhardt, Steffen

    2017-03-29

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model's inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness during experimental test bench movements.

  14. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    Science.gov (United States)

    Wicke, Jason; Dumas, Geneviève A

    2014-06-03

    Segment estimates of mass, center of mass and moment of inertia are required input parameters for analyzing the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they did not adjoin another segment and sectioned ellipses if they did (e.g., upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. Copyright © 2014. Published by Elsevier Ltd.
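
    A hedged sketch of the stacked-elliptical-slice idea, with a uniform density in place of the paper's sex-specific non-uniform density functions; all numbers are invented for illustration.

```python
# Hedged sketch of the stacked-elliptical-slice idea with a uniform density
# (the paper uses sex-specific non-uniform densities); all numbers invented.
import numpy as np

z = np.linspace(0.0, 0.30, 31)             # slice positions along segment (m)
a = np.linspace(0.045, 0.030, z.size)      # frontal-plane semi-axes (m)
b = np.linspace(0.040, 0.025, z.size)      # sagittal-plane semi-axes (m)
rho, dz = 1050.0, z[1] - z[0]              # density (kg/m^3), slice thickness

dm = rho * np.pi * a * b * dz              # elliptical slice masses
mass = dm.sum()
z_cm = (dm * z).sum() / mass               # center of mass along the segment
I_cm = (dm * (z - z_cm) ** 2).sum()        # parallel-axis (slender) part of I
print(mass, z_cm, I_cm)
```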

  15. Estimation of Indirect Effects in the Presence of Unmeasured Confounding for the Mediator-Outcome Relationship in a Multilevel 2-1-1 Mediation Model

    Science.gov (United States)

    Talloen, Wouter; Moerkerke, Beatrijs; Loeys, Tom; De Naeghel, Jessie; Van Keer, Hilde; Vansteelandt, Stijn

    2016-01-01

    To assess the direct and indirect effect of an intervention, multilevel 2-1-1 studies with intervention randomized at the upper (class) level and mediator and outcome measured at the lower (student) level are frequently used in educational research. In such studies, the mediation process may flow through the student-level mediator (the within…

  16. More Eyes, (No Guns,) Less Crime: Estimating the Effects of Unarmed Private Patrols on Crime Using a Bayesian Structural Time-Series Model

    NARCIS (Netherlands)

    P.P. Liu (Paul); M. Fabbri (Marco)

    2016-01-01

    This work studies the effect of unarmed private security patrols on crime. We make use of an initiative, triggered by an arguably exogenous event, consisting in hiring unarmed private security agents to patrol, observe and report to ordinary police criminal activities within a

  17. Combining Empirical and Stochastic Models for Extreme Floods Estimation

    Science.gov (United States)

    Zemzami, M.; Benaabidate, L.

    2013-12-01

    Hydrological models can be defined as physical, mathematical or empirical. The latter class uses mathematical equations independent of the physical processes involved in the hydrological system. Linear regression and Gradex (Gradient of Extreme values) are classic examples of empirical models. However, conventional empirical models are still used as tools for hydrological analysis by probabilistic approaches. In many regions of the world, watersheds are not gauged. This is true even in developed countries, where the gauging network has continued to decline as a result of the lack of human and financial resources. Indeed, the obvious lack of data in these watersheds makes it impossible to apply some basic empirical models for daily forecasting. We therefore had to find a combination of rainfall-runoff models in which it would be possible to create our own data and use them to estimate the flow. Estimating design floods illustrates the difficulties facing the hydrologist when constructing a standard empirical model in basins where hydrological information is rare. A climate-hydrological model based on frequency analysis was constructed to estimate the design flood in the Anseghmir catchments, Morocco. This complex model was chosen for its ability to be applied in watersheds where hydrological information is insufficient. The method was found to be a powerful tool for estimating the design flood of the watershed as well as other hydrological elements (runoff, water volumes...). The hydrographic characteristics and climatic parameters were used to estimate the runoff, water volumes and design flood for different return periods.
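
    A hedged sketch of the frequency-analysis core (an assumed Gumbel marginal fitted to synthetic annual maxima, not the Anseghmir data): fit the extreme-value distribution and read the T-year design flood off its quantile function.

```python
# Sketch of the frequency-analysis step with an assumed Gumbel marginal and
# synthetic annual maxima (not the Anseghmir data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
annual_max = stats.gumbel_r.rvs(loc=120, scale=40, size=35, random_state=rng)

loc, scale = stats.gumbel_r.fit(annual_max)      # fit location and scale
for T in (10, 50, 100):
    q = stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
    print(f"{T}-year design flood: {q:.0f} m^3/s")
```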

  18. Estimated effects of temperature on secondary organic aerosol concentrations.

    Science.gov (United States)

    Sheehan, P E; Bowman, F M

    2001-06-01

    The temperature-dependence of secondary organic aerosol (SOA) concentrations is explored using an absorptive-partitioning model under a variety of simplified atmospheric conditions. Experimentally determined partitioning parameters for high yield aromatics are used. Variation of vapor pressures with temperature is assumed to be the main source of temperature effects. Known semivolatile products are used to define a modeling range of vaporization enthalpy of 10-25 kcal/mol. The effect of diurnal temperature variations on model predictions for various assumed vaporization enthalpies, precursor emission rates, and primary organic concentrations is explored. Results show that temperature is likely to have a significant influence on SOA partitioning and resulting SOA concentrations. A 10 degrees C decrease in temperature is estimated to increase SOA yields by 20-150%, depending on the assumed vaporization enthalpy. In model simulations, high daytime temperatures tend to reduce SOA concentrations by 16-24%, while cooler nighttime temperatures lead to a 22-34% increase, compared to constant temperature conditions. Results suggest that currently available constant temperature partitioning coefficients do not adequately represent atmospheric SOA partitioning behavior. Air quality models neglecting the temperature dependence of partitioning are expected to underpredict peak SOA concentrations as well as mistime their occurrence.
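
    A sketch of the temperature scaling that drives such results, under assumed two-product parameters (not the paper's fitted values): each partitioning coefficient follows the Clausius-Clapeyron form K(T) = K_ref (T/T_ref) exp[(dH_vap/R)(1/T - 1/T_ref)], so cooler temperatures raise K and hence the SOA yield.

```python
# Sketch of the temperature scaling in a two-product absorptive-partitioning
# model; alpha, K_ref and the other constants are illustrative, not the
# paper's fitted values.
import numpy as np

R = 8.314e-3                          # gas constant (kJ mol^-1 K^-1)
alpha = np.array([0.07, 0.30])        # product mass yields
K_ref = np.array([0.050, 0.002])      # partitioning coeffs at T_ref (m^3/ug)
dH, T_ref, M0 = 17.5, 298.0, 30.0     # kJ/mol (mid-range), K, ug/m^3

def soa_yield(T):
    K = K_ref * (T / T_ref) * np.exp(dH / R * (1.0 / T - 1.0 / T_ref))
    return M0 * np.sum(alpha * K / (1.0 + K * M0))

print(soa_yield(298.0), soa_yield(288.0))  # cooler air -> higher SOA yield
```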

  19. Time-integrated activity coefficient estimation for radionuclide therapy using PET and a pharmacokinetic model: A simulation study on the effect of sampling schedule and noise

    Energy Technology Data Exchange (ETDEWEB)

    Hardiansyah, Deni [Medical Radiation Physics/Radiation Protection, Medical Faculty Mannheim, Universitätsmedizin Mannheim, Heidelberg University, Mannheim 68167, Germany and Department of Radiation Oncology, Medical Faculty Mannheim, Universitätsmedizin Mannheim, Heidelberg University, Mannheim 68167 (Germany); Guo, Wei; Glatting, Gerhard, E-mail: gerhard.glatting@medma.uni-heidelberg.de [Medical Radiation Physics/Radiation Protection, Medical Faculty Mannheim, Universitätsmedizin Mannheim, Heidelberg University, Mannheim 68167 (Germany); Kletting, Peter [Department of Nuclear Medicine, Ulm University, Ulm 89081 (Germany); Mottaghy, Felix M. [Department of Nuclear Medicine, University Hospital, RWTH Aachen University, Aachen 52074, Germany and Department of Nuclear Medicine, Maastricht University Medical Center MUMC+, Maastricht 6229 (Netherlands)

    2016-09-15

    Purpose: The aim of this study was to investigate the accuracy of PET-based treatment planning for predicting the time-integrated activity coefficients (TIACs). Methods: The parameters of a physiologically based pharmacokinetic (PBPK) model were fitted to the biokinetic data of 15 patients to derive assumed true parameters and were used to construct true mathematical patient phantoms (MPPs). Biokinetics of 150 MBq 68Ga-DOTATATE-PET was simulated with different noise levels [fractional standard deviation (FSD) 10%, 1%, 0.1%, and 0.01%], and seven combinations of measurements at 30 min, 1 h, and 4 h p.i. PBPK model parameters were fitted to the simulated noisy PET data using population-based Bayesian parameters to construct predicted MPPs. Therapy simulations were performed as 30 min infusion of 90Y-DOTATATE of 3.3 GBq in both true and predicted MPPs. Prediction accuracy was then calculated as relative variability v_organ between TIACs from both MPPs. Results: Large variability values of one-time-point protocols [e.g., FSD = 1%, 240 min p.i., v_kidneys = (9 ± 6)%, and v_tumor = (27 ± 26)%] show inaccurate prediction. Accurate TIAC prediction of the kidneys was obtained for the case of two measurements (1 and 4 h p.i.), e.g., FSD = 1%, v_kidneys = (7 ± 3)%, and v_tumor = (22 ± 10)%, or three measurements, e.g., FSD = 1%, v_kidneys = (7 ± 3)%, and v_tumor = (22 ± 9)%. Conclusions: 68Ga-DOTATATE-PET measurements could possibly be used to predict the TIACs of 90Y-DOTATATE when using a PBPK model and population-based Bayesian parameters. The two-time-point measurement at 1 and 4 h p.i. with noise up to FSD = 1% allows an accurate prediction of the TIACs in kidneys.

  20. The Effect of Some Estimators of Between-Study Variance on Random

    African Journals Online (AJOL)

    Samson Henry Dogo

    the first step to such objectivity (Schmidt, 1992), allows to combine results from many studies and accurately ... Schmidt, 2000) due to its ability to account for variation in effects across the studies. Random-effects model ... (2015), and each of the estimators differs in terms of their bias and precision in estimation. By definition ...

  1. Parameter and State Estimator for State Space Models

    Directory of Open Access Journals (Sweden)

    Ruifeng Ding

    2014-01-01

    This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and to substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.

  2. Analyzing Health-Related Quality of Life Data to Estimate Parameters for Cost-Effectiveness Models: An Example Using Longitudinal EQ-5D Data from the SHIFT Randomized Controlled Trial.

    Science.gov (United States)

    Griffiths, Alison; Paracha, Noman; Davies, Andrew; Branscombe, Neil; Cowie, Martin R; Sculpher, Mark

    2017-03-01

    The aim of this article is to discuss methods used to analyze health-related quality of life (HRQoL) data from randomized controlled trials (RCTs) for decision analytic models. The analysis presented in this paper was used to provide HRQoL data for the ivabradine health technology assessment (HTA) submission in chronic heart failure. We have used a large, longitudinal EuroQol five-dimension questionnaire (EQ-5D) dataset from the Systolic Heart Failure Treatment with the I_f Inhibitor Ivabradine Trial (SHIFT) (clinicaltrials.gov: NCT02441218) to illustrate issues and methods. HRQoL weights (utility values) were estimated from a mixed regression model developed using SHIFT EQ-5D data (n = 5313 patients). The regression model was used to predict HRQoL outcomes according to treatment, patient characteristics, and key clinical outcomes for patients with a heart rate ≥75 bpm. Ivabradine was associated with an HRQoL weight gain of 0.01. HRQoL weights differed according to New York Heart Association (NYHA) class (NYHA I-IV, no hospitalization: standard care 0.82-0.46; ivabradine 0.84-0.47). A reduction in HRQoL weight was associated with hospitalizations within 30 days of an HRQoL assessment visit, with this reduction varying by NYHA class [-0.07 (NYHA I) to -0.21 (NYHA IV)]. The mixed model explained variation in EQ-5D data according to key clinical outcomes and patient characteristics, providing essential information for long-term predictions of patient HRQoL in the cost-effectiveness model. This model was also used to estimate the loss in HRQoL associated with hospitalizations. In SHIFT many hospitalizations did not occur close to EQ-5D visits; hence, any temporary changes in HRQoL associated with such events would not be captured fully in observed RCT evidence, but could be predicted in our cost-effectiveness analysis using the mixed model. Given the large reduction in hospitalizations associated with ivabradine this was an important feature of the analysis. The

  3. Biomass models to estimate carbon stocks for hardwood tree species

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz-Peinado, R.; Montero, G.; Rio, M. del

    2012-11-01

    To estimate forest carbon pools from forest inventories it is necessary to have biomass models or biomass expansion factors. In this study, tree biomass models were developed for the main hardwood forest species in Spain: Alnus glutinosa, Castanea sativa, Ceratonia siliqua, Eucalyptus globulus, Fagus sylvatica, Fraxinus angustifolia, Olea europaea var. sylvestris, Populus x euramericana, Quercus canariensis, Quercus faginea, Quercus ilex, Quercus pyrenaica and Quercus suber. Different tree biomass components were considered: stem with bark, branches of different sizes, above and belowground biomass. For each species, a system of equations was fitted using seemingly unrelated regression, fulfilling the additivity property between biomass components. Diameter and total height were explored as independent variables. All models included tree diameter, whereas for the majority of species total height was only considered in the stem biomass models and in some of the branch models. The comparison of the new biomass models with previous models fitted separately for each tree component indicated an improvement in the accuracy of the models: a mean reduction of 20% in the root mean square error and a mean increase in model efficiency of 7% in comparison with recently published models. The fitted models thus allow a more accurate estimation of the biomass stock of hardwood species from Spanish National Forest Inventory data. (Author) 45 refs.
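
    For illustration of the allometric fitting step (a single component only, on synthetic data; the paper fits all components jointly by seemingly unrelated regression so that they sum to the total): a power-law model B = a·D^b becomes linear after taking logs.

```python
# Single-component illustration on synthetic data; the paper fits all
# components jointly by seemingly unrelated regression so that component
# biomasses sum to the total, which is not reproduced here.
import numpy as np

rng = np.random.default_rng(6)
D = rng.uniform(10, 60, 80)                          # diameters (cm)
B = 0.09 * D**2.4 * rng.lognormal(0.0, 0.15, D.size) # synthetic biomass (kg)

X = np.column_stack([np.ones_like(D), np.log(D)])    # ln B = ln a + b ln D
(ln_a, b), *_ = np.linalg.lstsq(X, np.log(B), rcond=None)
print(f"B = {np.exp(ln_a):.3f} * D^{b:.2f}")
```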

  4. A single model procedure for tank calibration function estimation

    International Nuclear Information System (INIS)

    York, J.C.; Liebetrau, A.M.

    1995-01-01

    Reliable tank calibrations are a vital component of any measurement control and accountability program for bulk materials in a nuclear reprocessing facility. Tank volume calibration functions used in nuclear materials safeguards and accountability programs are typically constructed from several segments, each of which is estimated independently. Ideally, the segments correspond to structural features in the tank. In this paper the authors use an extension of the Thomas-Liebetrau model to estimate the entire calibration function in a single step. This procedure automatically takes significant run-to-run differences into account and yields an estimate of the entire calibration function in one operation. As with other procedures, the first step is to define suitable calibration segments. Next, a polynomial of low degree is specified for each segment. In contrast with the conventional practice of constructing a separate model for each segment, this information is used to set up the design matrix for a single model that encompasses all of the calibration data. Estimation of the model parameters is then done using conventional statistical methods. The method described here has several advantages over traditional methods. First, modeled run-to-run differences can be taken into account automatically at the estimation step. Second, no interpolation is required between successive segments. Third, variance estimates are based on all the data, rather than that from a single segment, with the result that discontinuities in confidence intervals at segment boundaries are eliminated. Fourth, the restrictive assumption of the Thomas-Liebetrau method, that the measured volumes be the same for all runs, is not required. Finally, the proposed methods are readily implemented using standard statistical procedures and widely-used software packages.

  5. Groundwater Modelling For Recharge Estimation Using Satellite Based Evapotranspiration

    Science.gov (United States)

    Soheili, Mahmoud; (Tom) Rientjes, T. H. M.; (Christiaan) van der Tol, C.

    2017-04-01

    Groundwater movement is influenced by several factors and processes in the hydrological cycle, of which recharge is of high relevance. Since the amount of extractable aquifer water directly relates to the recharge amount, estimation of recharge is a prerequisite of groundwater resources management. Recharge is highly affected by water loss mechanisms, the major one of which is actual evapotranspiration (ETa). It is, therefore, essential to have a detailed assessment of the ETa impact on groundwater recharge. The objective of this study was to evaluate how recharge was affected when satellite-based evapotranspiration was used instead of in-situ based ETa in the Salland area, the Netherlands. The Methodology for Interactive Planning for Water Management (MIPWA) model setup, which includes a groundwater model for the northern part of the Netherlands, was used for recharge estimation. The Surface Energy Balance Algorithm for Land (SEBAL) based actual evapotranspiration maps from Waterschap Groot Salland were also used. Comparison of SEBAL-based ETa estimates with in-situ based estimates in the Netherlands showed that these SEBAL estimates were not reliable. As such, the results could not serve to calibrate root zone parameters in the CAPSIM model. The annual cumulative ETa map produced by the model showed that the maximum amount of evapotranspiration occurs in mixed forest areas in the northeast and a portion of the central parts. Estimates ranged from 579 mm to a minimum of 0 mm in the highest elevated areas with woody vegetation in the southeast of the region. Variations in mean seasonal hydraulic head and groundwater level for each layer showed that the hydraulic gradient follows elevation in the Salland area from southeast (maximum) to northwest (minimum) of the region, which depicts the groundwater flow direction. The mean seasonal water balance in the CAPSIM component was evaluated to represent recharge estimation in the first layer. The highest estimated recharge flux was for autumn

  6. Identification and estimation of survivor average causal effects.

    Science.gov (United States)

    Tchetgen Tchetgen, Eric J

    2014-09-20

    In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if, as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

  7. Model-Based Estimation of Ankle Joint Stiffness

    Directory of Open Access Journals (Sweden)

    Berno J. E. Misgeld

    2017-03-01

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model's inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness during experimental test bench movements.

  9. Motion estimation by data assimilation in reduced dynamic models

    International Nuclear Information System (INIS)

    Drifi, Karim

    2013-01-01

    Motion estimation is a major challenge in the field of image sequence analysis. This thesis is a study of the dynamics of geophysical flows visualized by satellite imagery. Satellite image sequences are currently underused for the task of motion estimation. A good understanding of geophysical flows allows a better analysis and forecast of phenomena in domains such as oceanography and meteorology. Data assimilation provides an excellent framework for achieving a compromise between heterogeneous data, especially numerical models and observations. Hence, in this thesis we set out to apply variational data assimilation methods to estimate motion in image sequences. Since one of the major drawbacks of applying these assimilation techniques is the considerable computation time and memory required, we define and use a model reduction method in order to significantly decrease the necessary computation time and memory. We then explore the possibilities that reduced models provide for motion estimation, particularly the possibility of strictly imposing some known constraints on the computed solutions. In particular, we show how to estimate a divergence-free motion with boundary conditions on a complex spatial domain.

  10. Estimating Drilling Cost and Duration Using Copulas Dependencies Models

    Directory of Open Access Journals (Sweden)

    M. Al Kindi

    2017-03-01

    Estimation of drilling budget and duration is a high-level challenge for the oil and gas industry, due to the many uncertain activities in the drilling procedure, such as material prices, overhead cost, inflation, oil prices, well type, and depth of drilling. Therefore, it is essential to consider all these uncertain variables and the nature of the relationships between them. This eventually minimizes the level of uncertainty while still yielding "good" point estimates for budget and duration given the well type. In this paper, copula probability theory is used to model the dependencies between cost/duration and MRI (mechanical risk index). The MRI is a mathematical computation which relates various drilling factors such as water depth, measured depth, and true vertical depth, in addition to mud weight and horizontal displacement. In general, the value of the MRI is utilized as an input for the drilling cost and duration estimations. Therefore, modeling the uncertain dependencies between the MRI and both cost and duration using copulas is important. The cost and duration estimates for each well were extracted from the copula dependency model, with over 10,000 scenarios simulated. These new estimates were later compared to the actual data in order to validate the performance of the procedure. Most of the wells show a moderate to weak dependence on the MRI, which means that the variation in these wells can be related to the MRI, but not as its primary source.
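
    A hedged sketch of the copula machinery (a Gaussian copula with invented correlations and marginals; the paper selects copula families from well data): correlated normal draws are pushed through the normal CDF to uniforms, then through each variable's inverse CDF, giving dependent MRI/cost/duration scenarios.

```python
# Hedged sketch with a Gaussian copula, invented correlations and marginals
# (the paper selects copula families from well data): correlated normals ->
# uniforms -> each variable's inverse CDF gives dependent scenarios.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
corr = np.array([[1.0, 0.6, 0.5],     # MRI, cost, duration dependence
                 [0.6, 1.0, 0.7],
                 [0.5, 0.7, 1.0]])
z = rng.multivariate_normal(np.zeros(3), corr, size=10_000)
u = stats.norm.cdf(z)                 # uniforms joined by the copula

mri = stats.lognorm.ppf(u[:, 0], s=0.4, scale=8)      # assumed marginals
cost = stats.lognorm.ppf(u[:, 1], s=0.3, scale=12e6)
days = stats.gamma.ppf(u[:, 2], a=9, scale=7)

high = mri > np.quantile(mri, 0.75)   # summarize high-MRI wells
print(cost[high].mean() / 1e6, days[high].mean())
```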

  11. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation, as it provides asymptotic properties. In addition, it shows consistency as the sample size increases to infinity, illustrating that maximum likelihood estimation yields an unbiased estimator. Moreover, the parameter estimates obtained from maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results indicate that there is a negative relationship between rubber price and exchange rate for all selected countries.
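
    A minimal EM sketch for maximum likelihood fitting of a two-component Gaussian mixture (synthetic data, not the rubber-price/exchange-rate series; the specific mixture family used in the paper is an assumption here):

```python
# Minimal EM sketch for maximum likelihood fitting of a two-component
# Gaussian mixture on synthetic data (not the rubber-price series).
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 0.8, 200)])

w, mu, sd = 0.5, np.array([-0.5, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: responsibility of component 1 for each observation
    p1 = w * stats.norm.pdf(x, mu[0], sd[0])
    p2 = (1 - w) * stats.norm.pdf(x, mu[1], sd[1])
    r = p1 / (p1 + p2)
    # M-step: update weight, means and standard deviations
    w = r.mean()
    mu = np.array([(r * x).sum() / r.sum(),
                   ((1 - r) * x).sum() / (1 - r).sum()])
    sd = np.sqrt(np.array([(r * (x - mu[0])**2).sum() / r.sum(),
                           ((1 - r) * (x - mu[1])**2).sum() / (1 - r).sum()]))
print(w, mu, sd)
```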

  12. Nitrogen Fertilization Effects on Net Ecosystem and Net Primary Productivities as Determined from Flux Tower, Biometric, and Model Estimates for a Coastal Douglas-fir Forest in British Columbia

    Science.gov (United States)

    Trofymow, J. A.; Metsaranta, J. M.; Black, T. A.; Jassal, R. S.; Filipescu, C.

    2013-12-01

    In coastal BC, 6,000-10,000 ha of public and significant areas of private forest land are annually fertilized with nitrogen, with or without thinning, to increase merchantable wood and reduce rotation age. Fertilization has also been viewed as a way to increase carbon (C) sequestration in forests and obtain C offsets. Such offset projects must demonstrate additionality with reference to a baseline and include monitoring to verify net C gains over the project period. Models in combination with field-plot measurements are currently the accepted methods for most C offset protocols. On eastern Vancouver Island, measurements of net ecosystem production (NEP), ecosystem respiration (Re) and gross primary productivity (GPP) using the eddy-covariance (EC) technique as well as component C fluxes and stocks have been made since 1998 in an intermediate-aged Douglas-fir dominated forest planted in 1949. In January 2007 an area around the EC flux tower was aerially fertilized with 200 kg urea-N ha-1. Ground plots in the fertilized area and an adjacent unfertilized control area were also monitored for soil (Rs) and heterotrophic (Rh) respiration, litterfall, and tree growth. To determine fertilization effects on whole tree growth, sample trees were felled in both areas for the 4-year (2003-06) pre- and the 4-year (2007-10) post-fertilization periods and were compared with EC NEP estimates and tree-ring based NEP estimates from the Carbon Budget Model - Canadian Forest Sector (CBM-CFS3) for the same periods. Empirical equations using climate and C fluxes from 1998-2006 were derived to estimate what the EC fluxes would have been in 2007-10 for the fertilized area had it been unfertilized. Mean EC NEP for 2007-10 was 561 g C m-2 y-1, a 64% increase above pre-fertilization NEP (341 g C m-2 y-1) or a 28% increase above estimated unfertilized NEP (438 g C m-2 y-1). Most of the increase was attributed to increased tree C uptake (i.e., GPP), with little change in Re. In 2007 fertilization

  13. Modeling, estimation and optimal filtration in signal processing

    CERN Document Server

    Najim, Mohamed

    2010-01-01

    The purpose of this book is to provide graduate students and practitioners with traditional methods and more recent results for model-based approaches in signal processing.Firstly, discrete-time linear models such as AR, MA and ARMA models, their properties and their limitations are introduced. In addition, sinusoidal models are addressed.Secondly, estimation approaches based on least squares methods and instrumental variable techniques are presented.Finally, the book deals with optimal filters, i.e. Wiener and Kalman filtering, and adaptive filters such as the RLS, the LMS and the

  14. Working covariance model selection for generalized estimating equations.

    Science.gov (United States)

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.

  15. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.

  16. Bayesian Estimation of Small Effects in Exercise and Sports Science.

    Directory of Open Access Journals (Sweden)

    Kerrie L Mengersen

    The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and in a case study example provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL) and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a Placebo. The conclusions are consistent with those obtained using a 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.

  17. Bayesian Estimation of Small Effects in Exercise and Sports Science.

    Science.gov (United States)

    Mengersen, Kerrie L; Drovandi, Christopher C; Robert, Christian P; Pyne, David B; Gore, Christopher J

    2016-01-01

    The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and in a case study example provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL), and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a Placebo. The conclusions are consistent with those obtained using a 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
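    The probabilistic statements quoted above are straightforward to compute once posterior draws are available. A minimal sketch, assuming synthetic MCMC draws for a treatment difference and an illustrative smallest-worthwhile-change threshold (neither taken from the study):

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-in for MCMC draws of a treatment difference (e.g. % change)
effect_draws = rng.normal(loc=2.1, scale=1.0, size=20000)
swc = 1.0  # smallest worthwhile change (illustrative)

print(f"P(substantial increase) = {np.mean(effect_draws >  swc):.2f}")
print(f"P(trivial effect)       = {np.mean(np.abs(effect_draws) <= swc):.2f}")
print(f"P(substantial decrease) = {np.mean(effect_draws < -swc):.2f}")
```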

  18. Comparison of physically based catchment models for estimating Phosphorus losses

    OpenAIRE

    Nasr, Ahmed Elssidig; Bruen, Michael

    2003-01-01

    As part of a large EPA-funded research project, coordinated by TEAGASC, the Centre for Water Resources Research at UCD reviewed the available distributed physically based catchment models with potential for use in estimating phosphorus losses in implementing the Water Framework Directive. Three models, representative of different levels of approach and complexity, were chosen and implemented for a number of Irish catchments. This paper reports on (i) the lessons and experience...

  19. A model-based approach to estimating forest area

    Science.gov (United States)

    Ronald E. McRoberts

    2006-01-01

    A logistic regression model based on forest inventory plot data and transformations of Landsat Thematic Mapper satellite imagery was used to predict the probability of forest for 15 study areas in Indiana, USA, and 15 in Minnesota, USA. Within each study area, model-based estimates of forest area were obtained for circular areas with radii of 5 km, 10 km, and 15 km and...

  20. Estimating and Testing Mediation Effects with Censored Data

    Science.gov (United States)

    Wang, Lijuan; Zhang, Zhiyong

    2011-01-01

    This study investigated influences of censored data on mediation analysis. Mediation effect estimates can be biased and inefficient with censoring on any one of the input, mediation, and output variables. A Bayesian Tobit approach was introduced to estimate and test mediation effects with censored data. Simulation results showed that the Bayesian…

  1. Estimation of the effective distribution coefficient from the solubility constant

    International Nuclear Information System (INIS)

    Wang, Yug-Yea; Yu, C.

    1994-01-01

    An updated version of RESRAD has been developed by Argonne National Laboratory for the US Department of Energy to derive site-specific soil guidelines for residual radioactive material. In this updated version, many new features have been added to the RESRAD code. One of the options is that a user can input a solubility constant to limit the leaching of contaminants. The leaching model used in the code requires the input of an empirical distribution coefficient, Kd, which represents the ratio of the solute concentration in soil to that in solution under equilibrium conditions. This paper describes the methodology developed to estimate an effective distribution coefficient, Kd, from the user-input solubility constant, and the use of the effective Kd for predicting the leaching of contaminants.
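    The idea can be illustrated with a simple equilibrium mass balance; the sketch below is an assumption-laden stand-in for the paper's methodology (the RESRAD formula itself is not quoted in this record), with symbols and units chosen for illustration.

```python
def effective_kd(total_conc, solubility, theta, bulk_density):
    """Illustrative effective distribution coefficient (cm^3/g).
    Assumes pore water sits at the solubility limit S and the rest of
    the inventory is treated as sorbed:
        C_total = theta * S + rho_b * Kd_eff * S
    total_conc  : total contaminant per unit soil volume (g/cm^3)
    solubility  : solubility-limited solution concentration (g/cm^3)
    theta       : volumetric water content (-)
    bulk_density: dry bulk density rho_b (g/cm^3)"""
    sorbed = total_conc - theta * solubility
    return max(0.0, sorbed / (bulk_density * solubility))

print(effective_kd(total_conc=1e-6, solubility=1e-8, theta=0.3, bulk_density=1.5))
```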

  2. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Rust, John; Schjerning, Bertel

    2015-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). They used an inefficient version of the nested fixed point algorithm that relies on successive app...

  3. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Jinhyuk, Lee; Rust, John

    2016-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). Their implementation of the nested fixed point algorithm used successive approximations to solve t...

  4. Parameter estimation in stochastic mammogram model by heuristic optimization techniques.

    NARCIS (Netherlands)

    Selvan, S.E.; Xavier, C.C.; Karssemeijer, N.; Sequeira, J.; Cherian, R.A.; Dhala, B.Y.

    2006-01-01

    The appearance of disproportionately large amounts of high-density breast parenchyma in mammograms has been found to be a strong indicator of the risk of developing breast cancer. Hence, the breast density model is popular for risk estimation or for monitoring breast density change in prevention or

  5. A general predictive model for estimating monthly ecosystem evapotranspiration

    Science.gov (United States)

    Ge Sun; Karrin Alstad; Jiquan Chen; Shiping Chen; Chelcy R. Ford; et al.

    2011-01-01

    Accurately quantifying evapotranspiration (ET) is essential for modelling regional-scale ecosystem water balances. This study assembled an ET data set estimated from eddy flux and sapflow measurements for 13 ecosystems across a large climatic and management gradient from the United States, China, and Australia. Our objectives were to determine the relationships among...

  6. Revaluating the Tanzi-Model to Estimate the Underground Economy

    NARCIS (Netherlands)

    Ferwerda, J.; Deleanu, I.; Unger, B.

    Since the early 1980s, interest in the nature and size of the non-measured economy (both the informal and the illegal one) has grown among researchers in the US. Since then, several models to estimate the shadow and/or the underground economy have appeared in the literature, each with its own

  7. Bayesian nonparametric estimation of hazard rate in monotone Aalen model

    Czech Academy of Sciences Publication Activity Database

    Timková, Jana

    2014-01-01

    Vol. 50, No. 6 (2014), pp. 849-868 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords: Aalen model * Bayesian estimation * MCMC Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/timkova-0438210.pdf

  8. Models for the analytic estimation of low energy photon albedo

    International Nuclear Information System (INIS)

    Simovic, R.; Markovic, S.; Ljubenov, V.

    2005-01-01

    This paper presents monoenergetic models for estimating photon reflection in the energy range from 20 keV to 80 keV. Using the DP0 approximation of the H-function, we derive analytic expressions for the η and R functions in order to facilitate photon reflection analyses as well as radiation shield design. (author)

  9. Empirical Models for the Estimation of Global Solar Radiation in ...

    African Journals Online (AJOL)

    Empirical Models for the Estimation of Global Solar Radiation in Yola, Nigeria. ... and average daily wind speed (WS) over a three-year interval (2010-2012), measured using various instruments for Yola and collected from the records of the Center for Atmospheric Research (CAR), Anyigba, are presented and analyzed.

  10. Remote sensing estimates of impervious surfaces for pluvial flood modelling

    DEFF Research Database (Denmark)

    Kaspersen, Per Skougaard; Drews, Martin

    This paper investigates the accuracy of medium resolution (MR) satellite imagery in estimating impervious surfaces for European cities at the detail required for pluvial flood modelling. Using remote sensing techniques enables precise and systematic quantification of the influence of the past 30...

  11. Battery electric vehicle energy consumption modelling for range estimation

    NARCIS (Netherlands)

    Wang, J.; Besselink, I.J.M.; Nijmeijer, H.

    2017-01-01

    Range anxiety is considered as one of the major barriers to the mass adoption of battery electric vehicles (BEVs). One method to solve this problem is to provide accurate range estimation to the driver. This paper describes a vehicle energy consumption model considering the influence of weather

  12. Review Genetic prediction models and heritability estimates for ...

    African Journals Online (AJOL)

    edward

    2015-05-09

    Heritability estimates for functional longevity have been expressed on an original or a logarithmic scale with PH models. Ducrocq & Casella (1996) defined heritability on a logarithmic scale and modified it under simulation to incorporate the tri-gamma function (γ), as used by Sasaki et al. (2012) and Terawaki ...

  13. Mathematical models for estimating radio channels utilization when ...

    African Journals Online (AJOL)

    A definition of the radio channel utilization indicator is given. Mathematical models for assessing radio channel utilization during real-time flow transfer in a wireless self-organized network are presented. Experimental estimates of average radio channel utilization with and without buffering of ...

  14. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficul...

  15. Model-based state estimator for an intelligent tire

    NARCIS (Netherlands)

    Goos, J.; Teerhuis, A. P.; Schmeitz, A. J.C.; Besselink, I.; Nijmeijer, H.

    2017-01-01

    In this work a Tire State Estimator (TSE) is developed and validated using data from a tri-axial accelerometer, installed at the inner liner of the tire. The Flexible Ring Tire (FRT) model is proposed to calculate the tire deformation. For a rolling tire, this deformation is transformed into

  16. Model-based State Estimator for an Intelligent Tire

    NARCIS (Netherlands)

    Goos, J.; Teerhuis, A.P.; Schmeitz, A.J.C.; Besselink, I.J.M.; Nijmeijer, H.

    2016-01-01

    In this work a Tire State Estimator (TSE) is developed and validated using data from a tri-axial accelerometer, installed at the inner liner of the tire. The Flexible Ring Tire (FRT) model is proposed to calculate the tire deformation. For a rolling tire, this deformation is transformed into

  17. Temporal validation for landsat-based volume estimation model

    Science.gov (United States)

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  18. Depth Compensation Model for Gaze Estimation in Sport Analysis

    DEFF Research Database (Denmark)

    Batista Narcizo, Fabricio; Hansen, Dan Witzner

    2015-01-01

    is tested in a totally controlled environment with the aim of checking the influences of eye tracker parameters and ocular biometric parameters on its behavior. We also present a gaze estimation method based on epipolar geometry for binocular eye tracking setups. The depth compensation model has shown very...

  19. Models for estimation of carbon sequestered by Cupressus ...

    African Journals Online (AJOL)

    This study compared models for estimating carbon sequestered aboveground in Cupressus lusitanica plantation stands at Wondo Genet College of Forestry and Natural Resources, Ethiopia. Relationships of carbon storage with tree component and stand age were also investigated. Thirty trees of three different ages (5, ...

  20. An improved COCOMO software cost estimation model | Duke ...

    African Journals Online (AJOL)

    In this paper, we discuss the methodologies adopted previously in software cost estimation using the COnstructive COst MOdels (COCOMOs). From our analysis, COCOMOs produce very high software development efforts, which eventually produce high software development costs. Consequently, we propose its extension, ...

  1. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

  2. The effects of global warming on fisheries: Simulation estimates

    Directory of Open Access Journals (Sweden)

    Carlos A. Medel

    2016-04-01

    Full Text Available This paper develops two fisheries models in order to estimate the effect of global warming (GW) on firm value. GW is defined as an increase in the average temperature of the Earth's surface as a result of CO2 emissions. It is assumed that (i) GW exists, and (ii) higher temperatures negatively affect biomass. The literature on biology and GW supporting these two crucial assumptions is reviewed. The main argument presented is that temperature increase has two effects on biomass, both of which have an impact on firm value. First, higher temperatures cause biomass to oscillate. To measure the effect of biomass oscillation on firm value, the model in [1] is modified to include water temperature as a variable. The results indicate that a 1 to 20% variation in biomass causes firm value to fall by 6 to 44%, respectively. Second, higher temperatures reduce biomass, and a modification of the model in [2] reveals that an increase in temperature anomaly of between +1 and +8°C causes fishing firm value to decrease by 8 to 10%.

  3. ORIGINAL ARTICLE Estimation of annual occupational effective ...

    African Journals Online (AJOL)

    Nagasaki nuclear bomb survivors, who have demonstrated increased ... atomic numbers as soft tissue, and their energy responses to absorbed radiation show little ... suitable thermal treatment, making them cost-effective and viable in the long ...

  4. Estimation of additive and dominance variance for reproductive traits from different models in Duroc purebred

    Directory of Open Access Journals (Sweden)

    Talerngsak Angkuraseranee

    2010-05-01

    Full Text Available The additive and dominance genetic variances of 5,801 Duroc reproductive and growth records were estimated using BLUPF90 PC-PACK. Estimates were obtained for number born alive (NBA), birth weight (BW), number weaned (NW), and weaning weight (WW). Data were analyzed using two mixed model equations. The first model included fixed effects and random effects identifying inbreeding depression, additive gene effects and permanent environment effects. The second model was similar to the first model, but included the dominance genotypic effect. Heritability estimates of NBA, BW, NW and WW from the two models were 0.1558/0.1716, 0.1616/0.1737, 0.0372/0.0874 and 0.1584/0.1516, respectively. Proportions of dominance effect to total phenotypic variance from the dominance model were 0.1024, 0.1625, 0.0470, and 0.1536 for NBA, BW, NW and WW, respectively. Dominance effects were found to have a sizable influence on the litter size traits analyzed. Therefore, genetic evaluation with the dominance model (Model 2) is found more appropriate than the animal model (Model 1).
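    For reference, the reported ratios are simple functions of the estimated variance components. A minimal sketch with illustrative component values (not the paper's estimates):

```python
def variance_ratios(var_a, var_d, var_pe, var_e):
    """Narrow-sense heritability h^2 and dominance proportion d^2 from
    additive, dominance, permanent-environment and residual components."""
    var_p = var_a + var_d + var_pe + var_e
    return var_a / var_p, var_d / var_p

h2, d2 = variance_ratios(var_a=0.16, var_d=0.15, var_pe=0.10, var_e=0.59)
print(f"h^2 = {h2:.3f}, d^2 = {d2:.3f}")
```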

  5. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. K.; Kang, G. B.; Ko, W. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycle in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because the nuclear fuel cycle cost may vary in each country, and the estimated cost usually prevails over the real cost, any existing uncertainty needs to be removed where possible to produce reliable cost information when evaluating economic efficiency. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (high-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising in nuclear fuel cycle cost evaluation from the viewpoint of the cost estimation model. Compared with the same discount rate model, the nuclear fuel cycle cost of the different discount rate model is reduced, because the generation quantity in the denominator of the cost equation has also been discounted. That is, if the discount rate is reduced in the back-end processes of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that the cost of the same discount rate model is overestimated compared with the different discount rate model as a whole.
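    The denominator effect described here can be reproduced with a small levelized-cost sketch; the cash-flow profile and discount rates below are invented for illustration and are not the paper's data.

```python
import numpy as np

def levelized_cost(costs, generation, r_cost, r_gen):
    """Levelized unit cost with separate discount rates for costs and for
    generation; r_cost == r_gen recovers the same-discount-rate model."""
    t = np.arange(len(costs))
    pv_cost = np.sum(costs / (1 + r_cost) ** t)
    pv_gen = np.sum(generation / (1 + r_gen) ** t)
    return pv_cost / pv_gen

costs = np.array([100.0] * 10 + [300.0] * 5)   # back-end costs arrive late
gen = np.array([50.0] * 10 + [0.0] * 5)        # generation ends before disposal
print("same rates     :", levelized_cost(costs, gen, 0.05, 0.05))
print("different rates:", levelized_cost(costs, gen, 0.05, 0.03))
```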

  6. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    International Nuclear Information System (INIS)

    Kim, S. K.; Kang, G. B.; Ko, W. I.

    2013-01-01

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycle in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because the nuclear fuel cycle cost may vary in each country, and the estimated cost usually prevails over the real cost, any existing uncertainty needs to be removed where possible to produce reliable cost information when evaluating economic efficiency. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (high-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising in nuclear fuel cycle cost evaluation from the viewpoint of the cost estimation model. Compared with the same discount rate model, the nuclear fuel cycle cost of the different discount rate model is reduced, because the generation quantity in the denominator of the cost equation has also been discounted. That is, if the discount rate is reduced in the back-end processes of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that the cost of the same discount rate model is overestimated compared with the different discount rate model as a whole

  7. Improving Frozen Precipitation Density Estimation in Land Surface Modeling

    Science.gov (United States)

    Sparrow, K.; Fall, G. M.

    2017-12-01

    The Office of Water Prediction (OWP) produces high-value water supply and flood risk planning information through the use of operational land surface modeling. Improvements in diagnosing frozen precipitation density will benefit the NWS's meteorological and hydrological services by refining estimates of a significant and vital input into land surface models. A current common practice for handling the density of snow accumulation in a land surface model is to use a standard 10:1 snow-to-liquid-equivalent ratio (SLR). Our research findings suggest the possibility of a more skillful approach for assessing the spatial variability of precipitation density. We developed a 30-year SLR climatology for the coterminous US from version 3.22 of the Global Historical Climatology Network - Daily (GHCN-D) dataset. Our methods followed the approach described by Baxter (2005) to estimate mean climatological SLR values at GHCN-D sites in the US, Canada, and Mexico for the years 1986-2015. In addition to the Baxter criteria, the following refinements were made: tests were performed to eliminate SLR outliers and frequent reports of SLR = 10, a linear SLR vs. elevation trend was fitted to station SLR mean values to remove the elevation trend from the data, and detrended SLR residuals were interpolated using ordinary kriging with a spherical semivariogram model. The elevation values of each station were based on the GMTED 2010 digital elevation model and the elevation trend in the data was established via linear least squares approximation. The ordinary kriging procedure was used to interpolate the data into gridded climatological SLR estimates for each calendar month at a 0.125 degree resolution. To assess the skill of this climatology, we compared estimates from our SLR climatology with observations from the GHCN-D dataset to consider the potential use of this climatology as a first guess of frozen precipitation density in an operational land surface model. The difference in
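    The detrend-then-krige step can be sketched as follows, assuming the third-party PyKrige package for ordinary kriging and synthetic station values in place of GHCN-D observations.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # assumed dependency (PyKrige)

rng = np.random.default_rng(3)
lon, lat = rng.uniform(-120, -110, 80), rng.uniform(38, 44, 80)
elev = rng.uniform(200, 3000, 80)
slr = 8.0 + 0.002 * elev + rng.normal(0, 1.0, 80)    # toy station SLR means

slope, intercept = np.polyfit(elev, slr, 1)          # linear elevation trend
resid = slr - (slope * elev + intercept)             # detrended residuals

ok = OrdinaryKriging(lon, lat, resid, variogram_model="spherical")
gx, gy = np.linspace(-120, -110, 81), np.linspace(38, 44, 49)
resid_grid, _ = ok.execute("grid", gx, gy)

grid_elev = np.full((49, 81), 1500.0)                # stand-in for GMTED2010 grid
slr_grid = resid_grid + (slope * grid_elev + intercept)  # re-apply the trend
print(slr_grid.shape)
```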

  8. Using local multiplicity to improve effect estimation from a hypothesis-generating pharmacogenetics study.

    Science.gov (United States)

    Zou, W; Ouyang, H

    2016-02-01

    We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to jointly model individual effect estimates from maximum likelihood estimation (MLE) in a region and shrink them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA, and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced the MSE of MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
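    The shrinkage mechanism can be conveyed with a normal-normal empirical-Bayes approximation; this is only a sketch of the idea, since the actual MEA method is a fully hierarchical Bayesian model (all numbers illustrative).

```python
import numpy as np

def shrink_to_regional_effect(est, se):
    """Shrink per-SNP MLEs toward a regional effect, with weights set by
    a moment estimate of the between-SNP variance."""
    mu = np.average(est, weights=1.0 / se**2)          # regional effect
    tau2 = max(0.0, np.var(est) - np.mean(se**2))      # between-SNP variance
    w = tau2 / (tau2 + se**2)                          # shrinkage weights
    return w * est + (1 - w) * mu

est = np.array([1.9, 0.3, -0.2, 0.1, 0.4])             # MLEs; first is the "top" hit
se = np.array([0.6, 0.5, 0.5, 0.5, 0.5])
print(shrink_to_regional_effect(est, se))              # extreme estimate pulled in
```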

  9. Cost estimation model for advanced planetary programs, fourth edition

    Science.gov (United States)

    Spadoni, D. J.

    1983-01-01

    The development of the planetary program cost model is discussed. The model was updated to incorporate cost data from the most recent US planetary flight projects and extensively revised to more accurately capture the information in the historical cost data base, which comprises the historical cost data for 13 unmanned lunar and planetary flight programs. The revision was made with a twofold objective: to increase the model's flexibility in dealing with the broad scope of scenarios under consideration for future missions, and to maintain and possibly improve confidence in the model's capabilities, with an expected accuracy of 20%. Model development included a labor/cost proxy analysis, selection of the functional forms of the estimating relationships, and test statistics. An analysis of the model is discussed and two sample applications of the cost model are presented.

  10. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step forward for the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
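    The likelihood-based evaluation described here can be sketched as a mean negative log predictive density (NLPD) under per-point Gaussian predictions; the data and the two hypothetical models below are synthetic.

```python
import numpy as np

def mean_nlpd(y, mu, var):
    """Average negative log predictive density of held-out measurements
    under Gaussian predictions with mean `mu` and variance `var`."""
    return np.mean(0.5 * np.log(2 * np.pi * var) + 0.5 * (y - mu) ** 2 / var)

rng = np.random.default_rng(4)
y = rng.normal(0.0, 2.0, 1000)                 # held-out concentrations
mu = np.zeros_like(y)                          # both models share the mean
print("overconfident  :", mean_nlpd(y, mu, np.full_like(y, 0.5)))
print("well-calibrated:", mean_nlpd(y, mu, np.full_like(y, 4.0)))
```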

  11. Eigenspace perturbations for structural uncertainty estimation of turbulence closure models

    Science.gov (United States)

    Jofre, Lluis; Mishra, Aashwin; Iaccarino, Gianluca

    2017-11-01

    With the present state of computational resources, a purely numerical resolution of turbulent flows encountered in engineering applications is not viable. Consequently, investigations into turbulence rely on various degrees of modeling. Archetypal amongst these variable-resolution approaches would be RANS models in two-equation closures, and subgrid-scale models in LES. However, owing to the simplifications introduced during model formulation, the fidelity of all such models is limited, and therefore the explicit quantification of the predictive uncertainty is essential. In such a scenario, the ideal uncertainty estimation procedure must be agnostic to modeling resolution, methodology, and the nature or level of the model filter. The procedure should be able to give reliable prediction intervals for different Quantities of Interest, over varied flows and flow conditions, and at diametric levels of modeling resolution. In this talk, we present and substantiate the Eigenspace perturbation framework as an uncertainty estimation paradigm that meets these criteria. Commencing from a broad overview, we outline the details of this framework at different modeling resolutions. Then, using benchmark flows along with engineering problems, the efficacy of this procedure is established. This research was partially supported by NNSA under the Predictive Science Academic Alliance Program (PSAAP) II, and by DARPA under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).
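    A minimal sketch of the eigenvalue part of such a perturbation (the framework also perturbs eigenvector alignment and turbulence kinetic energy, both omitted here; all numbers are illustrative):

```python
import numpy as np

def perturb_anisotropy(R, k, corner, delta):
    """Move the eigenvalues of the Reynolds-stress anisotropy tensor a
    fraction `delta` toward a limiting state and rebuild the stresses."""
    a = R / (2.0 * k) - np.eye(3) / 3.0            # anisotropy tensor
    lam, V = np.linalg.eigh(a)                     # eigenvalues ascending
    lam_new = (1 - delta) * lam + delta * np.asarray(corner)
    return 2.0 * k * (V @ np.diag(lam_new) @ V.T + np.eye(3) / 3.0)

ONE_COMP = np.array([-1/3, -1/3, 2/3])             # one-component limiting state
R = np.diag([0.8, 0.6, 0.6])                       # toy Reynolds stresses
k = 0.5 * np.trace(R)
print(perturb_anisotropy(R, k, ONE_COMP, delta=0.25))
```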

  12. A comparison of estimated and calculated effective porosity

    Science.gov (United States)

    Stephens, Daniel B.; Hsu, Kuo-Chin; Prieksat, Mark A.; Ankeny, Mark D.; Blandford, Neil; Roth, Tracy L.; Kelsey, James A.; Whitworth, Julia R.

    Effective porosity in solute-transport analyses is usually estimated rather than calculated from tracer tests in the field or laboratory. Calculated values of effective porosity in the laboratory on three different textured samples were compared to estimates derived from particle-size distributions and soil-water characteristic curves. The agreement was poor and it seems that no clear relationships exist between effective porosity calculated from laboratory tracer tests and effective porosity estimated from particle-size distributions and soil-water characteristic curves. A field tracer test in a sand-and-gravel aquifer produced a calculated effective porosity of approximately 0.17. By comparison, estimates of effective porosity from textural data, moisture retention, and published values were approximately 50-90% greater than the field-calibrated value. Thus, estimation of effective porosity for chemical transport is highly dependent on the chosen transport model and is best obtained by laboratory or field tracer tests.

  13. Application of isotopic information for estimating parameters in Philip infiltration model

    Directory of Open Access Journals (Sweden)

    Tao Wang

    2016-10-01

    Full Text Available Minimizing parameter uncertainty is crucial in the application of hydrologic models. Isotopic information in various hydrologic components of the water cycle can expand our knowledge of the dynamics of water flow in the system, provide additional information for parameter estimation, and improve parameter identifiability. This study combined the Philip infiltration model with an isotopic mixing model using an isotopic mass balance approach for estimating parameters in the Philip infiltration model. Two approaches to parameter estimation were compared: (a) using isotopic information to determine the soil water transmission and then hydrologic information to estimate the soil sorptivity, and (b) using hydrologic information to determine both the soil water transmission and the soil sorptivity. Results of parameter estimation were verified through a rainfall infiltration experiment in a laboratory under rainfall with constant isotopic compositions and uniform initial soil water content conditions. Experimental results showed that approach (a), using isotopic and hydrologic information, estimated the soil water transmission in the Philip infiltration model in a manner that matched measured values well. The results of parameter estimation with approach (a) were better than those with approach (b). It was also found that the analytical precision of hydrogen and oxygen stable isotopes had a significant effect on parameter estimation using isotopic information.
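    Approach (b), the purely hydrologic fit, reduces to linear least squares on the Philip model I(t) = S*sqrt(t) + A*t. A sketch with synthetic infiltration data (approach (a) would instead fix the transmission term A from the isotopic mass balance and fit only S):

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.1, 2.0, 40)                     # time (h)
S_true, A_true = 2.0, 0.5                         # sorptivity, transmission
I = S_true * np.sqrt(t) + A_true * t + rng.normal(0, 0.05, t.size)

X = np.column_stack([np.sqrt(t), t])              # regressors for S and A
(S_hat, A_hat), *_ = np.linalg.lstsq(X, I, rcond=None)
print(f"S = {S_hat:.3f} cm/h^0.5, A = {A_hat:.3f} cm/h")
```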

  14. Can genetic estimators provide robust estimates of the effective number of breeders in small populations?

    Directory of Open Access Journals (Sweden)

    Marion Hoehn

    Full Text Available The effective population size (Ne) is proportional to the loss of genetic diversity and the rate of inbreeding, and its accurate estimation is crucial for the monitoring of small populations. Here, we integrate temporal studies of the gecko Oedura reticulata to compare genetic and demographic estimators of Ne. Because geckos have overlapping generations, our goal was to demographically estimate NbI, the inbreeding effective number of breeders, and to calculate the NbI/Na ratio (Na = number of adults) for four populations. Demographically estimated NbI ranged from 1 to 65 individuals. The mean reduction in the effective number of breeders relative to census size (NbI/Na) was 0.1 to 1.1. We identified the variance in reproductive success as the most important variable contributing to reduction of this ratio. We used four methods to estimate the genetic-based inbreeding effective number of breeders, NbI(gen), and the variance effective population size, NeV(gen), from the genotype data. Two of these methods - a temporal moment-based (MBT) and a likelihood-based approach (TM3) - require at least two samples in time, while the other two are single-sample estimators - the linkage disequilibrium method with bias correction, LDNe, and the program ONeSAMP. The genetic-based estimates were fairly similar across methods and also similar to the demographic estimates, excluding those estimates in which upper confidence interval boundaries were uninformative. For example, LDNe and ONeSAMP estimates ranged from 14-55 and 24-48 individuals, respectively. However, temporal methods suffered from a large variation in confidence intervals and concerns about the prior information. We conclude that the single-sample estimators are an acceptable short-cut to estimate NbI for species such as geckos and will be of great importance for the monitoring of species in fragmented landscapes.

  15. Estimation of Biological Effects of Tritium.

    Science.gov (United States)

    Umata, Toshiyuki

    2017-01-01

    Nuclear fusion technology is expected to create new energy in the future. However, nuclear fusion requires a large amount of tritium as a fuel, leading to concern about the exposure of radiation workers to tritium beta radiation. Furthermore, countermeasures for tritium-polluted water produced in decommissioning the reactor at the Fukushima Daiichi Nuclear Power Station may potentially cause health problems in radiation workers. Although internal exposure to tritium can be assumed to occur at a low dose/low dose rate, the biological effect of tritium exposure is not negligible, because tritiated water (HTO) taken into the body via the mouth, inhalation, or skin leads to a homogeneous distribution throughout the whole body. Furthermore, organically bound tritium (OBT) stays in the body as part of the molecules that comprise living organisms, resulting in long-term exposure, and so the chemical form of tritium should be considered. To evaluate the biological effect of tritium, the effect should be compared with that of other radiation types. Many studies have examined the relative biological effectiveness (RBE) of tritium. Hence, we report the RBE obtained with radiation carcinogenesis, which is classified as a stochastic effect and serves as a reference for cancer risk. We also outline the tritium experiments and the principle of a recently developed animal experimental system using transgenic mice to detect the biological influence of radiation exposure at a low dose/low dose rate.

  16. Bayes estimation of the general hazard rate model

    International Nuclear Information System (INIS)

    Sarhan, A.

    1999-01-01

    In reliability theory and life testing models, the lifetime distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + bt^(c-1), where c, a, b are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a,b) based on data from type II/item-censored testing without replacement are obtained. A large simulation study using the Monte Carlo method is carried out to compare the performance of the Bayes estimators with regression estimators of (a,b). The criterion for comparison is the Bayes risk associated with the respective estimator. Also, the influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is investigated. Estimates for the parameters (a,b) of the linearly increasing hazard rate model h(t) = a + bt, where a, b are greater than zero, can be obtained as the special case c = 2.
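    A grid-based sketch of the Bayes estimates under flat priors and known c (an illustration only; the paper's priors, censoring bookkeeping, and loss function may differ):

```python
import numpy as np

def log_lik(a, b, c, t_fail, n):
    """Type II censored log likelihood for h(t) = a + b*t**(c-1), with
    cumulative hazard H(t) = a*t + (b/c)*t**c; the n - r surviving items
    are censored at the r-th failure time."""
    r, t_r = len(t_fail), t_fail[-1]
    H = lambda t: a * t + (b / c) * t ** c
    return (np.sum(np.log(a + b * t_fail ** (c - 1)))
            - np.sum(H(t_fail)) - (n - r) * H(t_r))

rng = np.random.default_rng(6)
a0, b0, c, n, r = 0.5, 1.0, 2.0, 30, 20
E = -np.log(rng.uniform(size=n))                    # unit exponentials
t = (-a0 + np.sqrt(a0**2 + 2 * b0 * E)) / b0        # invert H(t) = E for c = 2
t_fail = np.sort(t)[:r]                             # first r failures (type II)

a_grid, b_grid = np.linspace(0.05, 2, 60), np.linspace(0.05, 3, 60)
A, B = np.meshgrid(a_grid, b_grid)
L = np.vectorize(lambda a, b: log_lik(a, b, c, t_fail, n))(A, B)
post = np.exp(L - L.max()); post /= post.sum()      # normalized posterior grid
print("posterior means:", (A * post).sum(), (B * post).sum())
```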

  17. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary
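    The MMAE mechanism that GRAPE builds on can be sketched with a bank of scalar Kalman filters whose weights are updated from their measurement likelihoods; the parameter grid below is fixed, whereas GRAPE would resample it (e.g. by SGBS or LHS) as estimates converge.

```python
import numpy as np

rng = np.random.default_rng(7)
a_true, q, r_var, T = 0.9, 0.05, 0.1, 200
x, ys = 0.0, []
for _ in range(T):                                   # simulate x_k+1 = a*x_k + w
    x = a_true * x + rng.normal(0, np.sqrt(q))
    ys.append(x + rng.normal(0, np.sqrt(r_var)))     # y_k = x_k + v

a_models = np.array([0.5, 0.7, 0.9, 1.0])            # hypothesized parameter grid
w = np.full(len(a_models), 1.0 / len(a_models))      # model weights
xh, P = np.zeros(len(a_models)), np.ones(len(a_models))
for y in ys:
    xh, P = a_models * xh, a_models**2 * P + q       # per-model predict
    S = P + r_var                                    # innovation variances
    innov = y - xh
    like = np.exp(-0.5 * innov**2 / S) / np.sqrt(2 * np.pi * S)
    w = w * like / np.sum(w * like)                  # Bayes weight update
    K = P / S
    xh, P = xh + K * innov, (1 - K) * P              # per-model correct
print(dict(zip(a_models, np.round(w, 3))))           # mass near the true 0.9
```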

  18. Gridded rainfall estimation for distributed modeling in western mountainous areas

    Science.gov (United States)

    Moreda, F.; Cong, S.; Schaake, J.; Smith, M.

    2006-05-01

    Estimation of precipitation in mountainous areas continues to be problematic. It is well known that radar-based methods are limited due to beam blockage. In these areas, in order to run a distributed model that accounts for spatially variable precipitation, we have generated hourly gridded rainfall estimates from gauge observations. These estimates will be used as basic data sets to support the second phase of the NWS-sponsored Distributed Hydrologic Model Intercomparison Project (DMIP 2). One of the major foci of DMIP 2 is to better understand the modeling and data issues in western mountainous areas in order to provide better water resources products and services to the Nation. We derive precipitation estimates using three data sources for the period of 1987-2002: 1) hourly cooperative observer (coop) gauges, 2) daily total coop gauges and 3) SNOw pack TELemetry (SNOTEL) daily gauges. The daily values are disaggregated using the hourly gauge values and then interpolated to approximately 4 km grids using an inverse-distance method. Following this, the estimates are adjusted to match monthly mean values from the Parameter-elevation Regressions on Independent Slopes Model (PRISM). Several analyses are performed to evaluate the gridded estimates for DMIP 2 experiments. These gridded inputs are used to generate mean areal precipitation (MAPX) time series for comparison to the traditional mean areal precipitation (MAP) time series derived by the NWS' California-Nevada River Forecast Center for model calibration. We use two of the DMIP 2 basins in California and Nevada: the North Fork of the American River (catchment area 885 sq. km) and the East Fork of the Carson River (catchment area 922 sq. km) as test areas. The basins are sub-divided into elevation zones. The North Fork American basin is divided into two zones above and below an elevation threshold. Likewise, the Carson River basin is subdivided into four zones. For each zone, the analyses include: a) overall
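    The gauge-to-grid step can be sketched with a simple inverse-distance-weighting routine (synthetic gauges and rain amounts; the disaggregation and PRISM adjustment described above would precede and follow it, respectively).

```python
import numpy as np

def idw(xy_obs, z_obs, xy_grid, power=2.0):
    """Inverse-distance-weighted interpolation of gauge values to cells."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)                      # avoid divide-by-zero at gauges
    wgt = 1.0 / d**power
    return (wgt @ z_obs) / wgt.sum(axis=1)

rng = np.random.default_rng(8)
gauges = rng.uniform(0, 100, (15, 2))            # gauge locations (km)
rain = rng.gamma(2.0, 1.5, 15)                   # hourly rainfall (mm)
grid = np.stack(np.meshgrid(np.arange(0, 100, 4.0),
                            np.arange(0, 100, 4.0)), -1).reshape(-1, 2)
print(idw(gauges, rain, grid).shape)             # one estimate per ~4 km cell
```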

  19. Online state of charge and model parameter co-estimation based on a novel multi-timescale estimator for vanadium redox flow battery

    International Nuclear Information System (INIS)

    Wei, Zhongbao; Lim, Tuti Mariana; Skyllas-Kazacos, Maria; Wai, Nyunt; Tseng, King Jet

    2016-01-01

    Highlights: • Battery model parameter and SOC co-estimation is investigated. • The model parameters and OCV are decoupled and estimated independently. • Multiple timescales are adopted to improve precision and stability. • SOC is estimated online without using the open-circuit cell. • The method is robust to aging levels, flow rates, and battery chemistries. - Abstract: A key function of a battery management system (BMS) is to provide accurate information on the state of charge (SOC) in real time, and this depends directly on precise model parameterization. In this paper, a novel multi-timescale estimator is proposed to estimate the model parameters and SOC for a vanadium redox flow battery (VRB) in real time. The model parameters and OCV are decoupled and estimated independently, effectively avoiding the possibility of cross interference between them. The analysis of model sensitivity, stability, and precision suggests the necessity of adopting different timescales for each estimator independently. Experiments are conducted to assess the performance of the proposed method. Results reveal that the model parameters are adapted online accurately, so that periodic recalibration can be avoided. The online estimated terminal voltage and SOC are both benchmarked against reference values. The proposed multi-timescale estimator has the merits of fast convergence, high precision, and good robustness against initialization uncertainty, aging states, flow rates, and battery chemistries.

  20. A Model of Gravity Vector Measurement Noise for Estimating Accelerometer Bias in Gravity Disturbance Compensation

    Science.gov (United States)

    Cao, Juliang; Cai, Shaokun; Wu, Meiping; Lian, Junxiang

    2018-01-01

    Compensation of the gravity disturbance can improve the precision of inertial navigation, but the effect of compensation decreases due to accelerometer bias, so estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation and establishes the situation in which the accelerometer bias should be estimated. The accelerometer bias is estimated from the gravity vector measurement, and a model of the measurement noise in gravity vector measurement is built. Based on this model, the accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through the EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimates of the accelerometer bias can be obtained with the proposed method. PMID:29547552
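    The least-squares separation can be illustrated as follows, assuming repeated static gravity-vector measurements at a known reference (synthetic numbers; the paper's measurement-noise model is more detailed than the white noise used here).

```python
import numpy as np

rng = np.random.default_rng(9)
g_ref = np.array([0.0, 0.0, -9.80665])              # reference gravity (m/s^2)
b_true = np.array([3e-4, -2e-4, 5e-4])              # unknown accelerometer bias
meas = g_ref + b_true + rng.normal(0, 1e-3, (500, 3))

# Least squares on  g_meas_i - g_ref = I_3 * b + noise_i
H = np.tile(np.eye(3), (500, 1))                    # stacked design matrix
b_hat, *_ = np.linalg.lstsq(H, (meas - g_ref).ravel(), rcond=None)
print("bias estimate:", b_hat)
```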